From nginx-forum at nginx.us Thu Jan 1 00:06:59 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Wed, 31 Dec 2014 19:06:59 -0500
Subject: Happy new 2015 !
Message-ID: <987fd2a86334ea6da0e230760a9d3bf9.NginxMailingListEnglish@forum.nginx.org>

And may your nginx keep performing no matter which OS it's running on!

Also, from the support staff at the forums and all contributors: have a good one!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255924,255924#msg-255924

From nginx-forum at nginx.us Thu Jan 1 19:44:43 2015
From: nginx-forum at nginx.us (saravsars)
Date: Thu, 01 Jan 2015 14:44:43 -0500
Subject: Timeout for whole request body
Message-ID: <10126d36ea5bcde3bb2cd6f34adc325d.NginxMailingListEnglish@forum.nginx.org>

Nginx provides client_body_timeout, which only covers the period between two successive read operations, but in one of our use cases we want a timeout for the whole request body. To implement this, we changed the source to add a new timer. We would like to know whether this approach is correct. Please correct me if there are any issues in the following code.
Thanks

diff -bur src/core/ngx_connection.c src/core/ngx_connection.c
--- src/core/ngx_connection.c	2014-12-19 15:33:48.000000000 +0530
+++ src/core/ngx_connection.c	2015-01-02 00:18:19.000000000 +0530
@@ -884,6 +884,10 @@
         ngx_del_timer(c->write);
     }

+    if (c->read_full->timer_set) {
+        ngx_del_timer(c->read_full);
+    }
+
     if (ngx_del_conn) {
         ngx_del_conn(c, NGX_CLOSE_EVENT);
diff -bur src/core/ngx_connection.h src/core/ngx_connection.h
--- src/core/ngx_connection.h	2014-12-19 15:33:48.000000000 +0530
+++ src/core/ngx_connection.h	2015-01-02 00:42:41.000000000 +0530
@@ -114,6 +114,7 @@
     void            *data;
     ngx_event_t     *read;
     ngx_event_t     *write;
+    ngx_event_t     *read_full;

     ngx_socket_t     fd;
diff -bur src/http/ngx_http_request_body.c src/http/ngx_http_request_body.c
--- src/http/ngx_http_request_body.c	2014-12-19 15:33:48.000000000 +0530
+++ src/http/ngx_http_request_body.c	2015-01-02 00:53:37.000000000 +0530
@@ -27,6 +27,7 @@
 static ngx_int_t ngx_http_request_body_save_filter(ngx_http_request_t *r,
     ngx_chain_t *in);
+static void ngx_http_full_body_timer_handler(ngx_event_t *wev);

 ngx_int_t
 ngx_http_read_client_request_body(ngx_http_request_t *r,
@@ -355,6 +356,15 @@
     clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
     ngx_add_timer(c->read, clcf->client_body_timeout);

+    if (c->read_full == NULL) {
+        c->read_full = ngx_pcalloc(c->pool, sizeof(ngx_event_t));
+        c->read_full->handler = ngx_http_full_body_timer_handler;
+        c->read_full->data = r;
+        c->read_full->log = r->connection->log;
+        ngx_add_timer(c->read_full, 10000);
+    }
+
     if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
         return NGX_HTTP_INTERNAL_SERVER_ERROR;
     }
@@ -1081,3 +1091,13 @@

     return NGX_OK;
 }
+
+static void ngx_http_full_body_timer_handler(ngx_event_t *wev)
+{
+    if (wev->timedout) {
+        ngx_http_request_t *r;
+        r = wev->data;
+        //ngx_close_connection(r->connection);
+        ngx_http_finalize_request(r, NGX_HTTP_REQUEST_TIME_OUT);
+    }
+}
diff -bur src/http/ngx_http_request.c src/http/ngx_http_request.c
--- src/http/ngx_http_request.c	2014-12-19 15:33:48.000000000 +0530
+++ src/http/ngx_http_request.c	2015-01-02 00:24:32.000000000 +0530
@@ -2263,6 +2263,10 @@
         if (c->write->timer_set) {
             ngx_del_timer(c->write);
         }
+
+        if (c->read_full->timer_set) {
+            ngx_del_timer(c->read_full);
+        }
     }

     c->read->handler = ngx_http_request_handler;
@@ -2376,6 +2380,10 @@
         ngx_del_timer(c->write);
     }

+    if (c->read_full->timer_set) {
+        ngx_del_timer(c->read_full);
+    }
+
     if (c->read->eof) {
         ngx_http_close_request(r, 0);
         return;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255925,255925#msg-255925

From nginx-forum at nginx.us Thu Jan 1 20:56:33 2015
From: nginx-forum at nginx.us (xdiaod)
Date: Thu, 01 Jan 2015 15:56:33 -0500
Subject: http module handler, chain buffer and output_filter
In-Reply-To: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org>
References: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

In my handler function, the buffer has the value 0 for its last_buf property. I do not understand this, as I assign 1 in its initialisation. Does someone know why it behaves like that?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255866,255926#msg-255926

From nginx-forum at nginx.us Thu Jan 1 21:00:27 2015
From: nginx-forum at nginx.us (xdiaod)
Date: Thu, 01 Jan 2015 16:00:27 -0500
Subject: http module handler, chain buffer and output_filter
In-Reply-To:
References: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2856ed24272530ddbe856307ae735425.NginxMailingListEnglish@forum.nginx.org>

Ok, I have reread my init function XD. I was assigning a value somewhere random in memory XD; I was really tired when I did that. Thank you to those who have read my post.
Have a happy new year :D

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255866,255927#msg-255927

From nginx-forum at nginx.us Fri Jan 2 01:54:26 2015
From: nginx-forum at nginx.us (xdiaod)
Date: Thu, 01 Jan 2015 20:54:26 -0500
Subject: http module handler, chain buffer and output_filter
In-Reply-To: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org>
References: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Finally, I have to change it all, as I need a multithreaded environment XD

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255866,255928#msg-255928

From ivan at pangea.org Fri Jan 2 15:13:40 2015
From: ivan at pangea.org (Ivan Vilata i Balaguer)
Date: Fri, 2 Jan 2015 16:13:40 +0100
Subject: proxy_pass ignoring gai.conf/RFC3484
Message-ID: <20150102151340.GA2141@sax.selidor.net>

Hi everyone (and a happy new year!),

I'm trying to set up NginX as a reverse proxy to an internal machine which has both private IPv4 and ULA IPv6 addresses, both resolvable from the same name ``internal_machine`` to A and AAAA entries in our local DNS servers. Outbound connections are still using IPv4, but I want to phase out our private IPv4 ones in favour of ULA IPv6, thus I'm using ``/etc/gai.conf`` to leverage the mechanism described in [RFC 3484][] to configure ``getaddrinfo()`` responses. This is my configuration:

    precedence ::1/128             50  # loopback IPv6 first
    precedence fdf4:7759:a7d2::/48 47  # then our ULA IPv6 range
    precedence ::ffff:0:0/96       45  # then IPv4 (private and public)
    precedence ::/0                40  # then IPv6 ...
    precedence 2002::/16           30
    precedence ::/96               20

[RFC 3484]: http://tools.ietf.org/html/rfc3484

This configuration seems to be correct, i.e. running ``getent ahosts internal_machine`` puts ULA IPv6 addresses before private IPv4. If I exchange the priorities of ULA IPv6 and IPv4, the command puts IPv4 addresses first. So far so good.
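The ordering that getent reports can also be checked one level lower. The following is a small diagnostic sketch (written for this illustration, not taken from the thread) that prints the addresses getaddrinfo() returns in resolver order, which is the order gai.conf/RFC 3484 shapes and the order a client is expected to try:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>

/* Print the addresses getaddrinfo() returns for a name, in the order the
 * resolver hands them back. Defaults to "localhost" if no argument given. */
int main(int argc, char **argv)
{
    const char *name = argc > 1 ? argv[1] : "localhost";
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;       /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(name, NULL, &hints, &res) != 0) {
        fprintf(stderr, "resolution failed\n");
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        void *addr = (p->ai_family == AF_INET)
            ? (void *) &((struct sockaddr_in *) p->ai_addr)->sin_addr
            : (void *) &((struct sockaddr_in6 *) p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof(buf));
        printf("%s\n", buf);
    }
    freeaddrinfo(res);
    return 0;
}
```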
BUT if I configure NginX with ``proxy_pass http://internal_machine;``, it always insists in using the IPv4 address first, regardless of what ``gai.conf`` says. The only way I have to force IPv6 first is hardwiring it in the URL (which is ugly) or including the resolution in ``/etc/hosts`` (which disperses configuration). Is this behaviour expected? Maybe I missed some configuration aspect? I'm currently using: # nginx -V # from Debian Wheezy backports nginx version: nginx/1.6.2 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector \ --param=ssp-buffer-size=4 -Wformat -Werror=format-security \ -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro \ --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf \ --http-log-path=/var/log/nginx/access.log \ --error-log-path=/var/log/nginx/error.log \ --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-scgi-temp-path=/var/lib/nginx/scgi \ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug \ --with-pcre-jit --with-ipv6 --with-http_ssl_module \ --with-http_stub_status_module --with-http_realip_module \ --with-http_auth_request_module --with-http_addition_module \ --with-http_dav_module --with-http_geoip_module \ --with-http_gzip_static_module --with-http_image_filter_module \ --with-http_spdy_module --with-http_sub_module \ --with-http_xslt_module --with-mail --with-mail_ssl_module \ --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam \ --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module \ --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo \ --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair \ --add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module # uname -a Linux frontend01 2.6.32-4-pve #1 SMP Mon May 9 12:59:57 CEST 2011 x86_64 
GNU/Linux I found [an nginx-devel thread][1] revolving around a similar issue, but the proposed solutions overlooked ``/etc/gai.conf``. [1]: http://www.mail-archive.com/nginx-devel%40nginx.org/msg01893.html "proxy_pass behavior" Thank you very much for your help! -- Ivan Vilata i Balaguer From nginx-forum at nginx.us Sun Jan 4 00:35:59 2015 From: nginx-forum at nginx.us (ASTRAPI) Date: Sat, 03 Jan 2015 19:35:59 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <20141225131251.GL79300@mdounin.ru> References: <20141225131251.GL79300@mdounin.ru> Message-ID: Ok all done fixed ! Thanks :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255947#msg-255947 From nginx-forum at nginx.us Sun Jan 4 14:12:40 2015 From: nginx-forum at nginx.us (nrahl) Date: Sun, 04 Jan 2015 09:12:40 -0500 Subject: Skip Location Based On Query String Parameter? In-Reply-To: <20141113231749.GG3771@daoine.org> References: <20141113231749.GG3771@daoine.org> Message-ID: > It would need testing, and it does depend on what is in the > "apache-pass" > file, but presuming that it does do "proxy_pass" and does not do > anything > that is invalid in an "if in location", then > > > location ~ ^/([a-zA-Z0-9\-]+)/ { #Use cache if possible, then > proxy pass > > if ($arg_nocache = true) { > include /etc/nginx/apache-pass; > } > > > try_files /cache/$1.html.gz /cache/$1.html @apache; > > } > > could possibly work. When trying to use if(){include} I get the error, "'include' directive is not allowed here", I guess include itself is one of those things not allowed in an if statement? I found another post on the board that says "if" isn't allowed in include because they are both block directives and no one wants to fix it because "if" is a hack anyway. 
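One untested alternative that avoids both "if" and nested locations is a map: since map is evaluated at request time, the path that try_files searches can be switched on the query argument, so when the parameter is set the cache lookup simply never matches and falls through to @apache. A sketch (the "/nocache" dummy directory is invented for this example, and it assumes the cached files live under a common root):

```nginx
# http{} context: choose the cache root per request.
map $arg_nocache $cache_root {
    default "/cache";      # normal requests: look in the cache
    "true"  "/nocache";    # ?nocache=true: dummy dir that never matches
}

# server{} context:
location ~ ^/([a-zA-Z0-9\-]+)/ {
    try_files $cache_root/$1.html.gz $cache_root/$1.html @apache;
}
```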
Maybe it can be done by using nested location blocks, something like:

location (!$arg_nc) { # the qs parameter "ns" does not exist
    location ~ ^/([a-zA-Z0-9\-]+)/ {
        try_files /cache/$1.html @apache;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,254800,255948#msg-255948

From nginx-forum at nginx.us Sun Jan 4 14:44:19 2015
From: nginx-forum at nginx.us (ASTRAPI)
Date: Sun, 04 Jan 2015 09:44:19 -0500
Subject: limit_conn module exclude also on Maxim Dunin recommended code
Message-ID: <4ced0c2d3e8f2c00f362461386b4dcbe.NginxMailingListEnglish@forum.nginx.org>

Hi

I am using this code to limit requests and exclude some IPs:

http {
    limit_req_zone $limit zone=delta:8m rate=60r/s;

    geo $limited {
        default 1;
        192.168.45.56/32 0;
        199.27.128.0/21 0;
        173.245.48.0/20 0;
    }

    map $limited $limit {
        1 $binary_remote_addr;
        0 "";
    }

And this on the domain config:

server {
    limit_req zone=delta burst=90 nodelay;

Now I have two questions:

1) Does nginx really know how to exclude IPs in this format (.0/21, e.g. 199.27.128.0/21), or must I list them individually, as 199.27.128.5 for example?

2) Now I want to use limit_conn_zone on top of the above recommendation from Maxim Dounin, like this:

http {
    limit_conn_zone $binary_remote_addr zone=alpha:8m;
    limit_req_zone $limit zone=delta:8m rate=60r/s;

    geo $limited {
        default 1;
        192.168.45.56/32 0;
        199.27.128.0/21 0;
        173.245.48.0/20 0;
    }

    map $limited $limit {
        1 $binary_remote_addr;
        0 "";
    }

And this on the domain config:

server {
    limit_conn alpha 20;
    limit_req zone=delta burst=90 nodelay;

But how can I use the above exclude list for the limit_conn module also?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255949,255949#msg-255949

From nginx-forum at nginx.us Sun Jan 4 14:46:21 2015
From: nginx-forum at nginx.us (nrahl)
Date: Sun, 04 Jan 2015 09:46:21 -0500
Subject: Skip Location Based On Query String Parameter?
In-Reply-To: References: <20141113231749.GG3771@daoine.org> Message-ID:

I found something that I think works, but it feels very hackish:

location ~ ^/([a-zA-Z0-9\-]+)/ {
    error_page 418 = @apache;    #proxy pass
    recursive_error_pages on;
    if ($arg_nc = 1) {
        return 418;
    }
    try_files /cache/$1.html @apache;
}

Is this the best (only?) way to bypass a location based on query string?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,254800,255950#msg-255950

From admin at grails.asia Sun Jan 4 23:24:31 2015
From: admin at grails.asia (jtan)
Date: Mon, 5 Jan 2015 07:24:31 +0800
Subject: Happy new 2015 !
In-Reply-To: <987fd2a86334ea6da0e230760a9d3bf9.NginxMailingListEnglish@forum.nginx.org>
References: <987fd2a86334ea6da0e230760a9d3bf9.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Happy New Year too!

On Thu, Jan 1, 2015 at 8:06 AM, itpp2012 wrote:
> And may your nginx keep performing no matter which OS it's running on !
>
> Also from the support staff at the forums and all contributors have a good
> one !
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,255924,255924#msg-255924
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

--
Freelance Grails and Java developer
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Mon Jan 5 04:17:45 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 5 Jan 2015 07:17:45 +0300
Subject: proxy_pass ignoring gai.conf/RFC3484
In-Reply-To: <20150102151340.GA2141@sax.selidor.net>
References: <20150102151340.GA2141@sax.selidor.net>
Message-ID: <20150105041745.GC47350@mdounin.ru>

Hello!

On Fri, Jan 02, 2015 at 04:13:40PM +0100, Ivan Vilata i Balaguer wrote:

[...]

> BUT if I configure NginX with ``proxy_pass http://internal_machine;``,
> it always insists in using the IPv4 address first, regardless of what
> ``gai.conf`` says.
The only way I have to force IPv6 first is > hardwiring it in the URL (which is ugly) or including the resolution in > ``/etc/hosts`` (which disperses configuration). > > Is this behaviour expected? Maybe I missed some configuration aspect? If a name in proxy_pass resolves to multiple addresses, nginx will use them all with round-robin balancing algorithm. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 5 06:04:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Jan 2015 09:04:45 +0300 Subject: How Nginx behaves with "proxy_bind" and DNS resolver with non matching ip versions between bind ip and resolved ip? In-Reply-To: <5aded18c889ccb93e16281f6a1b259d7.NginxMailingListEnglish@forum.nginx.org> References: <20141229164844.GD3656@mdounin.ru> <5aded18c889ccb93e16281f6a1b259d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150105060444.GD47350@mdounin.ru> Hello! On Tue, Dec 30, 2014 at 06:58:51AM -0500, shmulik wrote: > Thank you. > > So if i understood correctly: > > When i bind an ipv6 address, and the resolver returns 1 ipv4 address and 1 > ipv6 address - if the first attempted address is the ipv4 address, the > result will be an error + sending back to the client a "500 Internal Server > Error"? Yes. > In such scenarios, is there any way i can tell Nginx to skip the non > matching ip version? (i.e. in the above example, to skip directly to the > resolved ipv6 address). No. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jan 5 18:11:08 2015 From: nginx-forum at nginx.us (blu) Date: Mon, 05 Jan 2015 13:11:08 -0500 Subject: Location served by all virtual servers Message-ID: <62b5dd15612fc38a65a9c39cb3b18d29.NginxMailingListEnglish@forum.nginx.org> Hi, I have some configuration issue with my nginx. Currently both URLs return the same page when I open: http://domain1.com/SharedFIles and http://domain2.com/SharedFiles. 
Location "SharedFiles" is defined in only one virtual server (domain2), however it is accessible from both domains. How come? I'd like only domain2.com to serve the SharedFiles location.

What's wrong? Thank you!

Here are the two config files (domain1 and domain2) I have in sites-available:

file domain1:

server {
    listen 80; ## listen for ipv4; this line is default and implied
    root /home/pi/webapps/domain1/public_html;
    index index.html index.htm;
    server_name *.domain1.com;
}

file domain2:

server {
    listen 80;
    server_name *.domain2.com;

    access_log /home/pi/webapps/domain2/logs/nginx-access.log;
    error_log /home/pi/webapps/domain2/logs/nginx-error.log;

    location /SharedFiles {
        root /media/Seagate/Video;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        autoindex on;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255955,255955#msg-255955

From semenukha at gmail.com Mon Jan 5 18:47:21 2015
From: semenukha at gmail.com (Styopa Semenukha)
Date: Mon, 05 Jan 2015 13:47:21 -0500
Subject: Location served by all virtual servers
In-Reply-To: <62b5dd15612fc38a65a9c39cb3b18d29.NginxMailingListEnglish@forum.nginx.org>
References: <62b5dd15612fc38a65a9c39cb3b18d29.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1692580.Ul924FacvZ@tornado>

On Monday, January 05, 2015 01:11:08 PM blu wrote:
> Hi, I have some configuration issue with my nginx. Currently both URLs
> return the same page when I open:
> http://domain1.com/SharedFIles and http://domain2.com/SharedFiles.
>
> Location "SharedFiles" is defined in only one virtual server (domain2),
> however it is accessible from both domains. How come?
> I'd like only domain2.com to serve the SharedFiles location.
>
> What's wrong? Thank you!
>
> Here are the two config files (domain1 and domain2) I have in sites-available:
>
> file domain1:
> server {
>     listen 80; ## listen for ipv4; this line is default and implied
>     root /home/pi/webapps/domain1/public_html;
>     index index.html index.htm;
>     server_name *.domain1.com;
> }
>
> file domain2:
> server {
>     listen 80;
>     server_name *.domain2.com;
>
>     access_log /home/pi/webapps/domain2/logs/nginx-access.log;
>     error_log /home/pi/webapps/domain2/logs/nginx-error.log;
>
>     location /SharedFiles {
>         root /media/Seagate/Video;
>         auth_basic "Restricted";
>         auth_basic_user_file /etc/nginx/.htpasswd;
>         autoindex on;
>     }
> }

The hostname "domain1.com" is NOT matched by the wildcard "*.domain1.com" (this only matches subdomains), so it gets served by the default virtual host. Since you don't have an explicit definition of the default vhost, it's the first one (most likely, alphabetically). In your case, the default one is "*.domain2.com".

Solution: add "domain1.com" and "domain2.com" server names to your config.
--
Best regards,
Styopa Semenukha.

From me at myconan.net Mon Jan 5 18:51:14 2015
From: me at myconan.net (Edho Arief)
Date: Tue, 6 Jan 2015 03:51:14 +0900
Subject: Location served by all virtual servers
In-Reply-To: <1692580.Ul924FacvZ@tornado>
References: <62b5dd15612fc38a65a9c39cb3b18d29.NginxMailingListEnglish@forum.nginx.org> <1692580.Ul924FacvZ@tornado>
Message-ID:

On Tue, Jan 6, 2015 at 3:47 AM, Styopa Semenukha wrote:
> The hostname "domain1.com" is NOT matched by the wildcard "*.domain1.com" (this
> only matches subdomains), so it gets served by the default virtual host. Since
> you don't have an explicit definition of the default vhost, it's the first one
> (most likely, alphabetically). In your case, the default one is
> "*.domain2.com".
>
> Solution: add "domain1.com" and "domain2.com" server names to your config.
or use `.domain1.com` instead of `*.domain1.com` as documented in http://nginx.org/r/server_name From list_nginx at bluerosetech.com Mon Jan 5 20:10:29 2015 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Mon, 05 Jan 2015 12:10:29 -0800 Subject: Use of Certs In-Reply-To: References: Message-ID: <54AAEFB5.9040201@bluerosetech.com> On 12/29/2014 11:36 AM, Peter Fraser wrote: > Hi All > I am very new to nginx and am currently doing a lot of reading but would > just love to have a nudge in the right direction > > I want to set up nginx as a reverse proxy for about three IIS servers > behind a firewall. > One of them is a public web server that handles secure logins. It is > configured with a certificate signed by a CA. Do I need to import the > web server's private key on to the nginx box or is this something I > don't need to worry about? If you want nginx to proxy HTTPS connections, it needs to be the SSL endpoint. In that case, nginx needs the certificate and key so it presents the correct credentials to the client. Without it, the most you could do is port-forward 443 on the nginx box to the secure server behind it (i.e., no proxying at all). From kpariani at zimbra.com Mon Jan 5 23:04:52 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Mon, 5 Jan 2015 17:04:52 -0600 (CST) Subject: resolver directive doesn't fallback to the system DNS resolver Message-ID: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> Hello, I am looking at how to use nginx's resolver directive (http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) to address this one issue i am facing. I have a host for which there is already an entry in the system DNS resolver (verified using nslookup/dig) but when i specify the same host in the proxy_pass directive inside a location block, i get the following error thrown in nginx.log 015/01/05 14:24:13 [error] 22560#0: *5 no resolver defined to resolve ... 
Seems like nginx is not falling back to the system DNS resolver in case the 'resolver' directive is not used. Isn't this incorrect behaviour?

Thanks
-Kunal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Jan 6 11:06:18 2015
From: nginx-forum at nginx.us (Gona)
Date: Tue, 06 Jan 2015 06:06:18 -0500
Subject: Upstream Keepalive connection close
Message-ID: <65ef672b0fa1d3e42ed5a00831878975.NginxMailingListEnglish@forum.nginx.org>

I have an Nginx server configured with a couple of backend servers with keepalive connections enabled. I am trying to understand what Nginx's behaviour will be in case the connection is legitimately closed by an upstream server exactly when Nginx is trying to send a new request. In this race condition, does Nginx retry the request internally, or does it return an error code?

In case Nginx needs to be forced to retry, should I be using proxy_next_upstream? My understanding is that this setting will make the request retried on the next server in the upstream block. On the same note, how do I force the retry on the failed server first, to avoid cache misses?

Thanks,
Gopala

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255966,255966#msg-255966

From nginx-forum at nginx.us Tue Jan 6 13:34:53 2015
From: nginx-forum at nginx.us (blu)
Date: Tue, 06 Jan 2015 08:34:53 -0500
Subject: Location served by all virtual servers
In-Reply-To:
References:
Message-ID:

That helps! Thank you!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255955,255968#msg-255968

From nginx-forum at nginx.us Tue Jan 6 13:54:52 2015
From: nginx-forum at nginx.us (nginxsantos)
Date: Tue, 06 Jan 2015 08:54:52 -0500
Subject: Conversion Scripts
Message-ID:

Hi,
Does anyone have any scripts to convert an F5 config to an Nginx (acting as a reverse proxy) config?

Thanks..
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255969,255969#msg-255969

From reallfqq-nginx at yahoo.fr Tue Jan 6 19:19:26 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 6 Jan 2015 20:19:26 +0100
Subject: Conversion Scripts
In-Reply-To:
References:
Message-ID:

IMHO, it is no trivial thing to switch from one grammar to another. Automated tools already fail at properly converting Apache configurations to nginx ones. I wonder why it would be any different for F5.

I suggest you use standard GNU/Linux tools (grep, sed, awk, cut...) to rough out the job before manually fine-tuning. Templates or configuration management could help with generating the redundant parts.
---
*B. R.*

On Tue, Jan 6, 2015 at 2:54 PM, nginxsantos wrote:
> Hi,
> Does anyone have any scripts to convert an F5 config to an Nginx (acting as a
> reverse proxy) config?
>
> Thanks..
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,255969,255969#msg-255969
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From petros.fraser at gmail.com Tue Jan 6 21:39:39 2015
From: petros.fraser at gmail.com (Peter Fraser)
Date: Tue, 6 Jan 2015 16:39:39 -0500
Subject: Bug re: openssl-1.0.1
Message-ID:

Hi All
I'm trying to use nginx to also proxy to owa. I am getting the error *peer closed connection in SSL handshake while SSL handshaking to upstream*.

I have read that this is due to a bug and that the solution is to downgrade to openssl 1.0. I don't want to downgrade, because I would want users to be able to connect using TLS 1.1 and 1.2, and my understanding is that support for these protocols was introduced in openssl-1.0.1.

So my question is: Is this a bug in nginx or in openssl? If nginx, has it been fixed yet, or will it be soon?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From luky-37 at hotmail.com Tue Jan 6 22:09:00 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 6 Jan 2015 23:09:00 +0100 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: Message-ID: > Hi All > I'm trying to use nginx to also proxy to owa. I am getting the error > peer closed connection in SSL handshake while SSL handshaking to upstream > > I have read that this is due to a bug and that the solution is to > downgrade to openssl 1.0 Where did you read that? From the information you provided, there is no way to understand the issue here at all. Reproduce this with nginx in debug mode, post the output and better yet, post an ssldump sample of the failed handshake as well. Lukas From petros.fraser at gmail.com Tue Jan 6 22:46:12 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Tue, 6 Jan 2015 17:46:12 -0500 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: Message-ID: Hi. Thanks for replying. I read it in two places. Here are the links. 1. http://serverfault.com/questions/436737/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server 2. http://w3facility.org/question/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server/ The full error is this: *peer closed connection in SSL handshake while SSL handshaking, client: , server: request: "POST /Microsoft-Server-ActiveSync?Cmd=Ping&User=%5C&DeviceId=SEC090121863242D&DeviceType=SAMSUNGSMT800 HTTP/1.1", upstream: "https://SERVER_IP:443/Microsoft-Server-ActiveSync?Cmd=Ping&User= %5C&DeviceId=SAMSUNGSGHI337", host: ""* produced with debugging enabled. If I run *openssl s_client -connect wrote: > > Hi All > > I'm trying to use nginx to also proxy to owa. I am getting the error > > peer closed connection in SSL handshake while SSL handshaking to upstream > > > > I have read that this is due to a bug and that the solution is to > > downgrade to openssl 1.0 > > Where did you read that? From the information you provided, there > is no way to understand the issue here at all. 
> > Reproduce this with nginx in debug mode, post the output and better
> yet, post an ssldump sample of the failed handshake as well.
> >
> > Lukas
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From luky-37 at hotmail.com Wed Jan 7 00:56:59 2015
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Wed, 7 Jan 2015 01:56:59 +0100
Subject: Bug re: openssl-1.0.1
In-Reply-To:
References: , ,
Message-ID:

> Hi. Thanks for replying.
> I read it in two places. Here are the links.
> 1.
> http://serverfault.com/questions/436737/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server
> 2.
> http://w3facility.org/question/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server/
>
> The full error is this: peer closed connection in SSL handshake while
> SSL handshaking, client: , server: request:
> "POST
> /Microsoft-Server-ActiveSync?Cmd=Ping&User=%5C&DeviceId=SEC090121863242D&DeviceType=SAMSUNGSMT800
> HTTP/1.1", upstream:
> "https://SERVER_IP:443/Microsoft-Server-ActiveSync?Cmd=Ping&User=%5C&DeviceId=SAMSUNGSGHI337",
> host: ""
>
> produced with debugging enabled.
>
>
> If I run openssl s_client -connect
> CONNECTED(00000003)
> 675508300:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake
> failure:/usr/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_lib.c:184:
> [...]
> If I run openssl s_client -connect works but it won't work from nginx even when I enable SSLv3.

Ok, so you are running into this particular bug. However, it's supposed to have been fixed a very long time ago, in openssl 1.0.1b.

I guess you are running an nginx executable from a third party that has been linked against an older release of openssl.
What OS/kernel/nginx/openssl release are you running exactly and how did you install it (for example did you install openssl and nginx via apt-get from original ubuntu repositoriers, or did you install from nginx repository or from source)? Lukas From luky-37 at hotmail.com Wed Jan 7 01:12:22 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 7 Jan 2015 02:12:22 +0100 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: , , , , , Message-ID: > I guess are running with an nginx executable from a third party, that has > been linked to an older release of openssl. Since you can reproduce it with openssl s_client, it probably is more complicated than that. can you provide an ssldump of the failed connection attempt? Lukas From nginx-forum at nginx.us Wed Jan 7 08:13:00 2015 From: nginx-forum at nginx.us (khav) Date: Wed, 07 Jan 2015 03:13:00 -0500 Subject: Nginx restart/reload not working Message-ID: <55eeb1af2a5f5d14648d8f211e3b632c.NginxMailingListEnglish@forum.nginx.org> I have compiled nginx from source and i think that there is something wrong with my init script.I changed the error log from debug to crit but error log was still showing [debug] in logs.I had to killall nginx and then i ran service nginx start to nginx again #!/bin/sh # # nginx - this script starts and stops the nginx daemon # # chkconfig: - 85 15 # description: Nginx is an HTTP(S) server, HTTP(S) reverse \ # proxy and IMAP/POP3 proxy server # processname: nginx # config: /etc/nginx/nginx.conf # config: /etc/sysconfig/nginx # pidfile: /var/run/nginx.pid # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network # Check that networking is up. [ "$NETWORKING" = "no" ] && exit 0 nginx="/usr/sbin/nginx" prog=$(basename $nginx) NGINX_CONF_FILE="/etc/nginx/nginx.conf" [ -f /etc/sysconfig/nginx ] && . 
/etc/sysconfig/nginx lockfile=/var/lock/subsys/nginx make_dirs() { # make required directories user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -` if [ -z "`grep $user /etc/passwd`" ]; then useradd -M -s /bin/nologin $user fi options=`$nginx -V 2>&1 | grep 'configure arguments:'` for opt in $options; do if [ `echo $opt | grep '.*-temp-path'` ]; then value=`echo $opt | cut -d "=" -f 2` if [ ! -d "$value" ]; then # echo "creating" $value mkdir -p $value && chown -R $user $value fi fi done } start() { [ -x $nginx ] || exit 5 [ -f $NGINX_CONF_FILE ] || exit 6 make_dirs echo -n $"Starting $prog: " daemon $nginx -c $NGINX_CONF_FILE retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc $prog -QUIT retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { configtest || return $? stop sleep 1 start } reload() { configtest || return $? echo -n $"Reloading $prog: " killproc $nginx -HUP RETVAL=$? 
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

nginx version: nginx/1.7.9
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_spdy_module --with-http_realip_module --with-http_geoip_module --with-http_sub_module --with-http_random_index_module --with-http_gzip_static_module --with-http_stub_status_module --with-debug

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255979,255979#msg-255979

From kpariani at zimbra.com Wed Jan 7 20:49:26 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Wed, 7 Jan 2015 14:49:26 -0600 (CST) Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> Message-ID: <738448112.1142650.1420663766577.JavaMail.zimbra@zimbra.com>

Ping..
Thanks -Kunal From: "Kunal Pariani" To: nginx at nginx.org Sent: Monday, January 5, 2015 3:04:52 PM Subject: resolver directive doesn't fallback to the system DNS resolver Hello, I am looking at how to use nginx's resolver directive (http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) to address this one issue i am facing. I have a host for which there is already an entry in the system DNS resolver (verified using nslookup/dig) but when i specify the same host in the proxy_pass directive inside a location block, i get the following error thrown in nginx.log 015/01/05 14:24:13 [error] 22560#0: *5 no resolver defined to resolve ... Seems like nginx is not falling back to the system DNS resolver in case the 'resolver' directive is not used. Isn't this incorrect behaviour ? Thanks -Kunal _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jan 7 23:14:17 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Jan 2015 23:14:17 +0000 Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> Message-ID: <20150107231417.GX15670@daoine.org> On Mon, Jan 05, 2015 at 05:04:52PM -0600, Kunal Pariani wrote: Hi there, > 015/01/05 14:24:13 [error] 22560#0: *5 no resolver defined to resolve ... > > Seems like nginx is not falling back to the system DNS resolver in case the 'resolver' directive is not used. Isn't this incorrect behaviour ? == events {} http { server { listen 8080; location /one { proxy_pass http://www.example.com; } } } == Works for me. What config file shows the problem that you report? (If the above fails for you, then it may be worth examining external parts.) 
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jan 7 23:35:04 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Jan 2015 23:35:04 +0000 Subject: Skip Location Based On Query String Parameter? In-Reply-To: References: <20141113231749.GG3771@daoine.org> Message-ID: <20150107233504.GY15670@daoine.org> On Sun, Jan 04, 2015 at 09:12:40AM -0500, nrahl wrote: Hi there, > > > location ~ ^/([a-zA-Z0-9\-]+)/ { #Use cache if possible, then > > proxy pass > > > > if ($arg_nocache = true) { > > include /etc/nginx/apache-pass; > > } > > > > > try_files /cache/$1.html.gz /cache/$1.html @apache; > > > } > > > > could possibly work. > > When trying to use if(){include} I get the error, "'include' directive is > not allowed here", You are correct; I hadn't tested it -- I had read "Context: any" on http://nginx.org/r/include and had incorrectly assumed. But if I just use the "proxy_pass" directive instead of the "include", then it seems to work for me (going to the upstream without attempting /cache/ files first). > Maybe it can be done by using a nested location blocks, something like: Query string does not take part in location matches, so this won't work. f -- Francis Daly francis at daoine.org From kpariani at zimbra.com Wed Jan 7 23:37:22 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Wed, 7 Jan 2015 17:37:22 -0600 (CST) Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <20150107231417.GX15670@daoine.org> References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> <20150107231417.GX15670@daoine.org> Message-ID: <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> This is what i have. 
http { server { listen 443; location ^~ /zss { proxy_pass https://www.example.com$request_uri; } } } Now as per http://gc-taylor.com/blog/2011/11/10/nginx-aws-elb-name-resolution-resolvers/, If you are running nginx as a proxy in front of An Amazon Web Services Elastic Load Balancer (ELB) which is the case for me, it is not safe to merely define an upstream using the hostname of ELB and call it a day. Although i don't want to use this resolver directive here and instead just want nginx to use the system DNS resolver (from /etc/resolv.conf). Is there a way to achieve this ? Thanks -Kunal From: "Francis Daly" To: nginx at nginx.org Sent: Wednesday, January 7, 2015 3:14:17 PM Subject: Re: resolver directive doesn't fallback to the system DNS resolver On Mon, Jan 05, 2015 at 05:04:52PM -0600, Kunal Pariani wrote: Hi there, 015/01/05 14:24:13 [error] 22560#0: *5 no resolver defined to resolve ... Seems like nginx is not falling back to the system DNS resolver in case the 'resolver' directive is not used. Isn't this incorrect behaviour ? == events {} http { server { listen 8080; location /one { proxy_pass http://www.example.com; } } } == Works for me. What config file shows the problem that you report? (If the above fails for you, then it may be worth examining external parts.) f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Jan 7 23:46:49 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 7 Jan 2015 23:46:49 +0000 Subject: limit_conn module exclude also on Maxim Dunin recommended code In-Reply-To: <4ced0c2d3e8f2c00f362461386b4dcbe.NginxMailingListEnglish@forum.nginx.org> References: <4ced0c2d3e8f2c00f362461386b4dcbe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150107234649.GZ15670@daoine.org> On Sun, Jan 04, 2015 at 09:44:19AM -0500, ASTRAPI wrote: Hi there, > 1)Does nginx realy knows how to exclude ip's in this format .0/21 or i must > use them as 199.27.128.5 for example? http://nginx.org/r/geo > 2)Now i want to use the limit_conn_zone on the above recommendation from <...> > But how i can use the above exclude list for the limit_conn module also? You have > limit_conn_zone $binary_remote_addr zone=alpha:8m; > limit_req_zone $limit zone=delta:8m rate=60r/s; and > limit_conn alpha 20; > limit_req zone=delta burst=90 nodelay; Compare http://nginx.org/r/limit_conn_zone with http://nginx.org/r/limit_req_zone Which part of your "req" config means that you omit some client addresses from accounting? What similar "zone" config could you use? 
f
-- Francis Daly francis at daoine.org

From francis at daoine.org Thu Jan 8 00:15:20 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Jan 2015 00:15:20 +0000 Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> <20150107231417.GX15670@daoine.org> <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> Message-ID: <20150108001520.GA15670@daoine.org>

On Wed, Jan 07, 2015 at 05:37:22PM -0600, Kunal Pariani wrote:

Hi there,

> http {
>     server {
>         listen 443;
>         location ^~ /zss
>         {
>             proxy_pass https://www.example.com$request_uri;
>         }
>     }
> }

Ok, I see the "no resolver defined to resolve www.example.com" message when I make a request that matches that location.

> Although i don't want to use this resolver directive here and instead just want nginx to use the system DNS resolver (from /etc/resolv.conf). Is there a way to achieve this ?

Unless something has changed recently that I haven't seen, my understanding is:

* if the hostname is known at start time, nginx will use the system resolver to resolve it, and will use the result forever;

* otherwise, you must use a "resolver" directive to tell nginx which name servers to use for runtime resolution. http://nginx.org/r/resolver

There is no default for "resolver"; if you want one to be used, you must configure it explicitly. So I think the answer to your question is "no".

(You could probably come up with a way to read /etc/resolv.conf when it changes, and update the nginx config and reload it; but that's a "dynamic reconfiguration" problem, not an "nginx dynamic reconfiguration" problem.)
f -- Francis Daly francis at daoine.org From agentzh at gmail.com Thu Jan 8 00:30:06 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 7 Jan 2015 16:30:06 -0800 Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <20150108001520.GA15670@daoine.org> References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> <20150107231417.GX15670@daoine.org> <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> <20150108001520.GA15670@daoine.org> Message-ID: Hello! On Wed, Jan 7, 2015 at 4:15 PM, Francis Daly wrote: > (You could probably come up with a way to read /etc/resolv.conf when it > changes, and update the nginx config and reload it; but that's a "dynamic > reconfiguration" problem, not an "nginx dynamic reconfiguration" problem.) > Yeah, I think it's better for the nginx resolver to automatically use whatever is defined in /etc/resolv.conf when the user does not configure the "resolver" directive in her nginx.conf. I'm already tired of seeing all those user questions regarding the error message "no resolver defined to resolve ..." over the years. Alas. Regards, -agentzh From lists at ruby-forum.com Thu Jan 8 04:13:42 2015 From: lists at ruby-forum.com (Miroslav S.) Date: Thu, 08 Jan 2015 05:13:42 +0100 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> Message-ID: <855e9e119b249cd78bb936be615cac9f@ruby-forum.com> any update? -- Posted via http://www.ruby-forum.com/. 
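[Editorial note: to make the two cases above concrete, here is a minimal config sketch for runtime re-resolution. It is illustrative only — the nameserver address and backend hostname are placeholders, and nginx still does not read /etc/resolv.conf by itself; a "resolver" directive is required whenever a name must be looked up at request time.]

```nginx
# Placeholder nameserver; nginx will NOT fall back to /etc/resolv.conf.
resolver 10.0.0.2 valid=30s;

server {
    listen 443;

    location ^~ /zss {
        # Using a variable in proxy_pass defers name resolution to
        # request time (via the resolver above, re-resolving per the
        # record TTL or "valid="), instead of resolving once at
        # startup and caching the result forever.
        set $backend "my-elb.example.com";
        proxy_pass https://$backend$request_uri;
    }
}
```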
From ivan at pangea.org Thu Jan 8 09:48:07 2015 From: ivan at pangea.org (Ivan Vilata i Balaguer) Date: Thu, 8 Jan 2015 10:48:07 +0100 Subject: proxy_pass ignoring gai.conf/RFC3484 In-Reply-To: <20150105041745.GC47350@mdounin.ru> References: <20150102151340.GA2141@sax.selidor.net> <20150105041745.GC47350@mdounin.ru> Message-ID: <20150108094807.GA18921@sax.selidor.net> Maxim Dounin (2015-01-05 07:17:45 +0300) wrote: > On Fri, Jan 02, 2015 at 04:13:40PM +0100, Ivan Vilata i Balaguer wrote: > > [...] > > > BUT if I configure NginX with ``proxy_pass > > http://internal_machine;``, it always insists in using the IPv4 > > address first, regardless of what ``gai.conf`` says. The only way I > > have to force IPv6 first is hardwiring it in the URL (which is ugly) > > or including the resolution in ``/etc/hosts`` (which disperses > > configuration). > > > > Is this behaviour expected? Maybe I missed some > > configuration aspect? > > If a name in proxy_pass resolves to multiple addresses, nginx will > use them all with round-robin balancing algorithm. Umm, I should have paid closer attention to the docs. I would have expected to still see it affected by RFC3484 but now I see it's a completely different mechanism. Thank you very much for your reply! -- Ivan Vilata i Balaguer From jadas at akamai.com Thu Jan 8 10:06:37 2015 From: jadas at akamai.com (Das, Jagannath) Date: Thu, 8 Jan 2015 15:36:37 +0530 Subject: HTTPS Load Test Message-ID: Hi Folks, I am trying to get some performance numbers on nginx by sending HTTP and HTTPS requests. My aim is to check the ratio of CPU usage, connections/sec across HTTP and HTTPS requests. In the process, I need to verify certain certificates/keys needed for SSL . Are there any tools which can help in generating the load in the following conditions: 1. Keepalive/Persistent HTTP client support. 2. Options to verify the certificates/keys/CA chain certs. 
Thanks, Jagannath -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Thu Jan 8 10:16:38 2015 From: black.fledermaus at arcor.de (basti) Date: Thu, 08 Jan 2015 11:16:38 +0100 Subject: HTTPS Load Test In-Reply-To: References: Message-ID: <54AE5906.7080004@arcor.de> You can try "siege". In the past I have take the access log to create a list of urls to be used by siege. Regards, Basti On 08.01.2015 11:06, Das, Jagannath wrote: > Hi Folks, > I am trying to get some performance numbers on nginx by sending > HTTP and HTTPS requests. My aim is to check the ratio of CPU usage, > connections/sec across HTTP and HTTPS requests. > > In the process, I need to verify certain certificates/keys needed for > SSL . Are there any tools which can help in generating the load in the > following conditions: > > 1. Keepalive/Persistent HTTP client support. > 2. Options to verify the certificates/keys/CA chain certs. > > > Thanks, > Jagannath > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From edward at ehibbert.org.uk Thu Jan 8 10:30:25 2015 From: edward at ehibbert.org.uk (Edward Hibbert) Date: Thu, 8 Jan 2015 10:30:25 +0000 Subject: HTTPS Load Test In-Reply-To: <54AE5906.7080004@arcor.de> References: <54AE5906.7080004@arcor.de> Message-ID: Bombard is a useful wrapper round siege. I've had trouble using siege with more than a couple of thousand connections - crashes with buffer overflow. I've not tried to debug this yet but would be interested in other people's experiences. On Thu, Jan 8, 2015 at 10:16 AM, basti wrote: > You can try "siege". > In the past I have take the access log to create a list of urls to be > used by siege. > > Regards, > Basti > > On 08.01.2015 11:06, Das, Jagannath wrote: > > Hi Folks, > > I am trying to get some performance numbers on nginx by sending > > HTTP and HTTPS requests. 
My aim is to check the ratio of CPU usage, > > connections/sec across HTTP and HTTPS requests. > > > > In the process, I need to verify certain certificates/keys needed for > > SSL . Are there any tools which can help in generating the load in the > > following conditions: > > > > 1. Keepalive/Persistent HTTP client support. > > 2. Options to verify the certificates/keys/CA chain certs. > > > > > > Thanks, > > Jagannath > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 8 11:29:08 2015 From: nginx-forum at nginx.us (dadude) Date: Thu, 08 Jan 2015 06:29:08 -0500 Subject: nginx call external api Message-ID: <4c2026b82be039398d5c38a8f39e2656.NginxMailingListEnglish@forum.nginx.org> Hi @all, i need some help with the following situation: we use nginx as reverse proxy for microsoft exchange owa / active sync All working so far but since yesterday we have a new firewall (Palo Alto) which supports "User-ID", meaning that the remote IP is connect to the domain\username. That means that all non-microsoft devices (Apple, Linux) can also use user-based policies in the firewall. Now the problem is, that the username, which is accessing exchange, is bound to the proxy ip and not to the client ip. There exits an Palo Alto API which supports manual mapping via the API. Now my idea was to use the parameters $remote_addr and $remote_user to get this running but i have no idea how to call the api. An example looks like this: https:///api/?type=user-id&key=&action=set&vsys=vsys1&cmd=1.0update "pan\sam1" has to be replaced by $remote_user and ip by $remote_addr, right? 
But which is the right place in the config to start the api call? My config looks similiar like this: forum.nginx.org/read.php?11,252590,252590 Thanks a lot in advance, Uwe Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256001,256001#msg-256001 From jadas at akamai.com Thu Jan 8 16:28:21 2015 From: jadas at akamai.com (Das, Jagannath) Date: Thu, 8 Jan 2015 21:58:21 +0530 Subject: HTTPS Load Test In-Reply-To: References: <54AE5906.7080004@arcor.de> Message-ID: Thanks Edward. What is the expected ratio of CPU Usage when HTTPS is enabled at Nginx ? How is the ratio affected when we enable persistent HTTP Support? Folks from nginx or who have already bench marked may help here and can point me to useful links on web. From: Edward Hibbert > Reply-To: "nginx at nginx.org" > Date: Thursday, January 8, 2015 at 4:00 PM To: "nginx at nginx.org" > Subject: Re: HTTPS Load Test Bombard is a useful wrapper round siege. I've had trouble using siege with more than a couple of thousand connections - crashes with buffer overflow. I've not tried to debug this yet but would be interested in other people's experiences. On Thu, Jan 8, 2015 at 10:16 AM, basti > wrote: You can try "siege". In the past I have take the access log to create a list of urls to be used by siege. Regards, Basti On 08.01.2015 11:06, Das, Jagannath wrote: > Hi Folks, > I am trying to get some performance numbers on nginx by sending > HTTP and HTTPS requests. My aim is to check the ratio of CPU usage, > connections/sec across HTTP and HTTPS requests. > > In the process, I need to verify certain certificates/keys needed for > SSL . Are there any tools which can help in generating the load in the > following conditions: > > 1. Keepalive/Persistent HTTP client support. > 2. Options to verify the certificates/keys/CA chain certs. 
> > > Thanks, > Jagannath > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 8 16:36:56 2015 From: nginx-forum at nginx.us (xdiaod) Date: Thu, 08 Jan 2015 11:36:56 -0500 Subject: Hash init Message-ID: <7372a91f6f752d0570b7e75cc476af66.NginxMailingListEnglish@forum.nginx.org> Hello, Maybe i am not in the right mailing list, please refer me to the good one if i am at the wrong one. I just want to understand the " for (size = start; size <= hinit->max_size; size++) " loop in the ngx_hash_init function. I do not understand what "size", "key" and "test[key]" mean in first place. Thank you for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256004,256004#msg-256004 From nginx-forum at nginx.us Thu Jan 8 17:31:23 2015 From: nginx-forum at nginx.us (ASTRAPI) Date: Thu, 08 Jan 2015 12:31:23 -0500 Subject: limit_conn module exclude also on Maxim Dunin recommended code In-Reply-To: <20150107234649.GZ15670@daoine.org> References: <20150107234649.GZ15670@daoine.org> Message-ID: Thanks for the reply... 
Ok with the ip's, but I can't figure out how to fix the other problem with excluding ip's for limit_conn_zone :(

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255949,256005#msg-256005

From nginx-forum at nginx.us Thu Jan 8 22:49:26 2015 From: nginx-forum at nginx.us (carlg) Date: Thu, 08 Jan 2015 17:49:26 -0500 Subject: How to use Nginx to restrict access to everyfiles to 127.0.0.1, except the php files in / In-Reply-To: <8671623bf13f4b368bae454730cf7a86.NginxMailingListEnglish@forum.nginx.org> References: <20141112112440.GO90224@mdounin.ru> <8671623bf13f4b368bae454730cf7a86.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3f87a57edbfff963d7ef5132be98e513.NginxMailingListEnglish@forum.nginx.org>

Here is what I found to achieve this: first, I denied access to every php file:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    allow 127.0.0.1;
    deny all;
}

and then I created one rule per page (takes time with some scripts, but it is worth it :)

location ~* ^/myfile.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    try_files $uri $uri/ /index.php?q=$args;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    include /etc/nginx/naxsi.rules;
    allow all;
}

Every tutorial I found on nginx tells us to allow / deny in location /. ...but ^(.+\.php) is another location, not included in location /. If I follow most tutorials I am still able to reach the php files inside the location / even if I denied access to all of them. Doing it this way works great :)

I hope this will help someone ...
...someday :)
Cheers :)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,254785,256007#msg-256007

From nginx-forum at nginx.us Fri Jan 9 00:44:10 2015 From: nginx-forum at nginx.us (carlg) Date: Thu, 08 Jan 2015 19:44:10 -0500 Subject: How to deny access to a folder, but allow access to every subfolders (wildcard) Message-ID: <7beabca543dd41c6422cfece5bb2a296.NginxMailingListEnglish@forum.nginx.org>

Hi,

I need to deny access to /members but allow access to every folder below it. There may be a lot of folders, maybe a thousand, and each of those folders contains 5 other folders. So I need a wildcard. Here is what I tried:

location ~ ^/members/([^/]+)/([^/?]+)$ { allow all; }  # allow every folder below /members with wildcard
location ~ ^/members/ { deny all; }                    # deny everything else

But it doesn't work. What am I missing exactly?

Thank you, Carl

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256008,256008#msg-256008

From lvqp at sahsanu.com Fri Jan 9 01:25:10 2015 From: lvqp at sahsanu.com (=?UTF-8?Q?Ra=C3=BAl_Galicia?=) Date: Fri, 09 Jan 2015 02:25:10 +0100 Subject: How to deny access to a folder, but allow access to every subfolders (wildcard) In-Reply-To: <7beabca543dd41c6422cfece5bb2a296.NginxMailingListEnglish@forum.nginx.org> References: <7beabca543dd41c6422cfece5bb2a296.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9d0cecae862805b9e214b7a33f69600a@sahsanu.com>

El 2015-01-09 01:44, carlg escribió:
> Hi,
>
> I need to deny access to /members but allow access to every folder below.
>
> There may be a lot of folders, maybe a thousand, and each of those folders
> contain 5 other folders. So i need a wildcard.
>
> Here is what i tried :
>
> location ~ ^/members/([^/]+)/([^/?]+)$ { allow all; }  # allow every folder below /members with wildcard
> location ~ ^/members/ { deny all; }                    # deny everything else
>
> But it doesn't work.

Hi,

This works for me...
or I think so ;)

location ~* /members/.+/.* { allow all; }
location ~* /members/.* { deny all; }

Cheers, Raúl Galicia

From nginx-forum at nginx.us Fri Jan 9 10:45:39 2015 From: nginx-forum at nginx.us (cubicdaiya) Date: Fri, 09 Jan 2015 05:45:39 -0500 Subject: A build of nginx with static-linked OpenSSL fails on Mac Message-ID:

Hello. A build of nginx with a statically linked OpenSSL seems to fail on Mac.

$ uname -ar
Darwin host 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64
$ cd nginx-1.7.9
$ ./configure \
    --with-http_ssl_module \
    --with-openssl=../openssl-1.0.1k
$ make
. . .
Operating system: i686-apple-darwinDarwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64
WARNING! If you wish to build 64-bit library, then you have to invoke
         './Configure darwin64-x86_64-cc' *manually*.
You have about 5 seconds to press Ctrl-C to abort.
. . . (too many errors) . . .
"_sk_value", referenced from:
    _ngx_ssl_session_cache in ngx_event_openssl.o
    _ngx_ssl_check_host in ngx_event_openssl.o
    _ngx_ssl_stapling in ngx_event_openssl_stapling.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [objs/nginx] Error 1
$

Though the rough patch below fixes the failure, is there a better solution except dynamically linking OpenSSL?
diff -r e9effef98874 auto/lib/openssl/make
--- a/auto/lib/openssl/make	Fri Dec 26 16:22:59 2014 +0300
+++ b/auto/lib/openssl/make	Fri Jan 09 19:24:06 2015 +0900
@@ -56,7 +56,7 @@
 $OPENSSL/.openssl/include/openssl/ssl.h:	$NGX_MAKEFILE
 	cd $OPENSSL \\
 	&& if [ -f Makefile ]; then \$(MAKE) clean; fi \\
-	&& ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\
+	&& ./Configure darwin64-x86_64-cc --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\
 	&& \$(MAKE) \\
 	&& \$(MAKE) install LIBDIR=lib

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256012,256012#msg-256012

From nginx-forum at nginx.us Fri Jan 9 12:59:16 2015 From: nginx-forum at nginx.us (nurrony) Date: Fri, 09 Jan 2015 07:59:16 -0500 Subject: Nginx Configuration saying Not found. Why and How to get rid of it? Message-ID: <0d850c34448169d2c969fbab26067811.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am compiling and installing nginx from source and installed all the following libs:

sudo yum install gcc \
    gcc-c++ \
    pcre-devel \
    zlib-devel \
    make \
    unzip \
    openssl-devel \
    libaio-devel \
    glibc \
    glibc-devel \
    glibc-headers \
    libevent \
    linux-vdso.so.1 \
    libpthread.so.0 \
    libcrypt.so.1 \
    libstdc++.so.6 \
    librt.so.1 \
    libm.so.6 \
    libpcre.so.0 \
    libssl.so.10 \
    libcrypto.so.10 \
    libdl.so.2 \
    libz.so.1 \
    libgcc_s.so.1 \
    libc.so.6 \
    /lib64/ld-linux-x86-64.so.2 \
    libfreebl3.so \
    libgssapi_krb5.so.2 \
    libkrb5.so.3 \
    libcom_err.so.2 \
    libk5crypto.so.3 \
    libkrb5support.so.0 \
    libkeyutils.so.1 \
    libresolv.so.2 \
    libselinux.so.1

$ yum groupinstall 'Development Tools'

But when I run the following configure command on RHEL, I found some "not found" lines:

$ ./configure \
    --with-debug \
    --prefix=/etc/nginx \
    --sbin-path=/usr/sbin/nginx \
    --conf-path=/etc/nginx/nginx.conf \
    --pid-path=/var/run/nginx.pid \
    --lock-path=/var/run/nginx.lock \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_gzip_static_module \
    --with-http_stub_status_module \
    --with-http_realip_module \
--with-http_secure_link_module \ --with-pcre \ --with-file-aio \ --with-cc-opt="-DTCP_FASTOPEN=23" \ --with-ld-opt="-L /usr/local/lib" \ --without-http_scgi_module \ --without-http_uwsgi_module \ --without-http_fastcgi_module \ | grep 'not found' got the following output checking for sys/filio.h ... not found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for F_READAHEAD ... not found checking for F_NOCACHE ... not found checking for directio() ... not found checking for dlopen() ... not found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for kqueue AIO support ... not found checking for setproctitle() ... not found checking for POSIX semaphores ... not found checking for struct dirent.d_namlen ... not found I figure out that the followings are found with another one crypt dlopen kqueue poll POSIX semaphores But other are not found yet. Why this is happening? How to resolve those? and Is it ok having not found while configuration. I am afraid to go further skipping these not found issues Thanks in advance Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256013,256013#msg-256013 From luky-37 at hotmail.com Fri Jan 9 13:27:44 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 9 Jan 2015 14:27:44 +0100 Subject: Nginx Configuration saying Not found. Why and How to get rid of it? In-Reply-To: <0d850c34448169d2c969fbab26067811.NginxMailingListEnglish@forum.nginx.org> References: <0d850c34448169d2c969fbab26067811.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Hi, > > I am compiling and installing NGinx from source > [...] > checking for sys/filio.h ... not found > checking for /dev/poll ... not found > checking for kqueue ... not found > checking for crypt() ... not found > checking for F_READAHEAD ... not found > checking for F_NOCACHE ... not found > checking for directio() ... not found > checking for dlopen() ... 
not found > checking for SO_SETFIB ... not found > checking for SO_ACCEPTFILTER ... not found > checking for kqueue AIO support ... not found > checking for setproctitle() ... not found > checking for POSIX semaphores ... not found > checking for struct dirent.d_namlen ... not found > > > I figure out that the followings are found with another one > crypt > dlopen > kqueue > poll > POSIX semaphores > > But other are not found yet. Why this is happening? How to resolve those? > and Is it ok having not found while configuration. I am afraid to go further > skipping these not found issues The configure script will worry about those things. Unless you see an actual error, you don't need to worry. "Not found" is an information, not an error in this context, and is expected (for example kqueue is a BSD feature, you don't have it on linux). Lukas From joyce at joycebabu.com Fri Jan 9 13:48:37 2015 From: joyce at joycebabu.com (Joyce Babu) Date: Fri, 9 Jan 2015 19:18:37 +0530 Subject: Multiple matching limit_req Message-ID: I would like to apply rate limiting based on 3 different criteria. 1. CDN should have rate limit of 100 r/s (identified by $http_host) 2. Whitelisted bots should have a rate limit of 15 r/s (identified by $http_user_agent) 3. All other users should have a rate limit of 5 r/s The rules should be applied in the above order of preference. If a rule matches two criteria, the earlier one should get applied. How can I ensure this? I have tried the following config, but it is always rate limited to 5 r/s, irrespective of the order of the limit_req entries. 
map $http_host $limit_cdn {
    default '';
    "cdn-cname.mydomain.com" $binary_remote_addr;
}

map $http_user_agent $limit_bot {
    default '';
    ~*(google|bing) $binary_remote_addr;
}

limit_req_zone $limit_cdn zone=limit_cdn:1m rate=100r/s;
limit_req_zone $limit_bot zone=limit_bot:1m rate=15r/s;
limit_req_zone $binary_remote_addr zone=limit_all:10m rate=5r/s;

limit_req zone=limit_all burst=12;
limit_req zone=limit_bot burst=50 nodelay;
limit_req zone=limit_cdn burst=200 nodelay;

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From reallfqq-nginx at yahoo.fr Fri Jan 9 18:58:46 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 9 Jan 2015 19:58:46 +0100 Subject: How to use Nginx to restrict access to everyfiles to 127.0.0.1, except the php files in / In-Reply-To: <3f87a57edbfff963d7ef5132be98e513.NginxMailingListEnglish@forum.nginx.org> References: <20141112112440.GO90224@mdounin.ru> <8671623bf13f4b368bae454730cf7a86.NginxMailingListEnglish@forum.nginx.org> <3f87a57edbfff963d7ef5132be98e513.NginxMailingListEnglish@forum.nginx.org> Message-ID:

I suggest you put the generic \.php$ regex location into the / default prefix location, like:

location / {
    location ~ \.php$ {
        [...]
    }
}

This avoids having regex locations at the first level, since they are sensitive to order. Why use regex locations for individual files? The following would be more efficient:

location /myfile.php {
    [...]
}

I also suggest you move redundant directives to the upper level whenever possible; this will help maintenance.
--- *B.
R.* On Thu, Jan 8, 2015 at 11:49 PM, carlg wrote: > Here is what i found to achieve this : > > i denied access to every php files : > > location ~ \.php$ { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > allow 127.0.0.1; > deny all; > } > > > and then i create one rule per page (takes time with some scripts, but it > worth it :) > > location ~* ^/myfile.php$ { > fastcgi_split_path_info ^(.+\.php)(/.+)$; > try_files $uri $uri/ /index.php?q=$args; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > include /etc/nginx/naxsi.rules; > allow all; > } > > Every tutorials i found on nginx tell us to allow / deny in location /. > ...but ^(.+\.php) is another location, not included in location / > > If i follow most tutorials i am still able to reach the php files inside > the > location / even if i denied access to all of them. Doing this way works > great :) > > I hope this will help someone ... ...someday :) > Cheers :) > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,254785,256007#msg-256007 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri Jan 9 19:03:42 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 9 Jan 2015 20:03:42 +0100 Subject: How to deny access to a folder, but allow access to every subfolders (wildcard) In-Reply-To: <9d0cecae862805b9e214b7a33f69600a@sahsanu.com> References: <7beabca543dd41c6422cfece5bb2a296.NginxMailingListEnglish@forum.nginx.org> <9d0cecae862805b9e214b7a33f69600a@sahsanu.com> Message-ID: nginx provides a prefix to match exact URIs: location = /members { deny all; } All the different prefixes and their use can be found in the location directive documentation. 
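Putting that together, a minimal sketch of the approach (server name and root are placeholders) might look like:

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder
    root /var/www/html;        # placeholder

    # Exact-match prefix: only the URI "/members" itself is denied.
    location = /members {
        deny all;
    }

    # Everything below /members/ simply falls through to the
    # default prefix location; no extra block is required.
    location / {
        try_files $uri $uri/ =404;
    }
}
```

With this layout a request for /members returns 403, while /members/alice/photos is served by the generic prefix location.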
You do not need to set any location for "/members/.+"-like URIs, unless you need to specify directives specific to it. You could then use: location /members { [...] } --- *B. R.* On Fri, Jan 9, 2015 at 2:25 AM, Ra?l Galicia wrote: > El 2015-01-09 01:44, carlg escribi?: > >> Hi, >> >> I need to deny access to /members but allow access to every folders below. >> >> There may be a lot of folders, maybe a thousan, and each of those folders >> contain 5 other folders. So i need a wildcard. >> >> Here is what i tried : >> >> location ~ ^/members/([^/]+)/([^/?]+)$ { allow all; } #allow >> every folders below /members with wildcard >> location ~ ^/members/ { deny all; } >> #deny >> everything else >> >> But it doesn't work. >> > > > Hi, > > This works for me... or I think so ;) > > location ~* /members/.+/.* { allow all; } > location ~* /members/.* { deny all; } > > Cheers, > Ra?l Galicia > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jan 9 19:16:08 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 9 Jan 2015 19:16:08 +0000 Subject: Multiple matching limit_req In-Reply-To: References: Message-ID: <20150109191608.GC15670@daoine.org> On Fri, Jan 09, 2015 at 07:18:37PM +0530, Joyce Babu wrote: Hi there, > I would like to apply rate limiting based on 3 different criteria. > > 1. CDN should have rate limit of 100 r/s (identified by $http_host) > 2. Whitelisted bots should have a rate limit of 15 r/s (identified by > $http_user_agent) > 3. All other users should have a rate limit of 5 r/s > > The rules should be applied in the above order of preference. If a rule > matches two criteria, the earlier one should get applied. How can I ensure > this? You can't. All limits that match are applied, which means that the most restrictive one is seen. 
What you *can* do is change your specification with that in mind, and choose your keys so that they are empty when you do not want the limit to apply. > map $http_host $limit_cdn { > default ''; > "cdn-cname.mydomain.com" $binary_remote_addr; > } > map $http_user_agent $limit_bot { > default ''; > ~*(google|bing) $binary_remote_addr; > } Add the following variables with names that more closely resemble what they are intended to do: map $limit_cdn $limit_bot_not_cdn { default ''; '' $limit_bot; } map $limit_cdn$limit_bot $limit_not_bot_not_cdn { '' $binary_remote_addr; default ''; } And use those variables as the keys that you actually mean: > limit_req_zone $limit_cdn zone=limit_cdn:1m rate=100r/s; Leave that one as-is. > limit_req_zone $limit_bot zone=limit_bot:1m rate=15r/s; Change that to be limit_req_zone $limit_bot_not_cdn zone=limit_bot:1m rate=15r/s; > limit_req_zone $binary_remote_addr zone=limit_all:10m rate=5r/s; Change that to be limit_req_zone $limit_not_bot_not_cdn zone=limit_all:10m rate=5r/s; > limit_req zone=limit_all burst=12; > limit_req zone=limit_bot burst=50 nodelay; > limit_req zone=limit_cdn burst=200 nodelay; and the rest should work as you want. (Unless you use "return".) f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Jan 9 19:41:00 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 9 Jan 2015 19:41:00 +0000 Subject: limit_conn module exclude also on Maxim Dunin recommended code In-Reply-To: References: <20150107234649.GZ15670@daoine.org> Message-ID: <20150109194100.GD15670@daoine.org> On Thu, Jan 08, 2015 at 12:31:23PM -0500, ASTRAPI wrote: Hi there, > Ok with the ip's but i can' figure out how to fix th other problem with > exclude ip's for limit_conn_zone :( I'm confused why you're confused. 
You originally had limit_conn_zone $binary_remote_addr zone=alpha:8m; limit_req_zone $binary_remote_addr zone=delta:8m rate=40r/s; and you wanted to exclude some addresses from the limit_req_zone, so you changed it to be limit_req_zone $limit zone=delta:8m rate=60r/s; Now you want to exclude the same addresses from the limit_conn_zone, but you can't see what configuration change might possibly do that? Replace $binary_remote_addr with $limit. f -- Francis Daly francis at daoine.org From siefke_listen at web.de Fri Jan 9 23:11:20 2015 From: siefke_listen at web.de (Silvio Siefke) Date: Sat, 10 Jan 2015 00:11:20 +0100 Subject: download and movie alias Message-ID: <20150110001120.02a9e906a2e5f84dc123a338@web.de> Hello, i use static directory for my css files, video files and download directory. I understand not why the link www.example.com/download not work but www.example.com/download/ works. Has someone an idea what is wrong? # video files for all websites location ~ ^/video/(.*)$ { alias /var/www/static/video/$1; mp4; flv; mp4_buffer_size 4M; mp4_max_buffer_size 10M; autoindex on; } # download directory for all websites location ~ ^/downloads/(.*)$ { alias /var/www/static/downloads/$1; autoindex on; } Thank you for help & Nice day Silvio From francis at daoine.org Fri Jan 9 23:52:25 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 9 Jan 2015 23:52:25 +0000 Subject: download and movie alias In-Reply-To: <20150110001120.02a9e906a2e5f84dc123a338@web.de> References: <20150110001120.02a9e906a2e5f84dc123a338@web.de> Message-ID: <20150109235225.GE15670@daoine.org> On Sat, Jan 10, 2015 at 12:11:20AM +0100, Silvio Siefke wrote: Hi there, > I understand not why the link www.example.com/download > not work but www.example.com/download/ works. The request /download does not match either of these location{} blocks. 
> location ~ ^/video/(.*)$ { > location ~ ^/downloads/(.*)$ { Perhaps add location = /download { return 301 /download/; } (You may mean /download or /downloads, I'm not sure.) f -- Francis Daly francis at daoine.org From joyce at joycebabu.com Sat Jan 10 05:50:47 2015 From: joyce at joycebabu.com (Joyce Babu) Date: Sat, 10 Jan 2015 11:20:47 +0530 Subject: Multiple matching limit_req In-Reply-To: <20150109191608.GC15670@daoine.org> References: <20150109191608.GC15670@daoine.org> Message-ID: Hi Francis, Thank you for the clever solution. I have updated my server configuration with the change and it is now working. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Jan 10 14:50:56 2015 From: nginx-forum at nginx.us (exilemirror) Date: Sat, 10 Jan 2015 09:50:56 -0500 Subject: NGINX Access Logs Message-ID: <9b254b20716078b872bc8239a5cdf38d.NginxMailingListEnglish@forum.nginx.org> Hi guys, I'm new to nginx. Can anyone explain what does - - - "-" "-" "-" "-" - means in the access logs? Been getting lots of this in the log file. Would like to know if this is the cause of nginx to show that there's a spike in traffic through the nginx graph. Example of log below: [12/Feb/2014:11:25:28 +0800] "POST /...svc HTTP/1.1" 200 274 1.68 870 0.008 0.002 192.168.10.71:84 - - - "-" "-" "-" "-" - HTTP/1.1" 200 274 1.68 869 0.026 0.006 10.14.241.70:84 - - - "-" "-" "-" "-" - Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256026,256026#msg-256026 From nginx-forum at nginx.us Sat Jan 10 15:02:59 2015 From: nginx-forum at nginx.us (ASTRAPI) Date: Sat, 10 Jan 2015 10:02:59 -0500 Subject: limit_conn module exclude also on Maxim Dunin recommended code In-Reply-To: <20150109194100.GD15670@daoine.org> References: <20150109194100.GD15670@daoine.org> Message-ID: Ok thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255949,256027#msg-256027

From nginx-forum at nginx.us Sat Jan 10 15:16:12 2015
From: nginx-forum at nginx.us (ppwm)
Date: Sat, 10 Jan 2015 10:16:12 -0500
Subject: Nginx behind a reverse proxy sending 499
Message-ID: <5442b40c49e129cdbe7f32bb01f317f3.NginxMailingListEnglish@forum.nginx.org>

We have a Java-based reverse proxy (developed in-house) which talks to Nginx, which is a proxy_pass for a gunicorn server (python/django). The HTTP request flows from the Java reverse proxy (JRPxy) to nginx to gunicorn. All these servers are running on the same machine.

Previously JRPxy was sending Connection: keep-alive to nginx to reuse the connections. However, we decided to instead send the Connection: close header and use a new connection for every request. Since we made this change we see nginx returning a 499 status code.

I debugged the JRPxy at my end. I see that each time we write the request headers & body, the very next moment we try to read the nginx response we get 0 (no bytes) or -1 (EOF) as the number of bytes read. When we get 0 we eventually get -1 subsequently (EOF after reading no bytes).

From the perspective of code, we do Socket.shutdownOutput() (http://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#shutdownOutput%28%29) each time we send the Connection: close header. In Java's terms it indicates to the remote socket that it is done sending data (http://stackoverflow.com/questions/15206605/purpose-of-socket-shutdownoutput). If I comment out this line alone, while still sending the Connection: close header, I get a valid 200 OK response.

I have captured the netstat output to see the connection state. When we do Socket.shutdownOutput() we see TIME_WAIT from nginx's end, indicating that nginx initiated the socket close and is now waiting for an ACK from JRPxy.
------------------------------------------------------------
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (59.17/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (58.14/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (57.12/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (56.09/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (55.07/0/0)
------------------------------------------------------------

However if I comment the Socket.shutdownOutput() I see the netstat output in reverse way. This time JRPxy is in TIME_WAIT state, indicating it initiated the socket close.
----------------------------------------------------------------------
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (59.59/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (58.57/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (57.54/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47385 127.0.0.1:8888 TIME_WAIT - timewait (59.87/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (56.52/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47385 127.0.0.1:8888 TIME_WAIT - timewait (58.85/0/0)
----------------------------------------------------------------------

By any chance is Socket.shutdownOutput() indicating to nginx that it is closing the connection and hence nginx is sending 499? If that is true then should nginx treat this as half-close and still send back the data?

My other assumption is that nginx is responding very quickly and closing the socket immediately even before JRPxy gets a chance to read from the socket. This is less likely as there are delays due to gunicorn processing.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256028,256028#msg-256028 From steve at greengecko.co.nz Sat Jan 10 23:00:49 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Sun, 11 Jan 2015 12:00:49 +1300 Subject: NGINX Access Logs In-Reply-To: <9b254b20716078b872bc8239a5cdf38d.NginxMailingListEnglish@forum.nginx.org> References: <9b254b20716078b872bc8239a5cdf38d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1420930849.3677.291.camel@steve-new> On Sat, 2015-01-10 at 09:50 -0500, exilemirror wrote: > Hi guys, > > I'm new to nginx. Can anyone explain what does - - - "-" "-" "-" "-" - > means in the access logs? Been getting lots of this in the log file. > Would like to know if this is the cause of nginx to show that there's a > spike in traffic through the nginx graph. Example of log below: > > [12/Feb/2014:11:25:28 +0800] "POST /...svc HTTP/1.1" 200 274 1.68 870 0.008 > 0.002 192.168.10.71:84 - - - "-" "-" "-" "-" - > > HTTP/1.1" 200 274 1.68 869 0.026 0.006 10.14.241.70:84 - - - "-" "-" "-" "-" > - > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256026,256026#msg-256026 > look in your nginx.conf. I have the following line log_format main '$remote_addr - $remote_user [$time_local] "$request "' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ( sorry for the wrap ) Which itemises the fields. Obviously yours is different, but it'll give you the list. 
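One detail worth checking in a setup like the above: a named log_format only takes effect where access_log references it by name; with no format argument, access_log falls back to the predefined "combined" format. A sketch:

```nginx
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # The second argument selects the named format; omitting it
    # means the predefined "combined" format is used instead.
    access_log /var/log/nginx/access.log main;
}
```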
Steve

--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

From nginx-forum at nginx.us Sun Jan 11 06:37:31 2015
From: nginx-forum at nginx.us (ppwm)
Date: Sun, 11 Jan 2015 01:37:31 -0500
Subject: Nginx behind a reverse proxy sending 499
In-Reply-To: <5442b40c49e129cdbe7f32bb01f317f3.NginxMailingListEnglish@forum.nginx.org>
References: <5442b40c49e129cdbe7f32bb01f317f3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0eaabdb5aed27ca150fce24d26457cb9.NginxMailingListEnglish@forum.nginx.org>

To debug the issue further, I wrote a simple Java-based HTTP client. This client would open a socket to the nginx server, write the request line (GET / HTTP/1.1), write the host header, write the Connection: close header and commit the request. When committing the request, it writes the CRLF sequence twice.

In this setup, I tested by having a proxy pass to www.google.com and by not having a proxy pass (nginx serves the default index.html).

If there is no proxy pass, nginx never gives a 499 status code. Even if I do Socket.shutdownOutput(), nginx gives a valid 200 response. This is irrespective of the Connection header (keepalive/close).

If there is a proxy pass, I get a valid response if I don't do Socket.shutdownOutput(). But if I do Socket.shutdownOutput(), I get 499 irrespective of the Connection header (keepalive/close). This implies that nginx is treating the client's Socket.shutdownOutput() as the client closing the connection, despite all data being written to the socket.
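If the half-close needs to be tolerated rather than treated as an abort, one directive worth experimenting with is proxy_ignore_client_abort. A rough sketch (the upstream address is a placeholder; whether this covers the shutdownOutput() case may depend on the nginx version):

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;   # gunicorn upstream (placeholder)

    # Keep processing the upstream response even if the client
    # appears to have closed its side of the connection.
    proxy_ignore_client_abort on;
}
```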
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256028,256035#msg-256035 From nginx-forum at nginx.us Sun Jan 11 08:11:04 2015 From: nginx-forum at nginx.us (exilemirror) Date: Sun, 11 Jan 2015 03:11:04 -0500 Subject: NGINX Access Logs In-Reply-To: <1420930849.3677.291.camel@steve-new> References: <1420930849.3677.291.camel@steve-new> Message-ID: <00f271d6614b8a3d8061e9bc32d1f92e.NginxMailingListEnglish@forum.nginx.org> Hi Steve, Thanks for the reply. How do we determine if there's an overload of tcp connections via nginx? Is it via this access logs? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256026,256036#msg-256036 From smntov at gmail.com Sun Jan 11 11:23:42 2015 From: smntov at gmail.com (ST) Date: Sun, 11 Jan 2015 13:23:42 +0200 Subject: verifying that load balancing really works Message-ID: <1420975422.11004.4.camel@debox> Hi, how can I verify that load balancing really works? I have 2 servers with nginx, one of them functions as a LoadBalancer and redirects requests either on itself or on the other server(by default round-robin). When I look on the traffic using jnettop it looks like both servers are loaded(while the LB is loaded more), but if I check traffic statistics with my server provider I see that the LB server shows ca. 233Gb while the other server only 0.012Gb in the same period. What is the rigth way to verify it? Thank you. From reallfqq-nginx at yahoo.fr Sun Jan 11 14:00:28 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 11 Jan 2015 15:00:28 +0100 Subject: NGINX Access Logs In-Reply-To: <00f271d6614b8a3d8061e9bc32d1f92e.NginxMailingListEnglish@forum.nginx.org> References: <1420930849.3677.291.camel@steve-new> <00f271d6614b8a3d8061e9bc32d1f92e.NginxMailingListEnglish@forum.nginx.org> Message-ID: nginx does not handle the TCP stack, which is part of the network layer of the OSI stack, underneath anything nginx does. Have a look at your OS network stack monitoring tools. 
Exhaustion of TCP sockets (or file descriptors) will lead to the impossibility of opening new connections and might lead to some erratic/strange behavior, looking at the application level. nginx might give a specific error message... or not. Loads of reasons might be responsible of the impossibility of opening new connections. Anyhow, use the proper tool to get the proper piece of information: that is a logic proven to be robust. --- *B. R.* On Sun, Jan 11, 2015 at 9:11 AM, exilemirror wrote: > Hi Steve, > > Thanks for the reply. How do we determine if there's an overload of tcp > connections via nginx? > Is it via this access logs? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256026,256036#msg-256036 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sun Jan 11 14:06:07 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 11 Jan 2015 15:06:07 +0100 Subject: verifying that load balancing really works In-Reply-To: <1420975422.11004.4.camel@debox> References: <1420975422.11004.4.camel@debox> Message-ID: You can tweak the access log entries with log_format to include the upstream server, so you will know which upstream(s) each request got served by. As a manual, simple, test case, I would create a specific, testing location on each of the upstream serving information uniquely identifying them. See the return directive to simply send some basic HTTP codes and attached messages. Refreshing your browser (or using an automated crawler hitting your front-end for testing purpose) will give you those answers to check how requests are being balanced. --- *B. R.* On Sun, Jan 11, 2015 at 12:23 PM, ST wrote: > Hi, > > how can I verify that load balancing really works? 
I have 2 servers with > nginx, one of them functions as a LoadBalancer and redirects requests > either on itself or on the other server(by default round-robin). When I > look on the traffic using jnettop it looks like both servers are > loaded(while the LB is loaded more), but if I check traffic statistics > with my server provider I see that the LB server shows ca. 233Gb while > the other server only 0.012Gb in the same period. What is the rigth way > to verify it? > > Thank you. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From siefke_listen at web.de Sun Jan 11 14:26:52 2015 From: siefke_listen at web.de (Silvio Siefke) Date: Sun, 11 Jan 2015 15:26:52 +0100 Subject: download and movie alias In-Reply-To: <20150109235225.GE15670@daoine.org> References: <20150110001120.02a9e906a2e5f84dc123a338@web.de> <20150109235225.GE15670@daoine.org> Message-ID: <20150111152652.3e823584443e4cc0127232ef@web.de> Hello, On Fri, 9 Jan 2015 23:52:25 +0000 Francis Daly wrote: > location = /download { return 301 /download/; } Thank you it works. Silvio From nginx-forum at nginx.us Sun Jan 11 16:12:15 2015 From: nginx-forum at nginx.us (hebrew878) Date: Sun, 11 Jan 2015 11:12:15 -0500 Subject: Resize & cache image from 3rd-party server? In-Reply-To: <20090922042629.GA50986@rambler-co.ru> References: <20090922042629.GA50986@rambler-co.ru> Message-ID: <4c30b97ab07487e62b55993e5132f15e.NginxMailingListEnglish@forum.nginx.org> please give cache option for resized images too. 
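For what it's worth, caching of resized images can already be arranged by putting proxy_cache in front of a location that applies image_filter, via an internal resizing server. A rough sketch (zone names, sizes, paths and the internal port are all placeholders):

```nginx
proxy_cache_path /var/cache/nginx/resized keys_zone=resized:10m max_size=1g;

server {
    listen 80;

    # Cached front-end: resized images are fetched once from the
    # internal resizer below, then served from the cache.
    location /thumbs/ {
        proxy_pass http://127.0.0.1:8081;
        proxy_cache resized;
        proxy_cache_valid 200 7d;
    }
}

server {
    listen 127.0.0.1:8081;

    # Internal resizer: reads the original file and scales it.
    location /thumbs/ {
        alias /var/www/images/;
        image_filter resize 150 100;
        image_filter_buffer 5M;
    }
}
```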
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,8246,256043#msg-256043 From mdounin at mdounin.ru Mon Jan 12 12:27:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Jan 2015 15:27:58 +0300 Subject: resolver does not re-resolve upstream servers after initial cache In-Reply-To: <855e9e119b249cd78bb936be615cac9f@ruby-forum.com> References: <38f956e05389a1f8e0b887e4a00d760e@ruby-forum.com> <855e9e119b249cd78bb936be615cac9f@ruby-forum.com> Message-ID: <20150112122758.GE47350@mdounin.ru> Hello! On Thu, Jan 08, 2015 at 05:13:42AM +0100, Miroslav S. wrote: > any update? This is now available as a commercial feature in nginx+, see the "resolve" parameter here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 12 14:05:18 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Jan 2015 17:05:18 +0300 Subject: Nginx behind a reverse proxy sending 499 In-Reply-To: <0eaabdb5aed27ca150fce24d26457cb9.NginxMailingListEnglish@forum.nginx.org> References: <5442b40c49e129cdbe7f32bb01f317f3.NginxMailingListEnglish@forum.nginx.org> <0eaabdb5aed27ca150fce24d26457cb9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150112140518.GJ47350@mdounin.ru> Hello! On Sun, Jan 11, 2015 at 01:37:31AM -0500, ppwm wrote: > To debug the issue further, I wrote a simple Java based HTTP client. This > client would open a socket to the nginx server, write the request line (GET > / HTTP/1.1), write the host header, write the Connection:close header and > commit the request. While committing the write the CRLF character twice. > > In this setup, I tested by having a proxy pass to www.google.com and not > having a proxy pass (nginx servers the default index.html. > > If there is no proxy pass, nginx never gives a 499 status code. Even if I do > Socket.shutdownOutput(), nginx give a valid 200 response. This is > irrespective of the Connection header (keepalive/close). 
> > If there is proxy pass, I get a valid response if I don't do > Socket.shutdownOutput(). But if I do Socket.shutdownOutput(), I get 499 > irrespective of Connection header (keepalive/close). This implies that nginx > is treating client's Socket.shutdownOutput() as client closing the > connection despite all data being written to the socket. In HTTP, it's generally a bad idea to shutdown the socket before you've got the response. While not strictly prohibited, the server will likely think that the client bored waiting for a response and closed the connection, so there is no need to return any response. Google for something like "http tcp half-close" for more details. The "proxy_ignore_client_abort" directive can be used if you want nginx to be compatible with such clients for some reason, see here: http://nginx.org/r/proxy_ignore_client_abort -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 12 14:37:56 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Jan 2015 17:37:56 +0300 Subject: Upstream Keepalive connection close In-Reply-To: <65ef672b0fa1d3e42ed5a00831878975.NginxMailingListEnglish@forum.nginx.org> References: <65ef672b0fa1d3e42ed5a00831878975.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150112143756.GK47350@mdounin.ru> Hello! On Tue, Jan 06, 2015 at 06:06:18AM -0500, Gona wrote: > I have Nginx server configured with couple of backend servers with keepalive > connections enabled. > > I am trying to understand what will be the Nginx's behaviour in case the > connection is closed by an upstream server legitimately when Nginx is trying > to send a new request exactly at the same time. In this race condition, does > Nginx re-try the request internally or does it return an error code? > > In case Nginx needs to be forced to retry, should I be using > proxy_next_upstream? My understanding is that this setting will make the > request re-tried on the next server in the upstream block. 
On the same note, > how do I force the retry on the failed server first to avoid cache misses. As of now, nginx will only retry the request to the next server, if any (as soon as "proxy_next_upstream" includes "error", which is the default). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 12 15:43:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Jan 2015 18:43:16 +0300 Subject: A build of nginx with static-linked OpenSSL fails on Mac In-Reply-To: References: Message-ID: <20150112154316.GM47350@mdounin.ru> Hello! On Fri, Jan 09, 2015 at 05:45:39AM -0500, cubicdaiya wrote: > Hello. > > A build of nginx with static-linked OpenSSL seems to fail on Mac. > > $ uname -ar > Darwin host 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT > 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64 > $ cd nginx-1.7.9 > $ ./configure \ > --with-http_ssl_module \ > --with-openssl=../openssl-1.0.1k > $ make > . > . > . > Operating system: i686-apple-darwinDarwin Kernel Version 14.0.0: Fri Sep 19 > 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 > WARNING! If you wish to build 64-bit library, then you have to > invoke './Configure darwin64-x86_64-cc' *manually*. > You have about 5 seconds to press Ctrl-C to abort. [...] > Though the rough patch below fixes failure, is there a better solution > expect dynamic-linking OpenSSL? Defining KERNEL_BITS=64 in the environment will convince recent enough OpenSSL to build 64-bit library instead. -- Maxim Dounin http://nginx.org/ From petros.fraser at gmail.com Mon Jan 12 15:58:16 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 12 Jan 2015 07:58:16 -0800 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: Message-ID: Sorry for taking so long to reply. I am running FreeBSD 10.1 RELEASE and it is Openssl version is OpenSSL 1.0.1j and I installed it from the ports tree (source). Regards On Tue, Jan 6, 2015 at 4:56 PM, Lukas Tribus wrote: > > Hi. Thanks for replying. 
> > I read it in two places. Here are the links. > > 1. > > > http://serverfault.com/questions/436737/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server > > 2. > > > http://w3facility.org/question/forcing-a-particular-ssl-protocol-for-an-nginx-proxying-server/ > > > > The full error is this: peer closed connection in SSL handshake while > > SSL handshaking, client: , server: request: > > "POST > > > /Microsoft-Server-ActiveSync?Cmd=Ping&User=%5C&DeviceId=SEC090121863242D&DeviceType=SAMSUNGSMT800 > > HTTP/1.1", upstream: > > "https://SERVER_IP:443/Microsoft-Server-ActiveSync?Cmd=Ping&User= > %5C&DeviceId=SAMSUNGSGHI337", > > host: "" > > > > produced with debugging enabled. > > > > > > If I run openssl s_client -connect > CONNECTED(00000003) > > 675508300:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake > > > failure:/usr/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_lib.c:184: > > [...] > > If I run openssl s_client -connect > works but it won't work from nginx even when I enable SSLv3. > > Ok, so you are running in this particular bug. However, its supposed to be > fixed a very long time ago, in openssl 1.0.1b. > > I guess are running with an nginx executable from a third party, that has > been linked to an older release of openssl. > > What OS/kernel/nginx/openssl release are you running exactly and how > did you install it (for example did you install openssl and nginx via > apt-get from original ubuntu repositoriers, or did you install from nginx > repository or from source)? > > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Jan 12 16:34:12 2015 From: nginx-forum at nginx.us (Nikhita) Date: Mon, 12 Jan 2015 11:34:12 -0500 Subject: Adding timer in nginx.c main Message-ID: <3c8316abb68d88a8d7daf66c92d2ce42.NginxMailingListEnglish@forum.nginx.org> Hi, I am adding a timer in nginx's main loop..... if (counter == -1) { ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "counter is null adding imer"); /* Registring Timer */ ngx_ipc_event.data = &dumb; ngx_ipc_event.handler = ngx_ipc_event_handler; ngx_ipc_event.log = cycle->log; if (!ngx_ipc_event.timer_set) { ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "Addding timer"); ngx_add_timer(&ngx_ipc_event, 3000); } } else { ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "Counter is not null %d",counter); } static void ngx_ipc_event_handler(ngx_event_t *ev) { ngx_log_error(NGX_LOG_EMERG, ev->log, 0, "Invoked event handler"); } My handler is not being triggered at all.......Although i get following logs in error.log 2015/01/12 21:56:48 [emerg] 22399#0: counter is null adding imer nginx: [emerg] counter is null adding imer 2015/01/12 21:56:48 [emerg] 22399#0: Addding timer nginx: [emerg] Addding timer Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256066,256066#msg-256066 From petros.fraser at gmail.com Mon Jan 12 17:21:20 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 12 Jan 2015 09:21:20 -0800 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: Message-ID: I did an ssldump and this is the conversation between both servers: New TCP connection #1: nginx.domain.net(46318) <-> backend.domain.net((443) TCP: nginx.domain.net((46318) -> backend.domain.net((443) Seq 54751863.(307) ACK 350741031 PUSH 1 1 1421082336.3009 (0.0012) C>SV3.1(302) Handshake ClientHello Version 3.3 random[32]= 62 5f 64 b9 b1 3f b7 22 17 f0 87 92 f1 0e e5 9f 5d c5 1b 66 c8 49 af 17 dc f7 5d b7 cc 7d 8d 49 cipher suites Unknown value 0xc030 Unknown value 0xc02c Unknown value 0xc028 Unknown value 0xc024 Unknown value 0xc014 
Unknown value 0xc00a Unknown value 0xa3 Unknown value 0x9f Unknown value 0x6b Unknown value 0x6a Unknown value 0x39 Unknown value 0x38 Unknown value 0x88 Unknown value 0x87 Unknown value 0xc032 Unknown value 0xc02e Unknown value 0xc02a Unknown value 0xc026 Unknown value 0xc00f Unknown value 0xc005 Unknown value 0x9d Unknown value 0x3d Unknown value 0x35 Unknown value 0x84 Unknown value 0xc02f Unknown value 0xc02b Unknown value 0xc027 Unknown value 0xc023 Unknown value 0xc013 Unknown value 0xc009 Unknown value 0xa2 Unknown value 0x9e TLS_DHE_DSS_WITH_NULL_SHA Unknown value 0x40 Unknown value 0x33 Unknown value 0x32 Unknown value 0x9a Unknown value 0x99 Unknown value 0x45 Unknown value 0x44 Unknown value 0xc031 Unknown value 0xc02d Unknown value 0xc029 Unknown value 0xc025 Unknown value 0xc00e Unknown value 0xc004 Unknown value 0x9c Unknown value 0x3c Unknown value 0x2f Unknown value 0x96 Unknown value 0x41 TLS_RSA_WITH_IDEA_CBC_SHA Unknown value 0xc011 Unknown value 0xc007 Unknown value 0xc00c Unknown value 0xc002 TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 Unknown value 0xc012 Unknown value 0xc008 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA Unknown value 0xc00d Unknown value 0xc003 TLS_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_RSA_WITH_DES_CBC_SHA TLS_DHE_DSS_WITH_DES_CBC_SHA TLS_RSA_WITH_DES_CBC_SHA TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA TLS_RSA_EXPORT_WITH_DES40_CBC_SHA TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 TLS_RSA_EXPORT_WITH_RC4_40_MD5 Unknown value 0xff compression methods NULL On Tue, Jan 6, 2015 at 5:12 PM, Lukas Tribus wrote: > > I guess are running with an nginx executable from a third party, that has > > been linked to an older release of openssl. > > Since you can reproduce it with openssl s_client, it probably is more > complicated than that. > > can you provide an ssldump of the failed connection attempt? 
> > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Mon Jan 12 17:55:54 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 12 Jan 2015 18:55:54 +0100 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: , , , , , Message-ID: > I did an ssldump and this is the conversation between both servers: This ssldump seems incomplete, there is no response. Please post the full ssldump. The bug is probably neither in openssl nor in nginx, but in the origin server (but we don't have the full handshake here). Since nginx 1.5.6, you can configure proxy_ssl_protocols and proxy_ssl_ciphers to configure backend ssl traffic, which may allow you to work around certain backend bugs. Certainly a lot of bogus ciphers are enabled by default in your setup (NULL, EXPORT, etc). If you have nginx >= 1.5.6, you can probably work around this by forcing SSLv3 (which I would not recommend at all): proxy_ssl_protocols SSLv3; But I would rather configure a sane cipher list with proxy_ssl_ciphers and try to get it working with that (see [1]). Try playing with "openssl s_client -cipher " to find a secure and working configuration. Regards, Lukas [1] https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations From mdounin at mdounin.ru Mon Jan 12 18:22:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Jan 2015 21:22:29 +0300 Subject: Adding timer in nginx.c main In-Reply-To: <3c8316abb68d88a8d7daf66c92d2ce42.NginxMailingListEnglish@forum.nginx.org> References: <3c8316abb68d88a8d7daf66c92d2ce42.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150112182229.GO47350@mdounin.ru> Hello! On Mon, Jan 12, 2015 at 11:34:12AM -0500, Nikhita wrote: > Hi, > > I am adding a timer in nginx's main loop.....
> > if (counter == -1) { > ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "counter is null adding > imer"); > /* Registring Timer */ > ngx_ipc_event.data = &dumb; > ngx_ipc_event.handler = ngx_ipc_event_handler; > ngx_ipc_event.log = cycle->log; > if (!ngx_ipc_event.timer_set) { > ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "Addding timer"); > ngx_add_timer(&ngx_ipc_event, 3000); > } > } else { > ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "Counter is not null > %d",counter); > } > > static void > ngx_ipc_event_handler(ngx_event_t *ev) > { > ngx_log_error(NGX_LOG_EMERG, ev->log, 0, "Invoked event handler"); > } > > > My handler is not being triggered at all.......Although i get following logs > in error.log > > 2015/01/12 21:56:48 [emerg] 22399#0: counter is null adding imer > nginx: [emerg] counter is null adding imer > 2015/01/12 21:56:48 [emerg] 22399#0: Addding timer > nginx: [emerg] Addding timer It looks like you are adding your timer to init cycle. This won't work as the init cycle is only used to read a configuration file, and destroyed afterwards. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jan 12 19:49:56 2015 From: nginx-forum at nginx.us (cubicdaiya) Date: Mon, 12 Jan 2015 14:49:56 -0500 Subject: A build of nginx with static-linked OpenSSL fails on Mac In-Reply-To: <20150112154316.GM47350@mdounin.ru> References: <20150112154316.GM47350@mdounin.ru> Message-ID: <0b610f9c8803633b6d73c2143af2d0ea.NginxMailingListEnglish@forum.nginx.org> Hello. Maxim Dounin Wrote: ------------------------------------------------------- > > Though the rough patch below fixes failure, is there a better > solution > > expect dynamic-linking OpenSSL? > > Defining KERNEL_BITS=64 in the environment will convince recent > enough OpenSSL to build 64-bit library instead. A build succeeded. Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256012,256071#msg-256071 From petros.fraser at gmail.com Mon Jan 12 21:18:51 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 12 Jan 2015 13:18:51 -0800 Subject: Bug re: openssl-1.0.1 In-Reply-To: References: Message-ID: You were absolutely correct. It is working now. I changed three things. I firstly forced TLS 1.0 then changed the directive ssl_protocols to proxy_ssl_protocols as you suggested. Finally, I restricted to Cipher list as you also mentioned. I had thought that I would leave all that out and tie things down when I got it working. I never thought being so liberal would prevent it from working in the first place. Thanks for your thoughts. Regards. On Mon, Jan 12, 2015 at 9:55 AM, Lukas Tribus wrote: > > I did an ssldump and this is the conversation between both servers: > > This ssldump seems incomplete, there is no response. Please post the > full ssldump. > > The bug is probably neither in openssl nor in nginx, but in the origin > server (but we don't have the full handshake here). > > > Since nginx 1.5.6, you can configure proxy_ssl_protocols and > proxy_ssl_ciphers to configure backend ssl traffic, which may > allows you to workaround certain backend bugs. > > Certainly a lot of bogus ciphers are enabled by default in your > setup (NULL, EXPORT, etc). > > If you have nginx>= 1.5.6, you can probably workaround this > by forcing SSLv3 (which I would not recommend at all): > proxy_ssl_protocols SSLv3; > > But I would rather configure a sane cipher list with > proxy_ssl_ciphers and see to get it working with it (see [1]). > > Try playing with "openssl s_client -cipher " to find > a secure and working configuration. 
> > > > > Regards, > > Lukas > > > [1] > https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kpariani at zimbra.com Mon Jan 12 21:48:32 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Mon, 12 Jan 2015 15:48:32 -0600 (CST) Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> <20150107231417.GX15670@daoine.org> <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> <20150108001520.GA15670@daoine.org> Message-ID: <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> Is there already a patch for this ? I am not completely sure of how to make the nginx resolver (in ngx_resolver.c) fallback to libresolv automatically and if this not trivial enough, i just might read the resolvers from /etc/resolv.conf and provide it to the 'resolver' directive. Any suggestions ? Thanks -Kunal ----- Original Message ----- From: "Yichun Zhang (agentzh)" To: nginx at nginx.org Sent: Wednesday, January 7, 2015 4:30:06 PM Subject: Re: resolver directive doesn't fallback to the system DNS resolver Hello! On Wed, Jan 7, 2015 at 4:15 PM, Francis Daly wrote: > (You could probably come up with a way to read /etc/resolv.conf when it > changes, and update the nginx config and reload it; but that's a "dynamic > reconfiguration" problem, not an "nginx dynamic reconfiguration" problem.) > Yeah, I think it's better for the nginx resolver to automatically use whatever is defined in /etc/resolv.conf when the user does not configure the "resolver" directive in her nginx.conf. I'm already tired of seeing all those user questions regarding the error message "no resolver defined to resolve ..." over the years. Alas. 
Regards, -agentzh _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jan 12 21:56:01 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 12 Jan 2015 16:56:01 -0500 Subject: auth_request vs auth_pam_service_name Message-ID: <10c86cb367fc6c74679022a85338db09.NginxMailingListEnglish@forum.nginx.org> Hi, I am a newbie at nginx and looking at its authentication capabilities. It appears that when using auth_request, every client request would still require an invokation to the auth_request fastcgi or proxy_pass server. Looking at auth_pam, I am not clear on how it works: 1. How does nginx pass the user credentials to the PAM module? 2. Would nginx remember that a user has been authenticated? Perhaps via a cookie that'd be returned by PAM? I looked at the nginx pam source code and didn't see it returning any cookie to nginx ... perhaps PAM does it by storing it on some context that's returned to NGINX? 3. Is the auth_pam directive mandatory? When I used it with locate / { auth_pam "Login Banner"; auth_required_service_name "nginx"; } where the PAM nginx file had 'auth required pam_unix.so" a user/password login page popped up. But even after I entered a valid user/pwd and hit , the same login page would pop up again, prompting for a user/pwd. I got the same behavior even after removing the auth_required_service_name statement. Can someone explain the behavior I experienced? 4. Is there a way for us to provide our own Login html page to the user? If yes, how do we do it and how would we pass the credentials to NGINX? 5. NGINX chooses the authentication method (local vs ldap vs rsa etc) based on the server/uri. 
For example, /www.example.org users would be authenticated via LDAP: location /example { auth_pam_service_name "authFile" } and the authFile would contain "auth required ldap.so" Is there a way to configure nginx to base the authentication method on some user configuration outside of nginx? Thank you for any clarifications! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256075,256075#msg-256075 From agentzh at gmail.com Mon Jan 12 22:19:08 2015 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 12 Jan 2015 14:19:08 -0800 Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> <20150107231417.GX15670@daoine.org> <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> <20150108001520.GA15670@daoine.org> <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> Message-ID: Hello! On Mon, Jan 12, 2015 at 1:48 PM, Kunal Pariani wrote: > Is there already a patch for this ? AFAIK, the Tengine fork has a patch for this. > I am not completely sure of how to make the nginx resolver (in ngx_resolver.c) fallback to libresolv automatically and if this not trivial enough, i just might read the resolvers from /etc/resolv.conf and provide it to the 'resolver' directive. Any suggestions ? > I was not talking about falling back to libresolv because it is very likely to block the nginx event loop or introduce extra OS threads for no good. I was talking about extracting nameserver addresses automatically from /etc/resolv.conf (or similar places in other exotic operating systems) and feeding them into nginx's current nonblocking resolver.
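That extraction step — reading the nameservers out of /etc/resolv.conf and handing them to the `resolver` directive at config-generation time — can be sketched in shell. This is only an illustration: the sample file and output paths are made up here so the sketch is self-contained.

```shell
# Build an nginx "resolver" directive from a resolv.conf-style file.
# A sample file is generated first so the sketch runs anywhere.
cat > /tmp/resolv.conf.sample <<'EOF'
# comment lines and "search" entries are ignored
nameserver 8.8.8.8
nameserver 8.8.4.4
search example.com
EOF

# Collect the nameserver addresses onto one space-separated line.
ns=$(awk '/^nameserver[ \t]/ { print $2 }' /tmp/resolv.conf.sample | tr '\n' ' ')
ns=${ns% }   # trim the trailing space

# Emit an include file that nginx.conf can pull in.
printf 'resolver %s;\n' "$ns" > /tmp/resolver.conf
cat /tmp/resolver.conf
# -> resolver 8.8.8.8 8.8.4.4;
```

nginx would then `include` the generated file inside the `http` block, and the script can be re-run (followed by a reload) whenever the system resolver configuration changes.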
And yes, for now, the latter workaround should be the simplest for you :) Regards, -agentzh From kpariani at zimbra.com Mon Jan 12 22:53:38 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Mon, 12 Jan 2015 16:53:38 -0600 (CST) Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: References: <1347758115.1009694.1420499092638.JavaMail.zimbra@zimbra.com> <20150107231417.GX15670@daoine.org> <416657328.1147705.1420673842245.JavaMail.zimbra@zimbra.com> <20150108001520.GA15670@daoine.org> <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> Message-ID: <2078578090.1778207.1421103218285.JavaMail.zimbra@zimbra.com> Thanks Yichun Zhang.. ----- Original Message ----- From: "Yichun Zhang (agentzh)" To: nginx at nginx.org Sent: Monday, January 12, 2015 2:19:08 PM Subject: Re: resolver directive doesn't fallback to the system DNS resolver Hello! On Mon, Jan 12, 2015 at 1:48 PM, Kunal Pariani wrote: > Is there already a patch for this ? AFAIK, the Tengine fork has a patch for this. > I am not completely sure of how to make the nginx resolver (in ngx_resolver.c) fallback to libresolv automatically and if this not trivial enough, i just might read the resolvers from /etc/resolv.conf and provide it to the 'resolver' directive. Any suggestions ? > I was not talking about falling back to libresolv because it is very likely to block the nginx event loop or introduce extra OS threads for no good. I was talking about extracing nameserver addresses automatically from /etc/resolv.conf (or similar places in other exotic operating systems) and feed them into nginx's current nonblocking resolver. 
And yes, for now, the latter workaround should be the simplest for you :) Regards, -agentzh _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From sto at iti.upv.es Tue Jan 13 08:40:21 2015 From: sto at iti.upv.es (Sergio Talens-Oliag) Date: Tue, 13 Jan 2015 09:40:21 +0100 Subject: auth_request vs auth_pam_service_name In-Reply-To: <10c86cb367fc6c74679022a85338db09.NginxMailingListEnglish@forum.nginx.org> References: <10c86cb367fc6c74679022a85338db09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150113084021.GB12357@ityrion.iti.upv.es> El Mon, Jan 12, 2015 at 04:56:01PM -0500, nginxuser100 va escriure: > Hi, I am a newbie at nginx and looking at its authentication capabilities. > It appears that when using auth_request, every client request would still > require an invokation to the auth_request fastcgi or proxy_pass server. > Looking at auth_pam, I am not clear on how it works: > > 1. How does nginx pass the user credentials to the PAM module? It gets them from the HTTP Basic Auth header and calls the PAM functions to pass them to the underlying modules in a non interactive mode. > 2. Would nginx remember that a user has been authenticated? Perhaps via a > cookie that'd be returned by PAM? I looked at the nginx pam source code and > didn't see it returning any cookie to nginx ... perhaps PAM does it by > storing it on some context that's returned to NGINX? When using HTTP Basic Auth the server does not remember users and passwords, usually the client does and the user and password are checked on each request... depending on the PAM modules you use they can do some caching, though. > 3. Is the auth_pam directive mandatory? 
When I used it with > locate / > { > auth_pam "Login Banner"; > auth_required_service_name "nginx"; > } if you want to use auth_pam you have to use the directive > where the PAM nginx file had 'auth required pam_unix.so" > a user/password login page popped up. But even after I entered a valid > user/pwd and hit , the same login page would pop up again, prompting for > a user/pwd. I got the same behavior even after removing the > auth_required_service_name statement. > Can someone explain the behavior I experienced? Yes, your problem is that the web server can't validate the users using pam_unix.so; quoting the ngx_http_auth_pam_module README: Note that the module runs as the web server user, so the PAM modules used must be able to authenticate the users without being root; that means that if you want to use the pam_unix.so module to authenticate users you need to let the web server user read the /etc/shadow file if that does not scare you (on Debian-like systems you can add the www-data user to the shadow group). I don't recommend letting the web server read your shadow file, but that is your call (I usually use PAM to validate against LDAP or user databases that don't need root access) > 4. Is there a way for us to provide our own Login html page to the user? If > yes, how do we do it and how would we pass the credentials to NGINX? It depends on your application and the method you plan to use, nothing NGINX specific here; HTTP Basic Auth is really basic, you should use other authentication mechanisms if you want something more powerful (on NGINX you can look into the Pubcookie module or implementing something using the Lua Module) > 5. NGINX chooses the authentication method (local vs ldap vs rsa etc) based > on the server/uri.
For example, /www.example.org users would be > authenticated via LDAP: location /example { auth_pam_service_name "authFile" > } and the authFile would contain "auth required ldap.so" > > Is there a way to configure nginx to base the authentication method on some > user configuration outside of nginx? If you want to handle HTTP basic auth with NGINX you have to configure it on the level you want (i.e. you can use a global auth method for a server and disable or change it on specific locations) or you can authenticate at the application level (not using nginx modules). That being said, you can implement a flexible authentication method with the PAM module using the pam_exec module and passing variables to it: http://web.iti.upv.es/~sto/nginx/ngx_http_auth_pam_module-1.3/README.html#pam_environment But that is probably not a good idea for production environments (PAM is blocking and pam_exec.so can be dangerous and resource intensive, as it forks a process for each authentication request); if you want to do something equivalent I'd rather do it using the auth_request module: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html and an authentication web app that behaves as you want with the parameters you pass to it (i.e. it uses a different AUTH schema depending on the URL you are trying to validate and implements some kind of caching). > Thank you for any clarifications! You're welcome, hope it helps. Greetings, Sergio. -- Sergio Talens-Oliag Key fingerprint = FF77 A16B 9D09 FC7B 6656 CFAD 261D E19A 578A 36F2 From nginx-forum at nginx.us Tue Jan 13 09:58:32 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 13 Jan 2015 04:58:32 -0500 Subject: resolver directive doesn't fallback to the system DNS resolver In-Reply-To: <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> References: <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> Message-ID: <309940a4bd62612709ed25fff5f275ca.NginxMailingListEnglish@forum.nginx.org> kunalvjti Wrote: ------------------------------------------------------- > Is there already a patch for this ? > I am not completely sure of how to make the nginx resolver (in > ngx_resolver.c) fallback to libresolv automatically and if this not Have a look at a Lua solution, not everything works yet, patches/ideas/feedback welcome :) http://nginx-win.ecsds.eu/devtest/EBLB_upstream_dev1.zip Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255961,256081#msg-256081 From nginx-forum at nginx.us Wed Jan 14 09:34:34 2015 From: nginx-forum at nginx.us (ramsoft75) Date: Wed, 14 Jan 2015 04:34:34 -0500 Subject: Problem with wildcard of domain in nginx and in https Message-ID: I have a domain.com and I can redirect to other subdomains but not domain.com in https, my configuration is the following : server { listen 80; server_name www.domain.com; rewrite ^/(.*) https://www.domain.com/$1 permanent; } server { listen 80; server_name m.domain.com; ## redirect http to https ## rewrite ^/(.*) https://m.domain.com/$1 permanent; } server { listen 443 ssl spdy; server_name www.domain.com; ... } server { listen 443 ssl spdy; server_name domain.com; ... } server { listen 443 ssl spdy; server_name www.domain.com; ...
} server { listen 443 ssl spdy; server_name m.domain.com; ... } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256099,256099#msg-256099 From nginx-forum at nginx.us Wed Jan 14 09:43:12 2015 From: nginx-forum at nginx.us (ramsoft75) Date: Wed, 14 Jan 2015 04:43:12 -0500 Subject: Problem with wildcard of domain in nginx and in https In-Reply-To: References: Message-ID: <6e3dfe866b035362e22f68a1745e3f17.NginxMailingListEnglish@forum.nginx.org> The problem is https://domain.com is not accessible Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256099,256100#msg-256100 From ar at xlrs.de Wed Jan 14 09:46:34 2015 From: ar at xlrs.de (Axel) Date: Wed, 14 Jan 2015 10:46:34 +0100 Subject: Problem with wildcard of domain in nginx and in https In-Reply-To: References: Message-ID: <1498258.feEAHk072v@lxrs> Hello, Am Mittwoch, 14. Januar 2015, 04:34:34 schrieb ramsoft75: > I have a domain.com and i can redirecto to other subdomains but not > domain.com in https, my configuration is the following : > > server { > listen 80; > server_name www.domain.com; > rewrite ^/(.*) https://www.domain.com/$1 permanent; > } > > server { > listen 80; > server_name m.domain.com; > > ## redirect http to https ## > rewrite ^/(.*) https://m.domain.com/$1 permanent; > } > > server { > listen 443 ssl spdy; > > server_name www.domain.com; > > ... > } > > server { > listen 443 ssl spdy; > > server_name domain.com; > > ... > } > > server { > listen 443 ssl spdy; > > server_name www.domain.com; > > ... > } > > server { > listen 443 ssl spdy; > > server_name m.domain.com; > > ... > } I can't see any redirect to domain.com. Perhaps that's missing? 
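A sketch of what that missing piece could look like, following the style of the posted config (the 443 block for domain.com that is already there would serve the redirected requests; certificate directives are omitted):

```nginx
server {
    listen 80;
    server_name domain.com;
    ## redirect the bare domain to https, like the other names ##
    rewrite ^/(.*) https://domain.com/$1 permanent;
}
```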
regards, Axel > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256099,256099#msg-256099 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Jan 14 10:30:18 2015 From: nginx-forum at nginx.us (ramsoft75) Date: Wed, 14 Jan 2015 05:30:18 -0500 Subject: Problem with wildcard of domain in nginx and in https In-Reply-To: <1498258.feEAHk072v@lxrs> References: <1498258.feEAHk072v@lxrs> Message-ID: Isn't this ? > server { > listen 443 ssl spdy; > > server_name domain.com; > > ... > } With that configuration if I go to https://domains.com it gives me an error "webpage not available" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256099,256102#msg-256102 From lists-nginx at swsystem.co.uk Wed Jan 14 10:42:41 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Wed, 14 Jan 2015 10:42:41 +0000 Subject: Problem with wildcard of domain in nginx and in https In-Reply-To: References: <1498258.feEAHk072v@lxrs> Message-ID: <842ef6792646a14a93407cb3b0f571f8@swsystem.co.uk> On 14/01/2015 10:30, ramsoft75 wrote: > Isn't this ? > >> server { >> listen 443 ssl spdy; >> >> server_name domain.com; >> >> ... >> } > > With that configuration if I go to https://domains.com it gives me an error > "webpage not available" > I'm guessing the above is a typo in the mail and that you're not actually trying domainS.com with domain.com configured. Does domain.com resolve the same as www.domain.com? When you ping domain.com does it return the same IP as when pinging www.domain.com? The ping actually working doesn't matter; this is testing that both resolve correctly. Steve.
From nginx-forum at nginx.us Wed Jan 14 10:46:37 2015 From: nginx-forum at nginx.us (ramsoft75) Date: Wed, 14 Jan 2015 05:46:37 -0500 Subject: Problem with wildcard of domain in nginx and in https In-Reply-To: <842ef6792646a14a93407cb3b0f571f8@swsystem.co.uk> References: <842ef6792646a14a93407cb3b0f571f8@swsystem.co.uk> Message-ID: <04e43aebbbe6ee1b0f60fede2741ef89.NginxMailingListEnglish@forum.nginx.org> I made a ping into www.domain.com and into domain.com, the Ip's are not the same. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256099,256104#msg-256104 From juriy.foboss at gmail.com Wed Jan 14 10:51:53 2015 From: juriy.foboss at gmail.com (Juriy Strashnov) Date: Wed, 14 Jan 2015 13:51:53 +0300 Subject: Problem with wildcard of domain in nginx and in https In-Reply-To: <04e43aebbbe6ee1b0f60fede2741ef89.NginxMailingListEnglish@forum.nginx.org> References: <842ef6792646a14a93407cb3b0f571f8@swsystem.co.uk> <04e43aebbbe6ee1b0f60fede2741ef89.NginxMailingListEnglish@forum.nginx.org> Message-ID: It is a DNS (not a Nginx) problem. It seems that you have different "IN A" records for domain.com & www.domain.com On Wed, Jan 14, 2015 at 1:46 PM, ramsoft75 wrote: > I made a ping into www.domain.com and into domain.com, the Ip's are not > the > same. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256099,256104#msg-256104 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Best regards, Juriy Strashnov Mob. +7 (953) 742-1550 E-mail: j.strashnov at me.com Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Jan 14 18:12:44 2015 From: nginx-forum at nginx.us (churchmf) Date: Wed, 14 Jan 2015 13:12:44 -0500 Subject: Nginx Stalls between RTMP Streams Message-ID: Hey NGINX folks, I'm experiencing some troubles using NGINX as an RTMP media server. I wish to present a continuous video as a live stream (with up to 60 second latency). However, due to some hardware constraints, I am unable to stream directly from the device. Instead, I can save out X amount of seconds from the device's buffer as an MP4. My solution has been to save X seconds of video from the device then stream that X seconds, rinse and repeat. This has been working mostly well, except for stalls (~20 seconds) in the stream between calls. I have searched far and wide for a solution to this, however most of the people experiencing this problem have the collection of videos before starting the stream and can simply concatenate them. My running theory is that when a stream finishes, it does an unpublish event in NGINX followed by a timeout period. This prevents the NGINX server from receiving the next publish until the timeout period has expired. I have tried adjusting nginx.conf values related to timeouts, respawns, restarts, and publish, but to no avail. Pseudocode: while true -> capture X seconds of video to "output.mp4" (this takes less than 300ms) -> stream the MP4 with FFMPEG (takes ~X seconds using -re) FFMPEG call: ffmpeg -re -i "output.mp4" -vcodec libx264 -preset veryfast -maxrate 2000k -bufsize 4000k -g 60 -acodec libmp3lame -b:a 128k -ac 2 -ar 44100 -f flv rtmp://MYSERVER/live/output I am using JWPlayer client side to watch the video stream, however I experience similar issues using VLC. I have been trying to figure this out for a few days and I would appreciate any insight an expert in video streaming and NGINX can give. Thank you!
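For reference, the pseudocode above written out as a shell sketch. This is illustrative only: `capture_clip` is a hypothetical stand-in for whatever saves X seconds from the device buffer, MYSERVER is the placeholder from the post, and the ffmpeg command is composed and printed by a function so the sketch runs without a device or RTMP server.

```shell
RTMP_URL="rtmp://MYSERVER/live/output"   # placeholder server from the post
CLIP="output.mp4"

# Compose the publish command from the post; -re paces input at the
# native frame rate, so streaming a clip takes roughly its duration.
stream_cmd() {
    echo "ffmpeg -re -i $1 -vcodec libx264 -preset veryfast" \
         "-maxrate 2000k -bufsize 4000k -g 60" \
         "-acodec libmp3lame -b:a 128k -ac 2 -ar 44100" \
         "-f flv $RTMP_URL"
}

# The capture/publish loop (commented out: needs the device and a
# running RTMP server; capture_clip is hypothetical):
# while true; do
#     capture_clip "$CLIP"       # save X seconds from the buffer, <300 ms
#     $(stream_cmd "$CLIP")      # stream it, ~X seconds
# done

stream_cmd "$CLIP"   # print the composed ffmpeg command
```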
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256108,256108#msg-256108 From nginx-forum at nginx.us Thu Jan 15 01:34:33 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Wed, 14 Jan 2015 20:34:33 -0500 Subject: auth_request vs auth_pam_service_name In-Reply-To: <10c86cb367fc6c74679022a85338db09.NginxMailingListEnglish@forum.nginx.org> References: <10c86cb367fc6c74679022a85338db09.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks Sergio, that was helpful! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256075,256109#msg-256109 From nginx-forum at nginx.us Thu Jan 15 08:11:23 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Thu, 15 Jan 2015 03:11:23 -0500 Subject: How to return a cookie to a client when auth_request is used? Message-ID: <06bc519452d69124c817676844cf030c.NginxMailingListEnglish@forum.nginx.org> Hi, Question 1: I would like to have an FastCGI authentication app assign a cookie to a client, and the Fast Auth app is called using auth_request. The steps are as follows: 1. Client sends a request 2. NGINX auth_request forwards the request to a FastCGI app to authenticate. 3. The authentication FastCGI app creates a cookie, using "Set-Cookie: name=value". I would like this value to be returned to the client. 4. Assuming the authentication was successful, NGINX then forwards the request to an upstream FastCGI app which sends a response to the client. The HTTP header should contain Set-Cookie: name=value How do I get NGINX to include the cookie in the header that gets forwarded to the upstream module so the final response to the client contains the cookie? I tried using auth_request_set but got location / { auth_request /auth; include fastcgi_params; fastcgi_param HTTP_COOKIE $http_cookie; #auth_request_set $http_cookie "test"; <======= I tried this just to see how auth_request_set works. 
fastcgi_pass 127.0.0.1:9000; } # new fastcgi to set the cookie location /auth { include fastcgi_params; fastcgi_pass 127.0.0.1:9010; } Question 2. I also tried auth_request_set $http_cookie "test"; to see how auth_request_set works. NGINX gave me this error at start time: nginx: [emerg] the duplicate "http_cookie" variable in /usr/local/nginx-1.7.9/conf/nginxWat.conf:25 Why did I get such an error? Question 3. Can someone give me a pointer to a list of NGINX FastCGI supported env variables such as $http_cookie / HTTP_COOKIE? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256110,256110#msg-256110 From jacklinkers at gmail.com Thu Jan 15 10:57:19 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Thu, 15 Jan 2015 11:57:19 +0100 Subject: Restrict URL access with pwd Message-ID: Hi all, How can I restrict access to my website with a password, like with a .htaccess file, please? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 15 11:20:02 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 15 Jan 2015 06:20:02 -0500 Subject: Restrict URL access with pwd In-Reply-To: References: Message-ID: For example: http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256114,256115#msg-256115 From mdounin at mdounin.ru Thu Jan 15 13:16:11 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Jan 2015 16:16:11 +0300 Subject: How to return a cookie to a client when auth_request is used? In-Reply-To: <06bc519452d69124c817676844cf030c.NginxMailingListEnglish@forum.nginx.org> References: <06bc519452d69124c817676844cf030c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150115131611.GO79857@mdounin.ru> Hello! 
On Thu, Jan 15, 2015 at 03:11:23AM -0500, nginxuser100 wrote: > Hi, > > Question 1: > > I would like to have an FastCGI authentication app assign a cookie to a > client, and the Fast Auth app is called using auth_request. The steps are as > follows: > > 1. Client sends a request > 2. NGINX auth_request forwards the request to a FastCGI app to > authenticate. > 3. The authentication FastCGI app creates a cookie, using "Set-Cookie: > name=value". I would like this value to be returned to the client. > 4. Assuming the authentication was successful, NGINX then forwards the > request to an upstream FastCGI app which sends a response to the client. The > HTTP header should contain Set-Cookie: name=value > > How do I get NGINX to include the cookie in the header that gets forwarded > to the upstream module so the final response to the client contains the > cookie? I tried using auth_request_set but got You have to save the header value returned by the subrequest to a variable with auth_request_set, and then add the header to a response generated using the "add_header" directive. Something like this should work: location / { auth_request /auth; auth_request_set $saved_set_cookie $upstream_http_set_cookie; add_header Set-Cookie $saved_set_cookie; ... } [...] > Question 2. I also tried > auth_request_set $http_cookie "test"; > to see how auth_request_set works. NGINX gave me this error at start > time > > nginx: [emerg] the duplicate "http_cookie" variable in > /usr/local/nginx-1.7.9/conf/nginxWat.conf:25 > > Why did get such error? The $http_* variables are headers of a request, and you can't redefine them. Hence the error. > Question 3. Can someone give me a pointer to a list of NGINX FastCGI > supported env variables such as $http_cookie / HTTP_COOKIE? All HTTP request headers are passed to FastCGI application as HTTP_* params, and will be available to an application as coresponding environment variables. 
Additional params are passed as configured in your fastcgi_params file. -- Maxim Dounin http://nginx.org/ From Sebastian.Stabbert at heg.com Thu Jan 15 14:26:41 2015 From: Sebastian.Stabbert at heg.com (Sebastian Stabbert) Date: Thu, 15 Jan 2015 15:26:41 +0100 Subject: Rsync access to Nginx repositories Message-ID: <58FDF30E-993E-46A4-AE72-0D8DBDCEA3B3@heg.com> Hey everyone, We would like to use the debian repo from nginx.org on about 15.000 servers and so wanted to setup a mirror at ftp.hosteurope.de( ftp://ftp.hosteurope.de/mirror/ ), like we do with debian and many other FOSS-projects. Unfortunately there does not seem to be rsync access to any of the repos; Is there a way to get access for us? Please let me know if you need additional information. Thanks, Sebastian -- Sebastian Stabbert Systemadministrator Host Europe GmbH is a company of HEG Telefon: +49 2203 1045-7362 ----------------------------------------------------------------------- Host Europe GmbH - http://www.hosteurope.de Welserstra?e 14 - 51149 K?ln - Germany HRB 28495 Amtsgericht K?ln Gesch?ftsf?hrer: Tobias Mohr, Patrick Pulverm?ller -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 204 bytes Desc: Message signed with OpenPGP using GPGMail URL: From nginx-forum at nginx.us Thu Jan 15 15:53:40 2015 From: nginx-forum at nginx.us (Gona) Date: Thu, 15 Jan 2015 10:53:40 -0500 Subject: Upstream Keepalive connection close In-Reply-To: <20150112143756.GK47350@mdounin.ru> References: <20150112143756.GK47350@mdounin.ru> Message-ID: Hi Maxim, Thanks for the response. So my understanding from this is - the race condition is possible and when it happens with one server in the upstream block or "proxy_next_upstream" set to OFF, Nginx will return back an error without retrying. Is this right? 
Thanks, Gopala Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255966,256128#msg-256128 From nginx-forum at nginx.us Thu Jan 15 16:45:54 2015 From: nginx-forum at nginx.us (Nikhita) Date: Thu, 15 Jan 2015 11:45:54 -0500 Subject: Adding timer in nginx.c main In-Reply-To: <20150112182229.GO47350@mdounin.ru> References: <20150112182229.GO47350@mdounin.ru> Message-ID: Hi Maxim, I shifted the timer to ngx_epoll_module.c and called it from ngx_epoll_init. My handler is still not getting invoked... What would be the right way of adding a timer ? Writing a new module all together ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256066,256129#msg-256129 From mdounin at mdounin.ru Thu Jan 15 17:07:00 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Jan 2015 20:07:00 +0300 Subject: Upstream Keepalive connection close In-Reply-To: References: <20150112143756.GK47350@mdounin.ru> Message-ID: <20150115170659.GX79857@mdounin.ru> Hello! On Thu, Jan 15, 2015 at 10:53:40AM -0500, Gona wrote: > So my understanding from this is - the race condition is possible and when > it happens with one server in the upstream block or "proxy_next_upstream" > set to OFF, Nginx will return back an error without retrying. Is this > right? Yes, in theory this is possible. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jan 15 17:16:29 2015 From: nginx-forum at nginx.us (sandeepkolla99) Date: Thu, 15 Jan 2015 12:16:29 -0500 Subject: Getting expiration date of client certificate Message-ID: Hi, I wanted to extract client certificate expiration date in nginx.conf. I have the below map command to extract CN name of client certificate. Do you know if any variables/directives nginx supports to extract client certificate expiration date? 
map $ssl_client_s_dn $ssl_client_s_dn_cn { default ""; ~/CN=(?[^/]+) $CN; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256133,256133#msg-256133 From gsomlo at gmail.com Thu Jan 15 19:50:02 2015 From: gsomlo at gmail.com (Gabriel L. Somlo) Date: Thu, 15 Jan 2015 14:50:02 -0500 Subject: Dynamic/Wildcard SSL certificates with SNI ? Message-ID: <20150115195002.GC1744@HEDWIG.INI.CMU.EDU> Hi, I'm working on a "Web simulator" designed to serve a large number of web sites on a private, self-contained network, where I'm also in control of issuing SSL certificates. The relevant bits of my nginx.conf look like this: server { listen 80 default_server; server_name $http_host; root /var/www/vservers/$http_host; index index.html index.htm; } ssl_certificate_key /var/www/vserver_certs/vserver.key; server { listen 443 default_server; ssl on; ssl_certificate /var/www/vserver_certs/vserver.cer; server_name $http_host; root /var/www/vservers/$http_host; index index_html index.htm; } There is no consistency across the set of vserver host names (and therefore not much to be gained by using wildcards in the certificate common or alt name fields). Right now, I'm trying to cram all of my vserver host names into the alt_names field of the "vserver.cer" certificate, but I'm bumping up against the 16k limit of the cert file size, after which browsers start rejecting it with an error. 
I'd like to generate per-vserver certs, and dynamically select the correct certificate file based on the SSI-negotiated server name, like so: server { listen 443 default_server; ssl on; ssl_certificate /var/www/vserver_certs/$ssl_server_name.cer; server_name $http_host; root /var/www/vservers/$http_host; index index_html index.htm; } but nginx doesn't seem to currently support this (it wants to open the certificate file at startup time, and doesn't appear to allow variable expansion in the cert file name :( The alternative would be to add an https server block for each vserver: server { listen 443; ssl_certificate /var/www/vserver_certs/vserver1.foo.com.cer; server_name vserver1.foo.com; root /var/www/vservers/vserver1.foo.com; index index_html index.htm; } server { listen 443; ssl_certificate /var/www/vserver_certs/vserver2.bar.org.cer; server_name vserver2.bar.org; root /var/www/vservers/vserver2.bar.org; index index_html index.htm; } ... and so on, relying on SNI to match the correct block. But this could get out of hand really fast, as I expect to be dealing with several *thousand* vservers. Am I missing something when attempting to dynamically use $ssl_server_name to locate the appropriate certificate file ? If that's not currently possible, is this something of interest to the rest of the community, and would it be worth bringing up on the development mailing list ? Thanks much for any help, pointers, ideas, etc! --Gabriel From rainer at ultra-secure.de Thu Jan 15 20:13:21 2015 From: rainer at ultra-secure.de (Rainer Duffner) Date: Thu, 15 Jan 2015 21:13:21 +0100 Subject: Dynamic/Wildcard SSL certificates with SNI ? In-Reply-To: <20150115195002.GC1744@HEDWIG.INI.CMU.EDU> References: <20150115195002.GC1744@HEDWIG.INI.CMU.EDU> Message-ID: <58CF46BE-3E19-4D8F-B7DF-B5C65B313FE7@ultra-secure.de> > Am 15.01.2015 um 20:50 schrieb Gabriel L. 
Somlo : > > Hi, > > I'm working on a "Web simulator" designed to serve a large number of > web sites on a private, self-contained network, where I'm also in > control of issuing SSL certificates. > > The relevant bits of my nginx.conf look like this: > > server { > listen 80 default_server; > server_name $http_host; > root /var/www/vservers/$http_host; > index index.html index.htm; > } > > ssl_certificate_key /var/www/vserver_certs/vserver.key; > > server { > listen 443 default_server; > ssl on; > ssl_certificate /var/www/vserver_certs/vserver.cer; > server_name $http_host; > root /var/www/vservers/$http_host; > index index_html index.htm; > } > > > There is no consistency across the set of vserver host names (and > therefore not much to be gained by using wildcards in the certificate > common or alt name fields). Just issue a certificate for *.*.* and always serve that. At least, until the CAB-forum decides this is a not a good idea and stops browsers from accepting it. I think the above certificate should still be legal, but I?m not 100% sure. From gsomlo at gmail.com Fri Jan 16 16:26:21 2015 From: gsomlo at gmail.com (Gabriel L. Somlo) Date: Fri, 16 Jan 2015 11:26:21 -0500 Subject: Dynamic/Wildcard SSL certificates with SNI ? In-Reply-To: <58CF46BE-3E19-4D8F-B7DF-B5C65B313FE7@ultra-secure.de> Message-ID: <20150116162620.GF1744@HEDWIG.INI.CMU.EDU> On Thu, 15 Jan 2015 21:13:21, Rainer Duffner wrote: > > Am 15.01.2015 um 20:50 schrieb Gabriel L. Somlo : > > > > There is no consistency across the set of vserver host names (and > > therefore not much to be gained by using wildcards in the certificate > > common or alt name fields). > > Just issue a certificate for *.*.* and always serve that. > > At least, until the CAB-forum decides this is a not a good idea and > stops browsers from accepting it. > I think the above certificate should still be legal, but I?m not 100% sure. 
I'm afraid it's already too late for that :( Since some of my vserver names look like "foo.com" and others like "foo.bar.org", I already tried (using alt_names): *.*, *.*.* and *.com, *.*.com, *.org, *.*.org, *.net, *.*.net both forms causing warning popups on any recent (windows7-era) browser. Apparently, the current policy in effect is not to accept tld-wide wildcards, much less wildcards across ALL tlds ([*.]*.*). Since I'm already mass-scripting the csr generation and cert signing for each vserver, it should be really simple to script generating the corresponding nginx config file, but allowing demand-driven, request-time loading of certificate files would work around that enormous ugliness :) Thanks, --Gabriel From luky-37 at hotmail.com Fri Jan 16 16:35:14 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 16 Jan 2015 17:35:14 +0100 Subject: Dynamic/Wildcard SSL certificates with SNI ? In-Reply-To: <20150116162620.GF1744@HEDWIG.INI.CMU.EDU> References: <58CF46BE-3E19-4D8F-B7DF-B5C65B313FE7@ultra-secure.de>, <20150116162620.GF1744@HEDWIG.INI.CMU.EDU> Message-ID: > allowing demand-driven, request-time?loading of certificate?files I don't think thats possible with openssl, especially in a event-driven application like nginx. That having said, haproxy has a nice functionality: you can just point to one or more directories and haproxy will load every single certificate in that directory for you (at startup), and it will handle those certificates based on SNI. 
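[For reference, the haproxy directory approach mentioned above looks roughly like this in haproxy.cfg (a sketch; backend name is a placeholder, and haproxy expects each file in the directory to be a PEM with the private key and certificate concatenated):

frontend https-in
    # haproxy loads every certificate in the directory at startup and
    # selects the matching one per connection based on SNI
    bind *:443 ssl crt /etc/haproxy/certs/
    default_backend webservers
]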
Lukas From kpariani at zimbra.com Fri Jan 16 19:17:23 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Fri, 16 Jan 2015 13:17:23 -0600 (CST) Subject: monitor upstream IP addr changes without using nginx's resolver In-Reply-To: <309940a4bd62612709ed25fff5f275ca.NginxMailingListEnglish@forum.nginx.org> References: <143651448.1774074.1421099312789.JavaMail.zimbra@zimbra.com> <309940a4bd62612709ed25fff5f275ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1243690343.2168388.1421435843707.JavaMail.zimbra@zimbra.com> Sorry am new to Lua but can you plz explain how this upstream status & control will help with this issue. How can i query for the upstream ip after every certain time interval and reconfigure nginx if there's a change detected ? The reason for not using nginx's resolver here is that i have to parse the resolv.conf to get the nameservers which is not a clean solution as there are still some linux distros that don't use /etc/resolv.conf. This should be OS & implementation independent. How to make this work everywhere ? Thanks -Kunal From: "itpp2012" To: nginx at nginx.org Sent: Tuesday, January 13, 2015 1:58:32 AM Subject: Re: resolver directive doesn't fallback to the system DNS resolver kunalvjti Wrote: ------------------------------------------------------- > Is there already a patch for this ? > I am not completely sure of how to make the nginx resolver (in > ngx_resolver.c) fallback to libresolv automatically and if this not Have a look at a Lua solution, not everything works yet, patches/ideas/feedback welcome :) http://nginx-win.ecsds.eu/devtest/EBLB_upstream_dev1.zip Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255961,256081#msg-256081 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Jan 16 19:59:35 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 16 Jan 2015 14:59:35 -0500 Subject: monitor upstream IP addr changes without using nginx's resolver In-Reply-To: <1243690343.2168388.1421435843707.JavaMail.zimbra@zimbra.com> References: <1243690343.2168388.1421435843707.JavaMail.zimbra@zimbra.com> Message-ID: <847f772f553e0f9db91aaf0b9f5ca3dc.NginxMailingListEnglish@forum.nginx.org> kunalvjti Wrote: ------------------------------------------------------- > Sorry am new to Lua but can you plz explain how this upstream status & > control will help with this issue. How can i query for the upstream ip Use the ngxlua --add option (https://github.com/chaoslawful/lua-nginx-module) or openresty, then add https://github.com/agentzh/lua-upstream-nginx-module replace the .c file from my archive and compile. In the same archive you will find an example full working nginx config file and 2 lua files giving you upstream access via a GUI or via Curl. > after every certain time interval and reconfigure nginx if there's a > change detected ? This can be done in Lua or you can let an external monitoring tool trigger a script firing a curl command. > This should be OS & implementation independent. How to make this work > everywhere ? What I've made so far works for any OS. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255961,256150#msg-256150 From petros.fraser at gmail.com Fri Jan 16 20:38:42 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Fri, 16 Jan 2015 12:38:42 -0800 Subject: Ipad autodiscovery Message-ID: Hi All I have my owa reverse proxy working. For some strange reason, all ipads cannot now sync. Android is fine. I am seeing an error that they are trying to connect using autodiscovery and I am not seeing a way to disable this in IOS. Is there a way then to proxy the autodiscovery attempt in nginx? Regards -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Jan 16 21:15:52 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 16 Jan 2015 16:15:52 -0500 Subject: Ipad autodiscovery In-Reply-To: References: Message-ID: <769ba41a1c6e32a88b3870815bd146c6.NginxMailingListEnglish@forum.nginx.org> See here: http://www.experts-exchange.com/Security/Operating_Systems_Security/Windows_Security/Q_28538115.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256151,256152#msg-256152 From nginx-forum at nginx.us Fri Jan 16 23:56:39 2015 From: nginx-forum at nginx.us (tommygunner) Date: Fri, 16 Jan 2015 18:56:39 -0500 Subject: Limiting gzip_static to two directories. Message-ID: I have gzip enabled in Nginx as well as gzip_static. I am trying to limit gzip_static to just one or two sections. There are pre-compressed files inside the directory: media/po_compressor/ along with sub directories of this such as: media/po_compressor/4/js media/po_compressor/4/css Here is what I have below in nginx. What is the best way to look inside directory and sub-directories using location entry? ## Gzip Static module to compress CSS, JS location /media/po_compressor/ { gzip_static on; expires 365d; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } ## Compression gzip on; gzip_buffers 16 8k; gzip_comp_level 4; gzip_http_version 1.0; gzip_min_length 1280; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/bmp; gzip_vary on; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256154,256154#msg-256154 From nginx-forum at nginx.us Sat Jan 17 00:14:10 2015 From: nginx-forum at nginx.us (tommygunner) Date: Fri, 16 Jan 2015 19:14:10 -0500 Subject: Limiting gzip_static to two directories. 
In-Reply-To: References: Message-ID: <7781b64304d1d22a4654ce2a5711dd12.NginxMailingListEnglish@forum.nginx.org> changing location line to: location ^~ /media/po_compressor/ { should allow it to look for all files within this directory and sub-directories, right? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256154,256155#msg-256155 From reallfqq-nginx at yahoo.fr Sat Jan 17 10:38:56 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 17 Jan 2015 11:38:56 +0100 Subject: Limiting gzip_static to two directories. In-Reply-To: <7781b64304d1d22a4654ce2a5711dd12.NginxMailingListEnglish@forum.nginx.org> References: <7781b64304d1d22a4654ce2a5711dd12.NginxMailingListEnglish@forum.nginx.org> Message-ID: According to the location directive documentation, nginx will match one location block only. nginx will first match the longest prefix location to your request. If that longest prefix is the one you provided, then regular expressions won't be checked, as you used the special modifier doing that. If that longest prefix is another location rule, then nginx will remember it, search for regex location matches, and stop at the first one matching or revert back to the longest prefix location found earlier. So, if you have no longer prefix location matching those URIs, then the files will be served by it. --- *B. R.* On Sat, Jan 17, 2015 at 1:14 AM, tommygunner wrote: > changing location line to: > > location ^~ /media/po_compressor/ { > > should allow it to look for all files within this directory and > sub-directories, right? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256154,256155#msg-256155 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
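[The selection order described above can be modeled in a few lines of Python. This is a simplified sketch covering only prefix, ^~ prefix, and regex locations; real nginx also handles exact (=) and nested locations:

```python
import re

# Simplified model of nginx location selection: the longest matching prefix
# is remembered; if it was declared with ^~, regexes are skipped entirely;
# otherwise the first matching regex, in declaration order, overrides it.
def select_location(uri, locations):
    best = None  # (prefix_length, block, declared_with_caret_tilde)
    for kind, pat, block in locations:
        if kind in ("prefix", "^~") and uri.startswith(pat):
            if best is None or len(pat) > best[0]:
                best = (len(pat), block, kind == "^~")
    if best and best[2]:
        return best[1]          # ^~ longest prefix: regexes are not consulted
    for kind, pat, block in locations:
        if kind == "~" and re.search(pat, uri):
            return block        # first matching regex wins
    return best[1] if best else None

locations = [
    ("^~", "/media/po_compressor/", "gzip_static block"),
    ("prefix", "/", "default block"),
    ("~", r"\.css$", "css regex block"),
]
print(select_location("/media/po_compressor/4/css/x.css", locations))  # gzip_static block
print(select_location("/style/x.css", locations))                      # css regex block
```

So with the ^~ modifier, requests under /media/po_compressor/ never reach any regex location, which is exactly why it works for whole sub-trees.]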
URL: From nginx-forum at nginx.us Sat Jan 17 16:23:42 2015 From: nginx-forum at nginx.us (ntamblyn) Date: Sat, 17 Jan 2015 11:23:42 -0500 Subject: Nginx auth users Message-ID: Hello, I am slowly migrating from Apache to Nginx. So far everything is running smoothly, and I have to say that from all the benchmark tests Nginx has improved performance by 60%, which is incredible, and I haven't even dived into performance tuning. Anyway, we have numerous websites that require a list of specific users to be able to access them; normally we handle this with a htpasswd file and "require user" on Apache. We have hundreds of different users, and the other solutions I have seen involve having separate files for each website with those users in those files. This seems highly impractical to me, as some users have access to multiple websites, so these users would have to be duplicated within each of these separate user files. So I was wondering: is there another workaround where I don't have to strip every user out of the htpasswd file and enter them into separate files? Sure, I can script this in Perl, but there must be an easier option that sticks with one user file. Any response would be appreciated. Thank you for your time in reading this post. 
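[Since nginx's auth_basic has no per-user "require" equivalent, one practical route is to keep a single master list and generate the per-site htpasswd files automatically. A sketch of that generator; the master format "user:hash:site1,site2" is invented here for illustration, not an nginx convention (the hash field is whatever htpasswd produced, and contains no colons):

```python
import os
import tempfile

# Split one master user list into per-site htpasswd files, so each user is
# maintained in exactly one place even when they belong to several sites.
def split_htpasswd(master_lines, outdir):
    per_site = {}
    for line in master_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        user, pwhash, sites = line.split(":", 2)
        for site in sites.split(","):
            per_site.setdefault(site.strip(), []).append(user + ":" + pwhash)
    os.makedirs(outdir, exist_ok=True)
    for site, entries in per_site.items():
        with open(os.path.join(outdir, site + ".htpasswd"), "w") as f:
            f.write("\n".join(entries) + "\n")
    return per_site

outdir = tempfile.mkdtemp()
sites = split_htpasswd(
    ["alice:HASH1:site1,site2", "bob:HASH2:site2"], outdir)
print(sites["site2"])  # ['alice:HASH1', 'bob:HASH2']
```

Each server block then points auth_basic_user_file at its generated file; rerun the script (e.g. from cron or a deploy hook) whenever the master list changes.]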
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256161,256161#msg-256161 From nginx-forum at nginx.us Sat Jan 17 18:27:57 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 17 Jan 2015 13:27:57 -0500 Subject: [ANN] Windows nginx 1.7.10.1 Gryphon Message-ID: 19:14 17-1-2015 nginx 1.7.10.1 Gryphon Based on nginx 1.7.10 (15-1-2015, last changeset 5964:0a198a517eaf) with; + reverted changeset 5962:727177743c3c (causing segfaults) + set-misc-nginx-module v0.27 (upgraded 14-1-2015) + HttpSubsModule v0.6.4 (upgraded 14-1-2015) + lua-nginx-module v0.9.13 (upgraded 14-1-2015) + prove05.zip (onsite), a Windows Test_Suite (updated 16-1-2015) + See http://nginx-win.ecsds.eu/devtest/EBLB_upstream_dev1.zip for a partly working example of managing backends + reverted changesets 5960:e9effef98874 and 5959:f7584d7c0ccb (breaks too many things, needs re-engineering) + Openssl-1.0.1l (CVE-2014-3571, CVE-2015-0206, CVE-2014-3569, CVE-2014-3572, CVE-2015-0204, CVE-2015-0205, CVE-2014-8275, CVE-2014-3570) + cache_purge v2.3 (upgraded 30-12-2014) + Naxsi WAF v0.53-3 (upgraded 30-12-2014) + ngx_signal_process, http://forum.nginx.org/read.php?29,255612 + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256162,256162#msg-256162 From petros.fraser at gmail.com Sat Jan 17 20:45:52 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Sat, 17 Jan 2015 15:45:52 -0500 Subject: Ipad autodiscovery In-Reply-To: <769ba41a1c6e32a88b3870815bd146c6.NginxMailingListEnglish@forum.nginx.org> References: <769ba41a1c6e32a88b3870815bd146c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for the pointer. I took a look at the solution. 
From what I gather though, that user used his solution for all clients. My problem that I am trying to figure out is that the Android Phones work just fine. I guess I was just trying to see if anyone could help me figure why only the later model iphones and ipads refused to work. Even the iphone 4 works just great. On Fri, Jan 16, 2015 at 4:15 PM, itpp2012 wrote: > See here: > > http://www.experts-exchange.com/Security/Operating_Systems_Security/Windows_Security/Q_28538115.html > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256151,256152#msg-256152 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Jan 17 20:56:10 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 17 Jan 2015 15:56:10 -0500 Subject: Ipad autodiscovery In-Reply-To: References: Message-ID: <8aff8fec25dcd067907de7e095e021d5.NginxMailingListEnglish@forum.nginx.org> I might have read it wrong but it seems that you could create fake files the device is looking for or just return a 200 for such requests. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256151,256165#msg-256165 From nginx-forum at nginx.us Sun Jan 18 04:15:34 2015 From: nginx-forum at nginx.us (cxfcxf) Date: Sat, 17 Jan 2015 23:15:34 -0500 Subject: strange behavior for cache manager Message-ID: <7131f249e636925c63e69bdc7c4f6187.NginxMailingListEnglish@forum.nginx.org> Hi, we are current running nginx version 1.7.6, we use nginx primarily as a reverse proxy on linux. we have encountered a strange behavior for nginx cache manager, everything is fine after restart nginx, the cache manage periodically spawn new process to check the meta data and honor the max cache size we are setting. 
but after running for like 6 hours, it stopped honoring the max cache size we are setting and started to go over it, eventually reaching full disk size. No matter what we do (reduce the cache size to half of the disk, reduce the active time for the cache), once it goes over it, it will just keep growing. I did some strace on the cache manager, and it just shows some normal epoll_wait, but nothing ever gets unlinked. The process spawns the cache manager perfectly fine. PS. Each time I restart nginx, after the cache loader process completes, strace on the cache manager shows it starting to unlink files, and everything goes back to normal. The cache manager also starts to control the cache again and keeps the total cache size under the max cache size we set. After a certain period of time, it fails again. What could potentially cause this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256168,256168#msg-256168 From nginx-forum at nginx.us Sun Jan 18 14:38:23 2015 From: nginx-forum at nginx.us (Vetsolution.be) Date: Sun, 18 Jan 2015 09:38:23 -0500 Subject: udp log graylog Message-ID: <1fad6d541a741b290b9beb77818866e1.NginxMailingListEnglish@forum.nginx.org> Hi, I'm setting up logging from my Nginx server to a Graylog server. I followed this short guide https://www.graylog2.org/content-packs/547b5021e4b0a06d87eea01e . But nothing works... My iptables policies are all ACCEPT, and when I run a udp tcpdump both on the nginx server and on graylog, nothing appears... Any idea? 
This is what I changed in the http section of /etc/nginx.conf log_format graylog2_format '$remote_addr - $remote_user [$time_local] $ # replace the hostnames with the IP or hostname of your Graylog2 server access_log syslog:server=192.168.15.225:12301 graylog2_format; error_log syslog:server=192.168.15.225:12302; # access_log /var/log/nginx/access.log; # error_log /var/log/nginx/error.log; types { text/plain log; } Regards, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256177,256177#msg-256177 From nginx-forum at nginx.us Sun Jan 18 16:57:06 2015 From: nginx-forum at nginx.us (Vetsolution.be) Date: Sun, 18 Jan 2015 11:57:06 -0500 Subject: udp log graylog In-Reply-To: <1fad6d541a741b290b9beb77818866e1.NginxMailingListEnglish@forum.nginx.org> References: <1fad6d541a741b290b9beb77818866e1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Fixed! Debian installs nginx 1.2.1; I had to change the package list and add the nginx main... repo Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256177,256184#msg-256184 From nginx-forum at nginx.us Mon Jan 19 01:50:05 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Sun, 18 Jan 2015 20:50:05 -0500 Subject: How to return a cookie to a client when auth_request is used? In-Reply-To: <20150115131611.GO79857@mdounin.ru> References: <20150115131611.GO79857@mdounin.ru> Message-ID: Thank you Maxim, it is much better in the sense that I am not getting an error at NGINX start time, but the FastCGI back-end server listening at port 9000 does not seem to get the cookie set by the FastCGI auth server, nor any data from a POST request body or data generated by the FastCGI auth app. On a separate note, a GET request would get a response, but a POST request would get an Internal Error. Also, after a few successful GET requests, I sometimes would get an incomplete response, as if it was waiting for some input. Any idea what I might be missing? Note that I verified the auth fastcgi app on its own, and it printed the cookie. 
I verified the fastcgi back-end server on its own, and it returns a complete POST response. Below is the code and curl requests/responses. Thanks much! http { server { listen 80; server_name localhost; location / { auth_request /auth; fastcgi_param HTTP_COOKIE $http_cookie; include fastcgi_params; auth_request_set $saved_set_cookie $upstream_http_set_cookie; add_header Set-Cookie $saved_set_cookie; fastcgi_pass 127.0.0.1:9000; } location = /auth { include fastcgi_params; fastcgi_param HTTP_COOKIE $http_cookie; fastcgi_pass 127.0.0.1:9010; } } } The FCGI auth server's code sets the cookie as follows: int main(int argc, char **argv) { int count = 0; while(FCGI_Accept() >= 0) { ... printf("Content-type: text/html\n\n" "Set-Cookie: name=AuthCookie\r\n" "FastCGI 9010: Hello!\n" "
FastCGI 9010: Hello!
\n" "Request number %d running on host %s\n", ++count, getenv("SERVER_NAME")); /* code to print the env variables */ .... FCGI_Finish(); } return 0; } ------------------------------------------------------------------------- The FCGI back-end server's code is as follows: #include "fcgi_stdio.h" #include extern char **environ; int main(int argc, char **argv) { int count = 0; while(FCGI_Accept() >= 0){ char *contentLength = getenv("CONTENT_LENGTH"); int packetRead = 0; int done = 0; int len; int idx; if (contentLength != NULL) { len = strtol(contentLength, NULL, 10); } else { len = 0; } /* Create a file to put output */ FCGI_FILE * fileOut = FCGI_fopen("/tmp/fcgi.out", "w"); if (fileOut) { while(done < len) { char buffer[1024]; int i; packetRead = FCGI_fread(buffer, 1, sizeof(buffer), stdin); if (packetRead < 0) { break; } if (packetRead > 0) { FCGI_fwrite(buffer, 1, packetRead, fileOut); done += packetRead; } } FCGI_fclose(fileOut); } printf("Content-type: text/html\n\n" "FastCGI 9000: Hello!\n" "
FastCGI 9000: Hello!
\n" "Request number=%d lenrx=%d pktRead=%d uri=%s reqMethod=%s cookie=%s host= %s\n", ++count, len, packetRead, getenv("REQUEST_URI"), getenv("REQUEST_METHOD"), getenv("HTTP_COOKIE"), getenv("SERVER_NAME")); /* Print the received environment variables */ if ( !environ || environ[0] == NULL ){ printf ("Null environment \n"); return 0; } for (idx = 0; environ[idx] != NULL; idx++) { printf("
%s
\n", environ[idx]); } printf ("\n"); FCGI_Finish(); } return 0; } --------------------------------------------------------------- GET request/response, but no cookie in the response from the FastCGI back-end server: curl -v 'http://localhost:80/' * About to connect() to localhost port 80 (#0) * Trying ::1... Connection refused * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 80 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.7.9 < Date: Sun, 18 Jan 2015 17:20:23 GMT < Content-Type: text/html < Transfer-Encoding: chunked < Connection: keep-alive < FastCGI 9000: Hello!
FastCGI 9000: Hello!
Request number=2 lenrx=0 pktRead=0 uri=/ reqMethod=GET cookie= host= localhost
FCGI_ROLE=RESPONDER
HTTP_COOKIE=
QUERY_STRING=
REQUEST_METHOD=GET
CONTENT_TYPE=
CONTENT_LENGTH=
SCRIPT_NAME=/
REQUEST_URI=/
DOCUMENT_URI=/
DOCUMENT_ROOT=/usr/local/nginx-1.7.9/html
SERVER_PROTOCOL=HTTP/1.1
GATEWAY_INTERFACE=CGI/1.1
SERVER_SOFTWARE=nginx/1.7.9
REMOTE_ADDR=127.0.0.1
REMOTE_PORT=41122
SERVER_ADDR=127.0.0.1
SERVER_PORT=80
SERVER_NAME=localhost
REDIRECT_STATUS=200
HTTP_USER_AGENT=curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
HTTP_HOST=localhost
HTTP_ACCEPT=*/*
* Connection #0 to host localhost left intact * Closing connection #0 -------------------------------------------- POST request failure: curl -d "name=Rafael%20Sagula&phone=3320780" -v 'http://localhost:80/' * About to connect() to localhost port 80 (#0) * Trying ::1... Connection refused * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 80 (#0) > POST / HTTP/1.1 > User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost > Accept: */* > Content-Length: 34 > Content-Type: application/x-www-form-urlencoded > < HTTP/1.1 500 Internal Server Error < Server: nginx/1.7.9 < Date: Sun, 18 Jan 2015 16:56:35 GMT < Content-Type: text/html < Content-Length: 192 < Connection: close < 500 Internal Server Error
500 Internal Server Error
nginx/1.7.9
* Closing connection #0 --------------------------------------------- A curl response that seems to be waiting for input: curl -d "name=Rafael%20Sagula&phone=3320780" -v 'http://localhost:80/' * About to connect() to localhost port 80 (#0) * Trying ::1... Connection refused * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 80 (#0) > POST / HTTP/1.1 > User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost > Accept: */* > Content-Length: 34 > Content-Type: application/x-www-form-urlencoded > ^C Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256110,256187#msg-256187 From nginx-forum at nginx.us Mon Jan 19 09:22:31 2015 From: nginx-forum at nginx.us (ntamblyn) Date: Mon, 19 Jan 2015 04:22:31 -0500 Subject: Perl Fastcgi on Solaris 11 Message-ID: <2a4a4daaa361f0bd85829ca79f47d266.NginxMailingListEnglish@forum.nginx.org> Has anyone been able to get Perl FastCGI working on the Solaris 11 OS? If so, can you point me in the right direction? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256189,256189#msg-256189 From nginx-forum at nginx.us Mon Jan 19 09:56:35 2015 From: nginx-forum at nginx.us (srinumar) Date: Mon, 19 Jan 2015 04:56:35 -0500 Subject: Running Remote using Perl CGI Message-ID: <19c0eaecf456920c297b7fbc9fe2f3d1.NginxMailingListEnglish@forum.nginx.org> Hello all, I have a Perl CGI script which runs a command on a remote server and displays the output on a webpage. The script is failing with the error below: cannot connect to filer 192.168.xxx.xxx at /var/www/cgi-bin/export.cgi line 14. The same script works fine when I run it from the command prompt. Apache runs as the "daemon" user, and as we have a standard Apache configuration, it's not possible for me to change it (Apache user, paths).
I need to run a similar command on 1000+ storage systems. SSH keys are already enabled for the root user, so I would like to use the "root" user to run the remote commands. I have given the complete script below: use CGI; use CGI::Carp qw(warningsToBrowser fatalsToBrowser); use Net::OpenSSH; my $hostname="192.168.xxx.xxx"; my %opts = ( user => "root", key_path => "/root/.ssh/id_rsa", strict_mode => 0 ); my $obj=new CGI; my $ssh=Net::OpenSSH->new($hostname,%opts); $ssh->error and die "cannot connect to filer $hostname "; my $test=$ssh->capture("version"); print $obj->header(), $obj->start_html(-title=>'Export Script'), $obj->center($obj->h2('Export list')), $obj->center($obj->h2('Export Script')), $obj->i("$test"), $obj->end_html(); It's completely blocking my work. Please help me ASAP. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256190,256190#msg-256190 From nginx-forum at nginx.us Mon Jan 19 09:58:16 2015 From: nginx-forum at nginx.us (srinumar) Date: Mon, 19 Jan 2015 04:58:16 -0500 Subject: Running Remote command using Perl CGI In-Reply-To: <19c0eaecf456920c297b7fbc9fe2f3d1.NginxMailingListEnglish@forum.nginx.org> References: <19c0eaecf456920c297b7fbc9fe2f3d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <794e6e255e4273ca7761e9f278659116.NginxMailingListEnglish@forum.nginx.org> Hello all, I have a Perl CGI script which runs a command on a remote server and displays the output on a webpage. The script is failing with the error below: cannot connect to filer 192.168.xxx.xxx at /var/www/cgi-bin/export.cgi line 14. The same script works fine when I run it from the command prompt. Apache runs as the "daemon" user, and as we have a standard Apache configuration, it's not possible for me to change it (Apache user, paths). I need to run a similar command on 1000+ storage systems. SSH keys are already enabled for the root user, so I would like to use the "root" user to run the remote commands.
I have given the complete script below: use CGI; use CGI::Carp qw(warningsToBrowser fatalsToBrowser); use Net::OpenSSH; my $hostname="192.168.xxx.xxx"; my %opts = ( user => "root", key_path => "/root/.ssh/id_rsa", strict_mode => 0 ); my $obj=new CGI; my $ssh=Net::OpenSSH->new($hostname,%opts); $ssh->error and die "cannot connect to filer $hostname "; my $test=$ssh->capture("version"); print $obj->header(), $obj->start_html(-title=>'Export Script'), $obj->center($obj->h2('Export list')), $obj->center($obj->h2('Export Script')), $obj->i("$test"), $obj->end_html(); It's completely blocking my work. Please help me ASAP. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256190,256191#msg-256191 From jan.algermissen at nordsc.com Mon Jan 19 19:44:33 2015 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Mon, 19 Jan 2015 20:44:33 +0100 Subject: How to adjust Cache-Control for SSI-including entities Message-ID: Hi, [apologies if this has been asked lots of times before, but searches really did not turn up anything] I am using nginx with SSI enabled to assemble pages and page fragments from upstream servers. Upstream U1 produces the main page (the one containing SSI include directives) and wants nginx to cache its response (the page with the unresolved SSI include directives). Thus U1 sends the main page with Cache-Control: max-age=60. The includes come from upstream U2 and are also cacheable from the POV of U2, hence U2 adds a max-age for those. The max-age could be less than 60 or more than 60. What I want nginx to do is cache the upstream response for the main page and also cache the upstream responses for the includes - which nginx does, as the debug log suggests. In addition I of course want nginx to strip/adjust the Cache-Control / max-age when the assembled page is sent to the client.
Unfortunately nginx seems to simply copy the max-age of the main page's upstream response and send it to the client - which is obviously misleading information, since the cacheability of the assembled response is either unknown or the lowest max-age of all includes. Can anyone help me with how to get nginx to at least remove the misleading Cache-Control header? ssi_last_modified is related and yields the correct default behavior: http://nginx.org/en/docs/http/ngx_http_ssi_module.html#ssi_last_modified Jan From francis at daoine.org Mon Jan 19 20:38:32 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 19 Jan 2015 20:38:32 +0000 Subject: How to adjust Cache-Control for SSI-including entities In-Reply-To: References: Message-ID: <20150119203832.GF15670@daoine.org> On Mon, Jan 19, 2015 at 08:44:33PM +0100, Jan Algermissen wrote: Hi there, > upstream U1 produces the main page (the one containing SSI include directives) and wants nginx to cache its response (the page with the unresolved SSI include directives). Thus U1 sends the main page with Cache-Control: max-age=60. > In addition I of course want nginx to strip/adjust the Cache-Control / max-age when the assembled page is sent to the client. Completely untested, but, depending on how you connect to upstream, does "proxy_hide_header" or "proxy_ignore_headers" do what you want? f -- Francis Daly francis at daoine.org From jan.algermissen at nordsc.com Mon Jan 19 21:20:55 2015 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Mon, 19 Jan 2015 22:20:55 +0100 Subject: How to adjust Cache-Control for SSI-including entities In-Reply-To: <20150119203832.GF15670@daoine.org> References: <20150119203832.GF15670@daoine.org> Message-ID: Francis, thanks a lot; > proxy_hide_header does the trick. However, it still feels a bit brute-force. Hence, if there are other solutions, I would be curious to know. Meanwhile, "hide" gets me going.
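[Editorial sketch of the setup this thread converges on: cache the SSI master page from upstream U1, hide its fragment-level caching headers from clients, and optionally re-add a conservative policy for the assembled page. The upstream name, cache-zone name, and the no-cache choice are illustrative, not from the original posts.]

```nginx
# Assumes a cache zone defined elsewhere, e.g.:
# proxy_cache_path /var/cache/nginx keys_zone=ssi_cache:10m;

location / {
    ssi on;                         # resolve SSI includes in the response
    proxy_pass http://u1_backend;   # upstream U1 serving the master page

    proxy_cache ssi_cache;          # nginx itself still honors U1's max-age

    # Stop U1's Cache-Control/Expires from reaching the client; they
    # describe only the un-assembled page, not the assembled result.
    proxy_hide_header Cache-Control;
    proxy_hide_header Expires;

    # Optionally re-add a conservative policy for the assembled page.
    add_header Cache-Control "no-cache";
}
```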
Jan On 19 Jan 2015, at 21:38, Francis Daly wrote: > On Mon, Jan 19, 2015 at 08:44:33PM +0100, Jan Algermissen wrote: > > Hi there, > >> upstream U1 produces the main page (the one containing SSI include directives) and wants nginx to cache its response (the page with the unresolved SSI include directives). Thus U1 sends the main page with Cache-Control: max-age=60. > >> In addition I of course want nginx to strip/adjust the Cache-Control / max-age when the assembled page is sent to the client. > > Completely untested, but, depending on how you connect to upstream, does > "proxy_hide_header" or "proxy_ignore_headers" do what you want? > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jan 19 21:39:23 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 19 Jan 2015 16:39:23 -0500 Subject: How to adjust Cache-Control for SSI-including entities In-Reply-To: References: Message-ID: <3d0a3fef713ec94c675a3c9546a99296.NginxMailingListEnglish@forum.nginx.org> Jan Algermissen Wrote: ------------------------------------------------------- > Francis, > > thanks a lot; > > > proxy_hide_header > > does the trick. However, it still feels a bit brute-force. > > Hence, if there are other solutions, I would be curious to know. With Lua you could test to see which header has the least expiry time and pass only that one. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256201,256204#msg-256204 From kayasaman at gmail.com Mon Jan 19 22:38:22 2015 From: kayasaman at gmail.com (Kaya Saman) Date: Mon, 19 Jan 2015 22:38:22 +0000 Subject: phpBB3.1 not working with oauth under nginx Message-ID: <54BD875E.1090907@gmail.com> Hi, I wonder if anyone is running phpBB3.1 with php56? Currently I am trying to setup oauth to work with Google and Facebook, but getting a white screen response after authentication. 
The logs show this: 2015/01/19 22:32:08 [error] 28354#0: *3 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught exception 'OAuth\Common\Http\Exception\TokenResponseException' with message 'Failed to request resource.' in /usr/local/www/dxb_users_forum/vendor/lusitanian/oauth/src/OAuth/Common/Http/Client/StreamClient.php:54 Stack trace: #0 /usr/local/www/dxb_users_forum/vendor/lusitanian/oauth/src/OAuth/OAuth2/Service/AbstractService.php(97): OAuth\Common\Http\Client\StreamClient->retrieveResponse(Object(OAuth\Common\Http\Uri\Uri), Array, Array) #1 /usr/local/www/dxb_users_forum/phpbb/auth/provider/oauth/service/facebook.php(69): OAuth\OAuth2\Service\AbstractService->requestAccessToken('AQDzs7GN9ZIsOLX...') #2 /usr/local/www/dxb_users_forum/phpbb/auth/provider/oauth/oauth.php(198): phpbb\auth\provider\oauth\service\facebook->perform_auth_login() #3 /usr/local/www/dxb_users_forum/phpbb/auth/auth.php(937): phpbb\auth\provider\oauth\oauth->login('', '') #4 /usr/local/www/dxb_users_forum/includes/functions.php(2831): phpbb\auth\auth->login('', '', false, 1, 0) #5 /usr/local/www/dx" while reading response header from upstream, client: , server: , request: "GET /ucp.php?mode=login&login=external&oauth_service=facebook&code=AQDzs7GN9ZIsOLXkg5X8t_UwrQf8aI2tysLgBesvkM_53e4PalEtToIEWwhPGYAGCJutxDSAsrc2GqFACPcPqY0BmJkRFzJiZPISxSj6Et2EsaTZ0BOTGv4nmqNTI_ZHNzG6HqV6cp_uhiRgKA-qSmF0g-XnlBz2WsYJ1PZB6V5E95AZkt9TIrrNETlZkzD4FHRUAHyDUlxJUD_cYOhT8A4QIk5pgxLwwNSUS2YKVsTdq76EXKIOVt4sgVw9vAaiM-gtqfKfro27JBRYFhlqIRH3vDgtzZSIT9E-zwMzzwck8RlUdbiYTm3np1hQQU2QZsG9-tZBN6WuhZopv77yFpgT HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "", referrer: "http:///ucp.php?mode=login&sid=bb0c84259129a944fa4e78cab45e31c2" I am not sure where the issue is related to; is php, nginx, or simply phpBB3?? The sample nginx.conf was taken from here: https://raw.githubusercontent.com/phpbb/phpbb3/master/phpBB/docs/nginx.sample.conf and modified to my needs. 
If anyone has any experience in using phpBB3 under nginx would you be able to help or suggest something? Many thanks! Regards, Kaya From petros.fraser at gmail.com Mon Jan 19 22:50:57 2015 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 19 Jan 2015 17:50:57 -0500 Subject: Ipad autodiscovery In-Reply-To: <8aff8fec25dcd067907de7e095e021d5.NginxMailingListEnglish@forum.nginx.org> References: <8aff8fec25dcd067907de7e095e021d5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Well I have been testing this further. What I have found is that the issue is really not autodiscovery. When the device initially tries to connect it uses autodiscovery but when that fails, you are presented with several boxes to type in the server name, domain name etc. At this point, when I put in all this information in the iphone, and click next, I get the error : Unable to verify Account. What bugs me is the Android works fine. I have added now about 15 android phones and they all work just fine with nginx as a reverse proxy to exchange. I can't imagine what the iphone is doing differently. I have been scouring through logs to try and figure it out but nothing yet. On Sat, Jan 17, 2015 at 3:56 PM, itpp2012 wrote: > I might have read it wrong but it seems that you could create fake files > the > device is looking for or just return a 200 for such requests. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256151,256165#msg-256165 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hewanxiang at gmail.com Tue Jan 20 02:44:14 2015 From: hewanxiang at gmail.com (Andy) Date: Tue, 20 Jan 2015 10:44:14 +0800 Subject: how to limit the total header size of a request? 
Message-ID: Hi guys, I'm looking for a configuration setting to limit the combined size of the request line and all header fields in a request. It looks like "client_header_buffer_size size" limits only a single header field. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jan 20 10:44:16 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 20 Jan 2015 05:44:16 -0500 Subject: Ipad autodiscovery In-Reply-To: References: Message-ID: Set up an access point with raw logging and let an iPhone go through that; see what it's doing. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256151,256210#msg-256210 From jan.algermissen at nordsc.com Tue Jan 20 10:46:36 2015 From: jan.algermissen at nordsc.com (Jan Algermissen) Date: Tue, 20 Jan 2015 11:46:36 +0100 Subject: How to adjust Cache-Control for SSI-including entities In-Reply-To: <20150119203832.GF15670@daoine.org> References: <20150119203832.GF15670@daoine.org> Message-ID: <5D98D0B4-0FA6-415B-BC1C-2BFF0BB4EF12@nordsc.com> On 19 Jan 2015, at 21:38, Francis Daly wrote: > On Mon, Jan 19, 2015 at 08:44:33PM +0100, Jan Algermissen wrote: > > Hi there, > >> upstream U1 produces the main page (the one containing SSI include directives) and wants nginx to cache its response (the page with the unresolved SSI include directives). Thus U1 sends the main page with Cache-Control: max-age=60. > >> In addition I of course want nginx to strip/adjust the Cache-Control / max-age when the assembled page is sent to the client. > > Completely untested, but, depending on how you connect to upstream, does > "proxy_hide_header" or "proxy_ignore_headers" do what you want? Do you have any idea how I can then best re-add the header? The case came up that clients need an explicit Cache-Control: no-cache (to ensure a reload of the page when hitting the back button).
Jan > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Jan 20 12:27:59 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Jan 2015 12:27:59 +0000 Subject: How to adjust Cache-Control for SSI-including entities In-Reply-To: <5D98D0B4-0FA6-415B-BC1C-2BFF0BB4EF12@nordsc.com> References: <20150119203832.GF15670@daoine.org> <5D98D0B4-0FA6-415B-BC1C-2BFF0BB4EF12@nordsc.com> Message-ID: <20150120122759.GG15670@daoine.org> On Tue, Jan 20, 2015 at 11:46:36AM +0100, Jan Algermissen wrote: > On 19 Jan 2015, at 21:38, Francis Daly wrote: > > Completely untested, but, depending on how you connect to upstream, does > > "proxy_hide_header" or "proxy_ignore_headers" do what you want? > > Do you have any idea, how I can then best re-add the header? The case came up that clients need an explicit Cache-Control: no-cache (to ensure reload of the page when hitting the back button ?) > In general, http://nginx.org/r/add_header But for this specific header, perhaps "expires" on the same page is more useful. f -- Francis Daly francis at daoine.org From vbart at nginx.com Tue Jan 20 13:20:43 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 20 Jan 2015 16:20:43 +0300 Subject: how to limit the total header size of a request? In-Reply-To: References: Message-ID: <3724876.PWUuksBzfd@vbart-workstation> On Tuesday 20 January 2015 10:44:14 Andy wrote: > Hi guys, > > I'm looking for a configuration to limit the summarized size for the > request line and all header fields in a request? http://nginx.org/r/large_client_header_buffers > It looks " > *client_header_buffer_size* *size"* is to limit the single header field. Actually, no. Please, see the documentation: http://nginx.org/r/client_header_buffer_size wbr, Valentin V. 
Bartenev From nginx-forum at nginx.us Tue Jan 20 13:54:39 2015 From: nginx-forum at nginx.us (locojohn) Date: Tue, 20 Jan 2015 08:54:39 -0500 Subject: Custom settings with PHP In-Reply-To: <20110712163613.GF42265@mdounin.ru> References: <20110712163613.GF42265@mdounin.ru> Message-ID: Hello Maxim, Maxim Dounin Wrote: ------------------------------------------------------- > As already replied in russian list, currently (going to be fixed) > this may be done only with a hack like > > geo $x { > default "${include_path}:/my/other/include/path"; > } > > fastcgi_param PHP_VALUE $x; > > which relies on the fact that geo module doesn't support > variables. Has this been fixed in 1.7.x? E.g., can I now use PHP variables in fastcgi_param arguments, maybe with an escape character? fastcgi_param PHP_ADMIN_VALUE "open_basedir=\${open_basedir}"; Andrejs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,22556,256215#msg-256215 From shahzaib.cb at gmail.com Tue Jan 20 18:38:21 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 20 Jan 2015 23:38:21 +0500 Subject: Geoip issue with nginx in front of varnish and apache ! Message-ID: Hi, We've compiled varnish with the geoip module in order to cache country-based hashes. So far the varnish <-> apache structure works fine with the geoip module, caching requests based on countries, but when we add another nginx proxy layer in front of varnish, i.e. nginx -> varnish -> apache, the geoip module stops tracking country hashes and varnish shows the following logs: TxHeader b X-GeoIP: Unknown nginx : port 80 Varnish : port 6081 Apache : port 7172 So far, nginx is forwarding client IPs to varnish, but it looks like the varnish session-start value in varnishlog still shows the IP 127.0.0.1, due to which it is unable to track the client's country. I hope someone can point me in the right direction.
varnishlog : 15 BackendOpen b default 127.0.0.1 45806 127.0.0.1 7172 15 BackendXID b 1609403517 15 TxRequest b GET 15 TxURL b /video/5708047/jeena-jeena-video-song-badlapur-atif-aslam 15 TxProtocol b HTTP/1.1 15 TxHeader b Referer: http://beta2.domain.com/videos/ 15 TxHeader b X-Real-IP: 39.49.89.134 15 TxHeader b X-Forwarded-Host: beta2.domain.com 15 TxHeader b X-Forwarded-Server: beta2.domain.com 15 TxHeader b Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 15 TxHeader b User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 15 TxHeader b Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 15 TxHeader b X-Forwarded-For: 39.49.89.134, 127.0.0.1 15 TxHeader b host: default 15 TxHeader b X-GeoIP: Unknown 15 TxHeader b X-Varnish: 1609403517 15 TxHeader b Accept-Encoding: gzip 15 RxProtocol b HTTP/1.1 15 RxStatus b 200 15 RxResponse b OK 15 RxHeader b Date: Tue, 20 Jan 2015 18:26:06 GMT 15 RxHeader b Server: Apache 15 RxHeader b Set-Cookie: PHPSESSID=pcl9rkh58s39fgjti139bgn6n1; expires=Wed, 21-Jan-2015 18:26:06 GMT; path=/ 15 RxHeader b Expires: Thu, 19 Nov 1981 08:52:00 GMT 15 RxHeader b Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 15 RxHeader b Pragma: no-cache 15 RxHeader b Set-Cookie: fb_239452059417627_state=42cba63d4821f3964426e14b2833e8d0; expires=Tue, 20-Jan-2015 19:26:06 GMT; path=/ 15 RxHeader b Set-Cookie: pageredir=http%3A%2F%2Fbeta2.domain.com%2Fvideo%2F5708047%2Fjeena-jeena-video-song-badlapur-atif-aslam; expires=Tue, 20-Jan-2015 20:26:06 GMT; path=/ 15 RxHeader b Connection: close 15 RxHeader b Transfer-Encoding: chunked 15 RxHeader b Content-Type: text/html; charset=utf-8 15 Fetch_Body b 3(chunked) cls 0 mklen 1 15 Length b 127024 15 BackendClose b default 12 SessionOpen c 127.0.0.1 51675 :6081 12 ReqStart c 127.0.0.1 51675 1609403517 12 RxRequest c GET 12 RxURL c /video/5708047/jeena-jeena-video-song-badlapur-atif-aslam 12 
RxProtocol c HTTP/1.0 12 RxHeader c Referer: http://beta2.domain.com/videos/ 12 RxHeader c Host: beta2.domain.com 12 RxHeader c Cookie: __qca=P0-993092579-1421436407272; __qca=P0-1309575897-1421485050924; __utma=198843324.254214983.1421436407.1421439435.1421777481.2; __utmb=198843324.5.10.1421777481; __utmc=198843324; __utmz=198843324.1421439435.1.1.utmcsr=(direct)|utmccn=(direct) 12 RxHeader c X-Real-IP: 39.49.89.134 12 RxHeader c X-Forwarded-Host: beta2.domain.com 12 RxHeader c X-Forwarded-Server: beta2.domain.com 12 RxHeader c X-Forwarded-For: 39.49.89.134 12 RxHeader c Connection: close 12 RxHeader c Cache-Control: max-age=0 12 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 12 RxHeader c User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36 12 RxHeader c Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 12 VCL_call c recv lookup 12 VCL_call c hash 12 Hash c /video/5708047/jeena-jeena-video-song-badlapur-atif-aslam 12 Hash c default 12 Hash c Unknown 12 VCL_return c hash 12 VCL_call c miss fetch 12 Backend c 15 default default 12 TTL c 1609403517 RFC 0 -1 -1 1421778367 0 1421778366 375007920 0 12 VCL_call c fetch 12 TTL c 1609403517 VCL 3600 -1 -1 1421778367 -0 12 VCL_return c deliver 12 ObjProtocol c HTTP/1.1 12 ObjResponse c OK 12 ObjHeader c Date: Tue, 20 Jan 2015 18:26:06 GMT 12 ObjHeader c Server: Apache 12 ObjHeader c Set-Cookie: PHPSESSID=pcl9rkh58s39fgjti139bgn6n1; expires=Wed, 21-Jan-2015 18:26:06 GMT; path=/ 12 ObjHeader c Expires: Thu, 19 Nov 1981 08:52:00 GMT 12 ObjHeader c Pragma: no-cache 12 ObjHeader c Set-Cookie: fb_239452059417627_state=42cba63d4821f3964426e14b2833e8d0; expires=Tue, 20-Jan-2015 19:26:06 GMT; path=/ 12 ObjHeader c Set-Cookie: pageredir=http%3A%2F%2Fbeta2.domain.com%2Fvideo%2F5708047%2Fjeena-jeena-video-song-badlapur-atif-aslam; expires=Tue, 20-Jan-2015 20:26:06 GMT; path=/ 12 ObjHeader c Content-Type: text/html; 
charset=utf-8 12 VCL_call c deliver deliver 12 TxProtocol c HTTP/1.1 12 TxStatus c 200 12 TxResponse c OK 12 TxHeader c Set-Cookie: PHPSESSID=pcl9rkh58s39fgjti139bgn6n1; expires=Wed, 21-Jan-2015 18:26:06 GMT; path=/ 12 TxHeader c Expires: Thu, 19 Nov 1981 08:52:00 GMT 12 TxHeader c Pragma: no-cache 12 TxHeader c Set-Cookie: fb_239452059417627_state=42cba63d4821f3964426e14b2833e8d0; expires=Tue, 20-Jan-2015 19:26:06 GMT; path=/ 12 TxHeader c Set-Cookie: pageredir=http%3A%2F%2Fbeta2.domain.com%2Fvideo%2F5708047%2Fjeena-jeena-video-song-badlapur-atif-aslam; expires=Tue, 20-Jan-2015 20:26:06 GMT; path=/ 12 TxHeader c Content-Type: text/html; charset=utf-8 12 TxHeader c Content-Length: 127024 12 TxHeader c Accept-Ranges: bytes 12 TxHeader c Date: Tue, 20 Jan 2015 18:26:06 GMT 12 TxHeader c Age: 0 12 TxHeader c Connection: close 12 Length c 127024 12 ReqEnd c 1609403517 1421778366.722367764 1421778366.841626406 0.000178814 0.119145393 0.000113249 12 SessionClose c Connection: close 12 StatSess c 127.0.0.1 51675 0 1 1 0 0 1 602 127024 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1421778367 1.0 15 BackendOpen b default 127.0.0.1 45814 127.0.0.1 7172 Nginx proxy.inc : proxy_redirect off; proxy_hide_header Vary; proxy_set_header Accept-Encoding ''; proxy_ignore_headers Cache-Control Expires; proxy_set_header Referer $http_referer; proxy_set_header Host $host; proxy_set_header Cookie $http_cookie; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; As you can see from proxy.inc file that nginx is forwarding client's real ip to varnish but still varnish is unable to track client's GeoIP. Maybe i am missing some nginx settings because varnish:80 <-> apache:7172 structure working fine but nginx -> varnish is not. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue Jan 20 19:06:42 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 20 Jan 2015 19:06:42 +0000 Subject: Geoip issue with nginx in front of varnish and apache ! In-Reply-To: References: Message-ID: <20150120190642.GH15670@daoine.org> On Tue, Jan 20, 2015 at 11:38:21PM +0500, shahzaib shahzaib wrote: Hi there, > We've compile varnish with geoip module in order to cache country based > hashes, so far varnish<-> apache structure is working fine with geoip > module and caching requests based on countries but when we add another > Nginx proxy layer in front of varnish i.e nginx -> varnish - apache, the > geoip module stop tracking Country hashes and varnish shows following logs : It sounds like you need to do whatever it takes to convince varnish's geoip module to use the IP address in the X-Real-IP header, and not the actual client address. Check the varnish geoip module documentation. f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Tue Jan 20 20:05:49 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Wed, 21 Jan 2015 01:05:49 +0500 Subject: Geoip issue with nginx in front of varnish and apache ! In-Reply-To: <20150120190642.GH15670@daoine.org> References: <20150120190642.GH15670@daoine.org> Message-ID: Thanks for reply Francis, adding following did the trick :) set req.http.X-Forwarded-For = req.http.X-Forwarded-For; set req.http.X-GeoIP = geoip.country_code(req.http.X-Forwarded-For); Regards. 
Shahzaib On Wed, Jan 21, 2015 at 12:06 AM, Francis Daly wrote: > On Tue, Jan 20, 2015 at 11:38:21PM +0500, shahzaib shahzaib wrote: > > Hi there, > > > We've compile varnish with geoip module in order to cache country > based > > hashes, so far varnish<-> apache structure is working fine with geoip > > module and caching requests based on countries but when we add another > > Nginx proxy layer in front of varnish i.e nginx -> varnish - apache, the > > geoip module stop tracking Country hashes and varnish shows following > logs : > > It sounds like you need to do whatever it takes to convince varnish's > geoip module to use the IP address in the X-Real-IP header, and not the > actual client address. > > Check the varnish geoip module documentation. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Wed Jan 21 00:38:03 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Wed, 21 Jan 2015 07:38:03 +0700 Subject: Need best practice on GeoIP/GeoDNS Message-ID: <54BEF4EB.5030204@xtremenitro.org> Hi, I have a project that will use webservers in multiple locations, but I am still confused about implementing GeoDNS versus GeoIP. Which method is more powerful? I want to direct users from Country A to webserver A and users from Country B to webserver B. Each webserver is located in its own country. From qiangzhang at qiyi.com Wed Jan 21 09:49:37 2015 From: qiangzhang at qiyi.com (=?gb2312?B?1cXHv6OocWlhbmd6aGFuZ6Op?=) Date: Wed, 21 Jan 2015 09:49:37 +0000 Subject: How to disable creating tmpfile when using nginx as a cache Message-ID: Hi community, I am using nginx as an L1 cache for small static files (10~100k) and use xfs as the underlying file system.
A 450G SSD is used, and when the number of stored files reaches 25,000,000 and disk usage climbs to 85%, the system load is very high (>15, on 32 CPUs). After basic debugging with perf, I located the hot code in the creation of temp files while receiving upstream data, shown below: [cid:image002.jpg at 01D035A2.9EF5B020] Can anyone guide me on how to disable creating a tmpfile for small (<200k) files and write directly to the target cache dir when the response is finished? I have tried some configuration items like proxy_max_temp_file_size; in the official documentation for proxy_max_temp_file_size I found: "The zero value disables buffering of responses to temporary files." But why is a tmpfile still created? It seems that metadata operations (inode alloc/free) are the bottleneck when using nginx to cache small files. Could anyone suggest some way to optimize it? For example: - switch to another filesystem like ext4? - adjust some keys_zone policies? - others? Thanks Qiang -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 77740 bytes Desc: image002.jpg URL: From nginx-forum at nginx.us Wed Jan 21 10:47:05 2015 From: nginx-forum at nginx.us (abhinanda) Date: Wed, 21 Jan 2015 05:47:05 -0500 Subject: Modify request body before sending to upstream Message-ID: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> Hi, I am new to nginx module development and I'm working on my first ever module. I've read Evan Miller's post, among others, and I've experimented with tweaking some simple modules. From what I understand, the proxy_pass module is a handler and we can effectively have just one handler run on a request. What I need is to do some work with the content before I send a request to the upstream servers. I have been able to achieve the reverse via filter modules, but not this. Is there a way to achieve this without touching proxy_pass?
The requirement comes from a server rewrite we are doing to improve performance. We have nginx load balancing requests to a bunch of servers running Python. We decided to rewrite some of the Python pre-processing in C/C++ and write an nginx module to wrap around it. Please point me in the right direction :). Abhishek Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256226#msg-256226 From kaushik06101992 at gmail.com Wed Jan 21 12:20:19 2015 From: kaushik06101992 at gmail.com (Swarna Koushik Debroy) Date: Wed, 21 Jan 2015 17:50:19 +0530 Subject: issues with nginx-gridfs 3rd party module Message-ID: Hi, I've compiled and installed nginx with the gridfs module as given by the instructions and it got installed successfully. But when I configure nginx.conf with the gridfs directive and restart the nginx server, it fails with the error 'nginx: [emerg] unknown directive "gridfs" in /etc/nginx/nginx.conf:76'. Can anyone help me fix this issue? environment: ubuntu 14.04, nginx version 1.4.6 Thanks, Swarna -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 21 14:03:39 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 21 Jan 2015 09:03:39 -0500 Subject: issues with nginx-gridfs 3rd party module In-Reply-To: References: Message-ID: Switch to openresty with mongodb, https://www.google.nl/#q=openresty+lua+MongoDB Much easier. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256227,256228#msg-256228 From nginx-forum at nginx.us Wed Jan 21 19:47:40 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Wed, 21 Jan 2015 14:47:40 -0500 Subject: How to return a cookie to a client when auth_request is used?
In-Reply-To: References: <20150115131611.GO79857@mdounin.ru> Message-ID: <563e663208aeeec78ad7ce7ae758e4ac.NginxMailingListEnglish@forum.nginx.org> In case it helps someone else, the problem turned out to be in the FastCGI auth server's printf: the last "statement" of the HTTP header should end with \n\n instead of \r\n. The following was wrong: printf("Content-type: text/html\n\n" "Set-Cookie: name=AuthCookie\r\n" "FastCGI 9010: Hello!\n" ...); This did the trick: printf("Content-type: text/html\r\n" "Set-Cookie: name=AuthCookie\n\n" "FastCGI 9010: Hello!\n" ...); Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256110,256233#msg-256233 From nginx-forum at nginx.us Thu Jan 22 00:30:48 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Wed, 21 Jan 2015 19:30:48 -0500 Subject: How to pass fastcgi custom variables in C? Message-ID: <92356cc1a74acebb54b0b7def82b9108.NginxMailingListEnglish@forum.nginx.org> Hi, I would like the auth_request fastcgi auth server to send some custom variables to the fastcgi back-end server. For example, the Radius server returned some parameters which the fastcgi auth server needs to send to the fastcgi back-end server. location / { auth_request /auth; fastcgi_pass ; <--- would like this server to see the custom param variable } location /auth { fastcgi_param CUSTOM_PARAM custom_param; fastcgi_pass ; <---- returns a custom param value to be used by the back-end server } Could someone give me a pointer on how to do this in nginx.conf and in the auth and back-end servers in C? I saw many examples for PHP but none for C. In the auth server app, I defined "int custom_param=100" for example, and would like the back-end server to see this variable and value. Thanks!
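One pattern that may fit here is to have the auth server emit the value as a response header (in C, something like printf("X-Custom-Param: %d\r\n", custom_param); before the blank line), capture it in nginx with auth_request_set, and forward it as a FastCGI parameter. A sketch only — the header name X-Custom-Param, the variable name, and the 127.0.0.1 addresses are illustrative, not from this thread:

```nginx
# Sketch: forward a value returned by the auth subrequest to the back-end.
location / {
    auth_request /auth;
    # $upstream_http_* exposes response headers of the auth subrequest.
    auth_request_set $custom_param $upstream_http_x_custom_param;
    fastcgi_param CUSTOM_PARAM $custom_param;
    fastcgi_pass 127.0.0.1:9000;
}

location = /auth {
    internal;
    fastcgi_pass 127.0.0.1:9010;
}
```

The back-end would then read CUSTOM_PARAM from its FastCGI environment like any other parameter.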
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256237,256237#msg-256237 From nginx-forum at nginx.us Thu Jan 22 06:43:10 2015 From: nginx-forum at nginx.us (jamesgan) Date: Thu, 22 Jan 2015 01:43:10 -0500 Subject: Proxy without buffering In-Reply-To: References: Message-ID: Hi all, Is there any progress in this area so far? It would be great if this became a standard feature of nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,256238#msg-256238 From nginx-forum at nginx.us Thu Jan 22 06:59:43 2015 From: nginx-forum at nginx.us (badtzhou) Date: Thu, 22 Jan 2015 01:59:43 -0500 Subject: Modify subrequest header Message-ID: <847c381a8e26ef8c6aab381c3b0b742f.NginxMailingListEnglish@forum.nginx.org> I am trying to use ngx_http_subrequest in my custom nginx module. I can see from the code that the subrequest shares the same request headers with the main request (sr->headers_in = r->headers_in). Is there a way to modify, add or delete request headers for a subrequest without affecting the request headers of the main request? I tried ngx_list_init(&sr->headers_in.headers) and used ngx_list_push to push a new header in, and it gives me a runtime error. Can someone point me in the right direction? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256239,256239#msg-256239 From nginx-forum at nginx.us Fri Jan 23 15:11:50 2015 From: nginx-forum at nginx.us (173279834462) Date: Fri, 23 Jan 2015 10:11:50 -0500 Subject: smtps mail proxy Message-ID: Hello, I seek advice on configuring nginx as a mail proxy. PREMISES The existing system is based upon postfix and dovecot. The system delivers "n" virtual domains, say, mx.example_1.org, mx.example_2.org, ..., mx.example_n.org, all behind a single IP.
There is no "shared" (Subject Alternative Name) certificate, because adding or releasing a domain would require a new shared certificate, revoking the old one, and taxing the other domains for the novelty.---I refer to SAN certs as "condocerts" (condominium certificates): feel free to use the term yourself.--- We are not a condo, and therefore each domain carries its own set of TLS certificates, managed autonomously. Dovecot manages its side of things nicely, with - per-domain "mail_location", - per-domain password database, - per-domain TLS certificates, - SNI [http://wiki2.dovecot.org/SSL/SNIClientSupport]. Client authentication is entirely delegated to dovecot; postfix uses SASL to dovecot's unix socket. PROBLEM Postfix does not support SNI. OUR AIM Our aim is to add SNI to port 465 (postfix) using nginx as a transparent mail proxy. The following is a mock-up configuration. mail { proxy on; proxy_pass_error_message on; proxy_buffer 4k; # 4k|8k proxy_timeout 24h; xclient on; # http://www.postfix.org/XCLIENT_README.html ssl_dhparam /etc/vmail/dh2048; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; # SNI supported ssl_ciphers DHE-RSA-AES256-SHA; ssl_prefer_server_ciphers on; ssl_session_cache shared:MAIL:10m; #ssl_session_timeout = #smtp_capabilities ...; # pass through wanted <------- #smtp_auth ...; # pass through wanted <------- server { listen 465; protocol smtp; ssl on; timeout 5s; server_name mx.example_1.org; #ssl_password_file /etc/vmail/example_1.org/passdb_keys; # to read .key certificates ssl_certificate /etc/vmail/example_1.org/ssl/mx.crt; ssl_certificate_key /etc/vmail/example_1.org/ssl/mx.key; } server { listen 465; protocol smtp; ssl on; timeout 5s; server_name mx.example_2.org; #ssl_password_file /etc/vmail/example_2.org/passdb_keys; ssl_certificate /etc/vmail/example_2.org/ssl/mx.crt; ssl_certificate_key /etc/vmail/example_2.org/ssl/mx.key; } # ...
server { listen 465; protocol smtp; ssl on; timeout 5s; server_name mx.example_n.org; #ssl_password_file /etc/vmail/example_n.org/passdb_keys; ssl_certificate /etc/vmail/example_n.com/ssl/mx.crt; ssl_certificate_key /etc/vmail/example_n.com/ssl/mx.key; } } OPEN QUESTIONS 1. It is not clear how nginx would talk to postfix. One would expect the proxy to serve on port, say, 4650, being the port exposed by the router, masking postfix on port 465, but nginx does not seem to have a relevant configuration clause. 2. Nginx refuses to start-up, demanding "auth_http". However, we do not need to move authentication to nginx. What we need is a transparent proxy: nginx should listen to dovecot's unix socket, just like postfix does. Thank you for your advice, if any. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256248,256248#msg-256248 From nginx-forum at nginx.us Sat Jan 24 05:07:22 2015 From: nginx-forum at nginx.us (Kurogane) Date: Sat, 24 Jan 2015 00:07:22 -0500 Subject: Redirect problem Message-ID: <7e368bc088bfec68b1972b6144bb9f4a.NginxMailingListEnglish@forum.nginx.org> I've a problem with a redirect http https and using non-www Can you tell me what is wrong? sometimes i have redirect loop. server { listen 80; listen [::1]:80; server_name domain.com; return 301 https://www.domain.com$request_uri; } server { listen 80; listen [::1]:80; server_name www.domain.com; return 301 https://www.domain.com$request_uri; } server { listen 443 ssl spdy; listen [::1]:443 ssl spdy; server_name www.domain.com; ...... } Thanks. 
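A detail worth double-checking in listen directives like the ones above: [::1] is the IPv6 loopback address, so `listen [::1]:80` only accepts IPv6 connections from the local machine; to accept IPv6 traffic from clients, the catch-all form `[::]` is the usual choice. A consolidated sketch (domain.com is a placeholder, as in the question):

```nginx
# [::] binds all IPv6 addresses; [::1] would bind loopback only.
server {
    listen 80;
    listen [::]:80;
    server_name domain.com www.domain.com;
    return 301 https://www.domain.com$request_uri;
}
```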
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256256,256256#msg-256256 From francis at daoine.org Sat Jan 24 13:30:34 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 24 Jan 2015 13:30:34 +0000 Subject: Redirect problem In-Reply-To: <7e368bc088bfec68b1972b6144bb9f4a.NginxMailingListEnglish@forum.nginx.org> References: <7e368bc088bfec68b1972b6144bb9f4a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150124133034.GI15670@daoine.org> On Sat, Jan 24, 2015 at 12:07:22AM -0500, Kurogane wrote: Hi there, > Can you tell me what is wrong? sometimes i have redirect loop. The config you show looks ok to me. What do the logs say, when you have a redirect loop? Does something in your https site redirect back to http? f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jan 24 13:39:45 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 24 Jan 2015 13:39:45 +0000 Subject: How to pass fastcgi custom variables in C? In-Reply-To: <92356cc1a74acebb54b0b7def82b9108.NginxMailingListEnglish@forum.nginx.org> References: <92356cc1a74acebb54b0b7def82b9108.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150124133945.GJ15670@daoine.org> On Wed, Jan 21, 2015 at 07:30:48PM -0500, nginxuser100 wrote: Hi there, > Hi, I would like to have the auth_request fastcgi auth server to send some > custom variables to the fastcgi back-end server. You have the nginx config to send a cookie from the auth_request server to the fastcgi upstream, I think. The same config pattern should work for anything else returned. (set a variable based on what "auth" returns, then send that as a fastcgi_param.) > Could someone give me a pointer on how to this in the nginx.conf and the > auth and back-end servers in C? I saw many examples for PHP but none for C. The nginx conf should match what you already have; the rest is presumably according to the http or fastcgi specs (there should be nothing nginx-specific about it). 
> In the auth server app, I defined "int custom_param=100" for example, and > would like the back-end server to see get this variable and value. Thanks! Return it as a http response header, just like you would for a Set-Cookie:. f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jan 24 13:49:39 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 24 Jan 2015 13:49:39 +0000 Subject: issues with nginx-gridfs 3rd party module In-Reply-To: References: Message-ID: <20150124134938.GK15670@daoine.org> On Wed, Jan 21, 2015 at 05:50:19PM +0530, Swarna Koushik Debroy wrote: Hi there, > I've compiled and installed nginx with gridfs module as given by the > instuctions and it got installed successfully. But then when I configure > the nginx.conf with gridfs directive and restart the nginx server it fails > giving the error as 'nginx: [emerg] unknown directive "gridfs" in > /etc/nginx/nginx.conf:76' . Anyone who can help me fix this issue? If "nginx -t" gives that error message, then the nginx that you called does not have the gridfs modules included. What command or commands do you run when you "restart the nginx server"? f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Jan 24 14:01:18 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 24 Jan 2015 14:01:18 +0000 Subject: smtps mail proxy In-Reply-To: References: Message-ID: <20150124140118.GL15670@daoine.org> On Fri, Jan 23, 2015 at 10:11:50AM -0500, 173279834462 wrote: Hi there, > I seek advice on configuring nginx as a mail proxy. http://nginx.org/r/mail > Our aim is to add SNI to port 465 (postfix) using nginx as transparent mail > proxy. I do not know that TLS SNI is supported in nginx mail proxy. Have you any documentation saying that it is? > 1. It is not clear how nginx would talk to postfix. 
One would expect the > proxy to serve > on port, say, 4650, being the port exposed by the router, masking postfix on > port 465, but nginx does not seem to have a relevant configuration clause. "listen" tells nginx where to listen. "auth_http" tells nginx (eventually) where the upstream for this connection is. http://nginx.org/r/auth_http > 2. Nginx refuses to start-up, demanding "auth_http". However, we do not need > to move authentication to nginx. That's not (just) what auth_http is for. nginx may not be the right tool for this job. f -- Francis Daly francis at daoine.org From kaushik06101992 at gmail.com Sat Jan 24 14:21:17 2015 From: kaushik06101992 at gmail.com (Swarna Koushik Debroy) Date: Sat, 24 Jan 2015 19:51:17 +0530 Subject: issues with nginx-gridfs 3rd party module In-Reply-To: <20150124134938.GK15670@daoine.org> References: <20150124134938.GK15670@daoine.org> Message-ID: It's solved... apparently when installing nginx from source, it gets installed in /usr/local/nginx instead of /etc/nginx. Thanks anyway. On Sat, Jan 24, 2015 at 7:19 PM, Francis Daly wrote: > On Wed, Jan 21, 2015 at 05:50:19PM +0530, Swarna Koushik Debroy wrote: > > Hi there, > > > I've compiled and installed nginx with gridfs module as given by the > > instuctions and it got installed successfully. But then when I configure > > the nginx.conf with gridfs directive and restart the nginx server it > fails > > giving the error as 'nginx: [emerg] unknown directive "gridfs" in > > /etc/nginx/nginx.conf:76' . Anyone who can help me fix this issue? > > If "nginx -t" gives that error message, then the nginx that you called > does not have the gridfs modules included. > > What command or commands do you run when you "restart the nginx server"?
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeoncross at gmail.com Mon Jan 26 01:06:13 2015 From: xeoncross at gmail.com (David) Date: Sun, 25 Jan 2015 19:06:13 -0600 Subject: Danger to Nginx from raw unicode in paths? Message-ID: I was recently wondering if I should filter URLs by characters to only allow what is standard in applications: words, numbers, and a couple of characters [.-_/\]. We know the list of supported URLs and domains is really just a subset of ASCII. However, I'm not totally sure what nginx does when I pass "?" to it. I came up with a simple regular expression to match something that isn't one of those: location ~* "(*UTF8)([^\p{L}\p{N}/\.\-\%\\\]+)" ) { if ($uri ~* "(*UTF8)([^\p{L}\p{N}/\.\-\%\\\]+)" ) { However, I'm wondering if I actually need to use the UTF-8 matching since clients should default to URL encoding (%20) or hex encoding (\x23) the bytes, and the actual transfer should be binary anyway. Here is an example test where I piped almost all 65,000 unicode points to nginx via curl: https://gist.github.com/Xeoncross/acca3f09c5aeddac8c9f For example: $ curl -v http://localhost/? Basically, is there any point to watching URLs for non-standard sequences looking for possible attacks? (FYI: I posted more details that led to this question here: http://stackoverflow.com/questions/28055909/does-nginx-support-raw-unicode-in-paths ) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Mon Jan 26 11:19:54 2015 From: nginx-forum at nginx.us (okamzol) Date: Mon, 26 Jan 2015 06:19:54 -0500 Subject: Behavior of security headers Message-ID: <32585d14ab9fb17c2a993d8a7ac1234b.NginxMailingListEnglish@forum.nginx.org> Hi, I have a question regarding the different security headers (Content-Security-Policy, etc.) which can be set via add_header. In the docs it is mentioned that "add_header" can be set on every level (http, server, location). So I tried to set some security-related headers in the server block for one domain. But this did not work as expected - in fact it did not work at all. Even the "Strict-Transport-Security" header did not work on server level... My first guess was that the nginx version used (1.6.2 stable) might have some problems, so I've updated to 1.7.9 from the mainline repo. But nothing changed... After some fruitless googling for this problem I tried a lot of combinations and found that all headers work only on location level - which confused me. In my opinion these headers should work on server level as well, or do I misunderstand something in these mechanisms? config of my first try (NOT working) server { add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload;"; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options "nosniff"; add_header X-XSS-Protection "1; mode=block"; add_header Content-Security-Policy "default-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https:; connect-src 'self' https:; img-src 'self' https:; style-src 'self' 'unsafe-inline' https:; font-src 'self' https:; frame-src 'self' https:; object-src 'none';"; ... location / .... } config of confused last try (WORKS) server { ...
location / { add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload;"; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options "nosniff"; add_header X-XSS-Protection "1; mode=block"; add_header Content-Security-Policy "default-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https:; connect-src 'self' https:; img-src 'self' https:; style-src 'self' 'unsafe-inline' https:; font-src 'self' https:; frame-src 'self' https:; object-src 'none';"; } } And btw. yes - I've restarted nginx after each config change and also emptied my browser cache before inspecting the headers. Thanks for help and enlightenment :-) Oliver Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256270,256270#msg-256270 From nhadie at gmail.com Mon Jan 26 13:29:17 2015 From: nhadie at gmail.com (ron ramos) Date: Mon, 26 Jan 2015 21:29:17 +0800 Subject: remote_addr not set using x-real-ip Message-ID: Hi All, I would just like to check what mistake i did on implementing real-ip module. 
Im using nginx 1.6.2 with real_ip_module enabled: nginx -V nginx version: nginx/1.6.2 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module* --with-http_realip_module* i have the following entry on nginx.conf real_ip_header X-Forwarded-For; set_real_ip_from 0.0.0.0/0; real_ip_recursive on; and i added the following to format my logs: log_format custom_logs '"$geoip_country_code" - "$http_x_forwarded_for" - "$remote_addr" - in which i get this results: "-" - "172.16.8.39, 102.103.104.105" - "172.16.8.39" - "-" - "172.16.23.72, 203.204.205.206" - "172.16.23.72" "-" - "172.16.163.36, 13.14.15.16" - "172.16.163.36" the first column does not match any country code on the geoip database since it is detected as the private IP ( in which this country's ISP seems to have proxy sending the private IP ) if using real_ip modules i should be seeing the source IP on $remote_addr in the logs, is that correct? please advise if anyone has encountered the same issue. thank you in advanced. Regards, Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Jan 26 13:29:20 2015 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Mon, 26 Jan 2015 16:29:20 +0300 Subject: Behavior of security headers In-Reply-To: <32585d14ab9fb17c2a993d8a7ac1234b.NginxMailingListEnglish@forum.nginx.org> References: <32585d14ab9fb17c2a993d8a7ac1234b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3061806.d9ajY9Tc2s@vbart-laptop> On Monday 26 January 2015 06:19:54 okamzol wrote: > Hi, > > I've a question regarding the different security headers > (Content-Security-Policy, etc.) which can be set via add_header. > In the docs it is mentioned that "add_header" can be set on every level > (http, server, location). So i tried to set some security related header in > the server block related to one domain. But this did not work as expected - > in detail it did not work at all. Even the "Strict-Transport-Security" > header did not work on server level... > > My first guess was that the used nginx version (1.6.2 stable) may have some > problems.. So I've updated to 1.7.9 from mainline repo. But nothing > changed... > > After some resultless googling for this problem I tried a lot of > combinations and found that all headers work on only on location level - > which confused me. In my opinion these headers shall work on server level as > well or do I misunderstand something in these mechanisms? [..] I guess this sentence from the documentation can shed light on your problem: | These directives are inherited from the previous level if and only if | there are no add_header directives defined on the current level. http://nginx.org/r/add_header wbr, Valentin V. 
Bartenev From nginx-forum at nginx.us Mon Jan 26 13:38:08 2015 From: nginx-forum at nginx.us (okamzol) Date: Mon, 26 Jan 2015 08:38:08 -0500 Subject: Behavior of security headers In-Reply-To: <3061806.d9ajY9Tc2s@vbart-laptop> References: <3061806.d9ajY9Tc2s@vbart-laptop> Message-ID: <79fdfcfd865569e24578580cba658fce.NginxMailingListEnglish@forum.nginx.org> That's exactly the point - I wanted to set these headers on server level to become valid for the whole domain and all inherent location blocks. This avoids the need to repeat all headers in each location... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256270,256273#msg-256273 From vbart at nginx.com Mon Jan 26 13:48:18 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 26 Jan 2015 16:48:18 +0300 Subject: Behavior of security headers In-Reply-To: <79fdfcfd865569e24578580cba658fce.NginxMailingListEnglish@forum.nginx.org> References: <3061806.d9ajY9Tc2s@vbart-laptop> <79fdfcfd865569e24578580cba658fce.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1975791.RL2NF9xrh2@vbart-laptop> On Monday 26 January 2015 08:38:08 okamzol wrote: > That's exactly the point - I wanted to set these headers on server level to > become valid for the whole domain and all inherent location blocks. This > avoids the need to repeat all headers in each location... > But are you sure, that you don't have add_header directives in your location blocks at the same time? Please note: server { add_header X-Header-One one; add_header X-Header-Two two; location / { add_header X-Header-Three three; } } in the configuration above only the X-Header-Three will be added to response. wbr, Valentin V. 
Bartenev From pasik at iki.fi Mon Jan 26 14:12:34 2015 From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=) Date: Mon, 26 Jan 2015 16:12:34 +0200 Subject: Proxy without buffering In-Reply-To: References: Message-ID: <20150126141234.GD5962@reaktio.net> On Thu, Jan 22, 2015 at 01:43:10AM -0500, jamesgan wrote: > Hi, all > > Is there any progress in this area so far? It would be great if this > did/will become a standard feature of nginx. > +1 -- Pasi > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,236568,256238#msg-256238 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jan 26 14:35:54 2015 From: nginx-forum at nginx.us (okamzol) Date: Mon, 26 Jan 2015 09:35:54 -0500 Subject: Behavior of security headers In-Reply-To: <1975791.RL2NF9xrh2@vbart-laptop> References: <1975791.RL2NF9xrh2@vbart-laptop> Message-ID: <173da873b88ecf16a34c6099d27ef8f1.NginxMailingListEnglish@forum.nginx.org> OK, if I understand this right - in my original config I have 2 additional add_header (cache-control) directives in /image location. And these 2 directives prevent that the security headers will be applied on server level? It seems so as this will explain why it works when I apply the sec.headers on location level... But how to handle domain-wide headers like those security headers and location specific ones like cache-control? I mean, without repeating all securty headers in each location? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256270,256276#msg-256276 From jgehrcke at googlemail.com Mon Jan 26 14:41:55 2015 From: jgehrcke at googlemail.com (Jan-Philip Gehrcke) Date: Mon, 26 Jan 2015 15:41:55 +0100 Subject: Danger to Nginx from raw unicode in paths? In-Reply-To: References: Message-ID: <54C65233.6090908@googlemail.com> Hello! In reference to your mail subject, one should note that "raw unicode" does not exist. 
You should really understand what the term "unicode" means, what the abstract meaning of unicode code points is, and what UTF-8, for example, really is: it is just one of many possible ways to encode characters into a raw byte representation. Again: there is no such thing as "raw unicode". Other than that, you have already received a good answer on Stack Overflow. So, what is your question, exactly? As stated on SO, for nginx, a location is just a sequence of bytes. You surely understand that the space of byte sequences (given a certain length) is larger than just the 65,000 items that you have worked with. From my naive point of view I would say: no, there definitely is no point in looking out for "non-standard" sequences in the most general sense, because there are just too many of them. Having a proper whitelist approach (specify those locations that *should* work in a certain way, and reject all other requests) is a very safe concept. Cheers, Jan-Philip -- http://gehrcke.de From reallfqq-nginx at yahoo.fr Mon Jan 26 16:03:06 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 26 Jan 2015 17:03:06 +0100 Subject: Behavior of security headers In-Reply-To: <173da873b88ecf16a34c6099d27ef8f1.NginxMailingListEnglish@forum.nginx.org> References: <1975791.RL2NF9xrh2@vbart-laptop> <173da873b88ecf16a34c6099d27ef8f1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, I guess the 'problem' you are struggling with is one you seem to inflict on yourself. As Valentin explained, and as is the case with other directives as well (fastcgi_param immediately comes to mind), if you specify some add_header directives at a certain level, it will cut off the default inheritance property, effectively *not* applying add_header directives defined at upper levels. The real question here is: why do you want to avoid duplicating the common add_header fields over all the locations? The obvious answer being the 'ease' of maintenance is maybe flawed: 1.
Two years later, to know the configuration applied to a location by a certain directive, you would need to look in several places. If you forgot you put some at server level, you might end up with 'strange' behaviors. Even more true if the maintenance is done by someone else... 2. If you want to replace the configuration of a directive amongst all locations where it is defined, standard Linux (UNIX?) commands such as grep, sed, cut, awk, etc. are there to handle such a repetitive job. 3. Finally, generating similar or identical copies of the same blocks in high volumes is generally not done by hand, but rather with tools such as configuration management ones. I suggest you watch the video 'Scalable configuration' from Igor Sysoev, recorded during the nginx user conference from last year: that would maybe help you understand better what I attempted to explain here. What you sometimes think is a problem might actually save you from getting into trouble without even noticing it... What is 'inefficient' to human eyes might be 'irrelevant' machine-wise... the reverse might also be true. :o) --- *B. R.* On Mon, Jan 26, 2015 at 3:35 PM, okamzol wrote: > OK, if I understand this right - in my original config I have 2 additional > add_header (cache-control) directives in /image location. And these 2 > directives prevent that the security headers will be applied on server > level? It seems so as this will explain why it works when I apply the > sec.headers on location level... > > But how to handle domain-wide headers like those security headers and > location specific ones like cache-control? I mean, without repeating all > securty headers in each location? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256270,256276#msg-256276 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Mon Jan 26 20:11:17 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 26 Jan 2015 20:11:17 +0000 Subject: remote_addr not set using x-real-ip In-Reply-To: References: Message-ID: <20150126201117.GM15670@daoine.org> On Mon, Jan 26, 2015 at 09:29:17PM +0800, ron ramos wrote: Hi there, > I would just like to check what mistake i did on implementing real-ip > module. > real_ip_recursive on; That says "tell me the last untrusted address from the list". http://nginx.org/r/real_ip_recursive > set_real_ip_from 0.0.0.0/0; But that says "no address on the list is untrusted". So nginx will do something else -- probably tell you the first address from the list. It's no more wrong than anything else, given what you have configured it to do. Either turn off recursive, or configure your trusted addresses correctly. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Jan 27 03:08:54 2015 From: nginx-forum at nginx.us (abhinanda) Date: Mon, 26 Jan 2015 22:08:54 -0500 Subject: Modify request body before sending to upstream In-Reply-To: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> References: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, Any info on this? I've been trying a ton of ways to achieve this but it seems like I'm really lost. To repeat with clarity, I need to operate on the request body first, modify it, and THEN send it off to upstream servers with the modified content. Any pointers would help.
Please :) Abhishek Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256281#msg-256281 From nginx-forum at nginx.us Tue Jan 27 06:10:55 2015 From: nginx-forum at nginx.us (mex) Date: Tue, 27 Jan 2015 01:10:55 -0500 Subject: Modify request body before sending to upstream In-Reply-To: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> References: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Abhishek, I'm not 100% sure I understand exactly what you'd like to do, especially the request-body-manipulation part. nginx_lua is usually quite handy when you need to manipulate a request: http://wiki.nginx.org/HttpLuaModule#access_by_lua You can jump into the access or rewrite phase, do your processing, and pass the result to your upstream servers using proxy_pass and all the upstream {} goodies. cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256282#msg-256282 From nginx-forum at nginx.us Tue Jan 27 06:20:54 2015 From: nginx-forum at nginx.us (abhinanda) Date: Tue, 27 Jan 2015 01:20:54 -0500 Subject: Modify request body before sending to upstream In-Reply-To: References: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <831f98b531b963c9a0be82fd6b2b9b7e.NginxMailingListEnglish@forum.nginx.org> Thanks! I tried ngx_lua but I might've been doing something wrong. It complained that I am not allowed to use "proxy_pass" following a content rewrite.
To make it even simpler, here's an example: - curl -X POST --data "ABCD" localhost:8080 - an NGINX module that calls a custom C function to alter the string, say "a[1]+=5", so now we have "AGCD" - send "AGCD" to upstream app - respond with whatever the upstream responds (no filters beyond this) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256283#msg-256283 From nginx-forum at nginx.us Tue Jan 27 06:29:01 2015 From: nginx-forum at nginx.us (mex) Date: Tue, 27 Jan 2015 01:29:01 -0500 Subject: Modify request body before sending to upstream In-Reply-To: <831f98b531b963c9a0be82fd6b2b9b7e.NginxMailingListEnglish@forum.nginx.org> References: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> <831f98b531b963c9a0be82fd6b2b9b7e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5dd83773d963d840ed1f7d3123dc4d03.NginxMailingListEnglish@forum.nginx.org> Hi, > I tried ngx_lua but I might've been doing something wrong. It > complained that I am not allowed to use "proxy_pass" following a > content rewrite. you should read the documentation carefully: http://wiki.nginx.org/HttpLuaModule#content_by_lua "Do not use this directive and other content handler directives in the same location. For example, this directive and the proxy_pass directive should not be used in the same location."
what you can do is use the access_by_lua or rewrite_by_lua - directive cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256284#msg-256284 From nginx-forum at nginx.us Tue Jan 27 07:06:25 2015 From: nginx-forum at nginx.us (abhinanda) Date: Tue, 27 Jan 2015 02:06:25 -0500 Subject: Modify request body before sending to upstream In-Reply-To: <5dd83773d963d840ed1f7d3123dc4d03.NginxMailingListEnglish@forum.nginx.org> References: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> <831f98b531b963c9a0be82fd6b2b9b7e.NginxMailingListEnglish@forum.nginx.org> <5dd83773d963d840ed1f7d3123dc4d03.NginxMailingListEnglish@forum.nginx.org> Message-ID: <210a2b6554aed8f2605c1fd0428f01b5.NginxMailingListEnglish@forum.nginx.org> Still no luck. Here's my config: upstream wservers { server localhost:8001 max_fails=3 fail_timeout=2s weight=100; server localhost:8002 max_fails=3 fail_timeout=2s weight=100; } server { location /foo { rewrite_by_lua ' ngx.print("yay") '; proxy_pass http://wservers; } location /bar { proxy_pass http://wservers; } } Here are the curl commands: [vm ~]$ curl localhost:8080/bar -X POST --data 'hello' UPSTREAM: hello :UPSTREAM [vm ~]$ curl localhost:8080/foo -X POST --data 'hello' yay What I need is for the second curl command to output: UPSTREAM: yay :UPSTREAM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256285#msg-256285 From nginx-forum at nginx.us Tue Jan 27 07:14:15 2015 From: nginx-forum at nginx.us (abhinanda) Date: Tue, 27 Jan 2015 02:14:15 -0500 Subject: Modify request body before sending to upstream In-Reply-To: <210a2b6554aed8f2605c1fd0428f01b5.NginxMailingListEnglish@forum.nginx.org> References: <69f4691d87e66a881200eb8e1e7e48ac.NginxMailingListEnglish@forum.nginx.org> <831f98b531b963c9a0be82fd6b2b9b7e.NginxMailingListEnglish@forum.nginx.org> <5dd83773d963d840ed1f7d3123dc4d03.NginxMailingListEnglish@forum.nginx.org> 
<210a2b6554aed8f2605c1fd0428f01b5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <522f95ab6dbe34a106f8ad2a8bb3d4d7.NginxMailingListEnglish@forum.nginx.org> Never mind my previous post. I solved it finally :) location /foo { rewrite_by_lua ' res = ngx.location.capture("/bar", {method = ngx.HTTP_POST, body = "jjj"}) res = ngx.location.capture("/bar", {method = ngx.HTTP_POST, body = res.body}) ngx.print(res.body) '; } location /bar { proxy_pass http://wservers; } [vm ~]$ curl localhost:8080/bar -X POST --data 'hello' UPSTREAM: hello :UPSTREAM [vm ~]$ curl localhost:8080/foo -X POST --data 'hello' UPSTREAM: UPSTREAM: jjj :UPSTREAM :UPSTREAM Thank you so much!! You saved me a lot of time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256226,256286#msg-256286 From nginx-forum at nginx.us Wed Jan 28 09:17:17 2015 From: nginx-forum at nginx.us (kipras) Date: Wed, 28 Jan 2015 04:17:17 -0500 Subject: HttpLuaModule - SPDY seems fully supported now? Message-ID: <90a8f509822c1b90a47e5286c9717d4a.NginxMailingListEnglish@forum.nginx.org> Hi, in the HttpLuaModule docs it is written that SPDY mode is not fully supported yet: http://wiki.nginx.org/HttpLuaModule#SPDY_Mode_Not_Fully_Supported Specifically, that "ngx.location.capture()" does not work yet. However, i ran some code that uses ngx.location.capture(), with SPDY and everything worked (both SPDY and the Lua code). So is it possible that SPDY mode now works fully under ngx_lua, only the documentation is not updated? nginx version: openresty/1.7.4.1 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256308,256308#msg-256308 From al-nginx at none.at Wed Jan 28 09:43:30 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 28 Jan 2015 10:43:30 +0100 Subject: Bug or feature Message-ID: <3b40944f9b136da5f22561666ac8bc74@none.at> Dear Reader. I have set up only mod_proxy http://nginx.org/en/docs/http/ngx_http_proxy_module.html .... proxy_pass $my_upstream; ... no mod_upstream.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html My logformat looks like this. ############ log_format upstream_log '$remote_addr [$time_local] ' '"$request" $status $body_bytes_sent ' 'up_resp_leng $upstream_response_length up_stat $upstream_status ' 'up_resp_time $upstream_response_time request_time $request_time'; ############ Is this an expected behavior ;-)? Cheers Aleks From francis at daoine.org Wed Jan 28 19:02:28 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 28 Jan 2015 19:02:28 +0000 Subject: Bug or feature In-Reply-To: <3b40944f9b136da5f22561666ac8bc74@none.at> References: <3b40944f9b136da5f22561666ac8bc74@none.at> Message-ID: <20150128190228.GA3125@daoine.org> On Wed, Jan 28, 2015 at 10:43:30AM +0100, Aleksandar Lazic wrote: Hi there, Feature. > I have set up only mod_proxy > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html > > .... > proxy_pass $my_upstream; > ... > > > no mod_upstream. > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html You are using the "upstream" module. You are not using any specific directives from the module, so they all take their default values (which happens to be "unset"). > Is this an expected behavior ;-)? That it works, is expected. f -- Francis Daly francis at daoine.org From MeiKen.Tan at itelligence.com.my Thu Jan 29 07:42:40 2015 From: MeiKen.Tan at itelligence.com.my (MeiKen.Tan at itelligence.com.my) Date: Thu, 29 Jan 2015 15:42:40 +0800 Subject: Nginx Supports SLES 11? Message-ID: Hi, According to http://nginx.org/en/linux_packages.html, Nginx only supports SLES12. Can Nginx run on SLES 11 also? Thanks, Mei Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu Jan 29 07:47:34 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 29 Jan 2015 10:47:34 +0300 Subject: Nginx Supports SLES 11?
In-Reply-To: References: Message-ID: <54C9E596.1060000@nginx.com> Hello, On 1/29/15 10:42 AM, MeiKen.Tan at itelligence.com.my wrote: > Hi, > > According to _http://nginx.org/en/linux_packages.html,_Nginx only > supports SLES12. Can Nginx runs on SLES 11 also? > While we don't provide binary nginx packages for SLES 11 you can compile and run nginx on this platform. It works just fine there. -- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Thu Jan 29 07:59:45 2015 From: nginx-forum at nginx.us (mex) Date: Thu, 29 Jan 2015 02:59:45 -0500 Subject: Nginx Supports SLES 11? In-Reply-To: <54C9E596.1060000@nginx.com> References: <54C9E596.1060000@nginx.com> Message-ID: <28ca0c2dce8a54ecef46da0cee9d18cc.NginxMailingListEnglish@forum.nginx.org> you'll need a lot of packages from the SDK-DVDs. IIRC those are not available as online-repos, but the situation might have changed. mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256323,256325#msg-256325 From nginx-forum at nginx.us Thu Jan 29 10:52:54 2015 From: nginx-forum at nginx.us (bongtv) Date: Thu, 29 Jan 2015 05:52:54 -0500 Subject: Proxy cache of X-Accel-Redirect, how? Message-ID: Hi! Tried to cache the X-Accel-Redirect responses from Phusion Passenger application server with the use of a second layer without success (followed the hint on http://forum.nginx.org/read.php?2,241734,241948#msg-241948). Configuration: 1) Application server (Phusion Passenger) adds X-Accel-Redirect header to response sends to 2) NGINX server >> tries to cache << proxy_ignore_headers X-Accel-Redirect; proxy_pass_header X-Accel-Redirect; passenger_pass_header X-Accel-Redirect; sends to 3) NGINX server delivers file But caching of the request on server (2) does not work. Any idea?
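[Editor's note: the two-layer arrangement described above might be sketched like this; ports, paths, and the cache zone name are hypothetical and the fragment is untested:]

```nginx
# Server (2): caches the app response. Ignoring X-Accel-Redirect here
# stops this layer from acting on the header, while pass_header still
# forwards it so the next layer can act on it.
server {
    listen 8080;
    location / {
        proxy_cache app_cache;              # zone declared elsewhere
        proxy_ignore_headers X-Accel-Redirect;
        proxy_pass_header X-Accel-Redirect;
        proxy_pass http://127.0.0.1:8081;   # Passenger app server (1)
    }
}

# Server (3): receives the (possibly cached) response, acts on
# X-Accel-Redirect and delivers the file from an internal location.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;   # layer (2)
    }
    location /protected/ {
        internal;
        alias /srv/files/;
    }
}
```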
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256327,256327#msg-256327 From juriy.foboss at gmail.com Thu Jan 29 11:32:08 2015 From: juriy.foboss at gmail.com (Juriy Strashnov) Date: Thu, 29 Jan 2015 14:32:08 +0300 Subject: Nginx Supports SLES 11? In-Reply-To: References: Message-ID: There are some precompiled packages for SLE 11 SP2, SP3 from SuSE community: http://software.opensuse.org/package/nginx On Thu, Jan 29, 2015 at 10:42 AM, wrote: > Hi, > > According to *http://nginx.org/en/linux_packages.html,* > Nginx only supports SLES12. > Can Nginx runs on SLES 11 also? > > Thanks, > Mei Ken > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Best regards, Juriy Strashnov Mob. +7 (953) 742-1550 E-mail: j.strashnov at me.com Please consider the environment before printing this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From multiformeingegno at gmail.com Thu Jan 29 11:50:41 2015 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Thu, 29 Jan 2015 12:50:41 +0100 Subject: default_server directive not respected Message-ID: I have multiple files each with a config for a different vhost. On one of these config files (included in the main nginx config file) I set the default_server directive: server { listen 80; listen 443 ssl default_server spdy; server_name 188.166.X.XXX; root /var/www/default; index index.php index.html; ... } ... but it's not respected. If I point the A record of a domain I didn't add in a nginx server block, the first server block in alphabetical order is picked up (instead of the default_server). Why? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Jan 29 12:42:45 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 29 Jan 2015 07:42:45 -0500 Subject: default_server directive not respected In-Reply-To: References: Message-ID: Does this one help? http://wiki.nginx.org/ServerBlockExample Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256329,256333#msg-256333 From francis at daoine.org Thu Jan 29 12:48:14 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Jan 2015 12:48:14 +0000 Subject: default_server directive not respected In-Reply-To: References: Message-ID: <20150129124814.GB3125@daoine.org> On Thu, Jan 29, 2015 at 12:50:41PM +0100, Lorenzo Raffio wrote: Hi there, > listen 80; > listen 443 ssl default_server spdy; > ... but it's not respected. If I point the A record of a domain I didn't > add in a nginx server block, the first server block in alphabetical order > is picked up (instead of the default_server). > Why? If your test request is https, it is worth further investigation. If your test request is http, you should be aware that default_server refers to the listen address:port, and you don't have one on port 80. http://nginx.org/r/listen The usual "what request do you make / what response do you get / what response do you expect" would help here. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jan 29 14:20:36 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 29 Jan 2015 09:20:36 -0500 Subject: Nginx with php configuration how to block all requests/urls other than two? Message-ID: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> So i use nginx with PHP and i have the following two urls i want to allow access on the subdomain. 
The full url would be sub1.domain.com/index.php?option=com_hwdmediashare&task=addmedia.upload&base64encryptedstring if ( $args ~ 'option=com_hwdmediashare&task=addmedia.upload([a-zA-Z0-9-_=&])' ) { } And sub1.domain.com/media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf But i can't figure out in nginx how to block all other traffic/requests on the subdomain apart from those two urls. Can anyone help me get an understanding of the location block of nginx so i can block access to all links apart from those two? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256336,256336#msg-256336 From nginx-forum at nginx.us Thu Jan 29 15:09:24 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 29 Jan 2015 10:09:24 -0500 Subject: Nginx with php configuration how to block all requests/urls other than two? In-Reply-To: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> References: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> Message-ID: Use map, map $request $allowonly { default 1; ~*addmedia\.upload([a-zA-Z0-9-_=&]) 0; } inside location {} if ($allowonly) { return 404; } Untested but should give you enough to test with. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256336,256337#msg-256337 From nginx-forum at nginx.us Thu Jan 29 15:55:21 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Thu, 29 Jan 2015 10:55:21 -0500 Subject: Nginx with php configuration how to block all requests/urls other than two?
In-Reply-To: References: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> Message-ID: <161fdb7ed0376b73e5254af84930d0e3.NginxMailingListEnglish@forum.nginx.org> map $request $allowonly { default 1; ~*addmedia\.upload([a-zA-Z0-9-_=&]) 0; } location / { if ($allowonly) { try_files $uri $uri/ /index.php?$args; } } location ~ \.php$ { ##fastcgi pass etc here } That would be my location block to deny all requests except for that single php url but i cant add the static file to the map request since it would be handled by PHP when its a static file. How should make this url be accepted "sub1.domain.com/media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256336,256338#msg-256338 From nginx-forum at nginx.us Thu Jan 29 17:29:27 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 29 Jan 2015 12:29:27 -0500 Subject: Nginx with php configuration how to block all requests/urls other than two? In-Reply-To: <161fdb7ed0376b73e5254af84930d0e3.NginxMailingListEnglish@forum.nginx.org> References: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> <161fdb7ed0376b73e5254af84930d0e3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <738c614338037a7a18bfd3414cab2b9b.NginxMailingListEnglish@forum.nginx.org> In the map flip the 1 and 0 around, if $allowonly=1 then the IF is true (unless that's what you want). General rule for IF's; only use it to return a state. if ..... return .... continue with complex configuration items. Don't do: 'if ..... do complex things ....' (unless proceeded with Lua finishing with an nginx if....return) If you want to expand the logic what is ok and what not, have a look at my conf\nginx-simple-WAF.conf where 3 maps are combined into 1 result map. In your case you could use 2 mappings, 1 for normal requests and 1 for passed-on php requests. 
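[Editor's note: the two-mapping idea mentioned above might look roughly like this; pattern and variable names are hypothetical and, like the earlier map, this is untested:]

```nginx
# Map 1: requests allowed as plain static files
map $request $allow_static {
    default 0;
    ~*/media/com_hwdmediashare/assets/swf/Swiff\.Uploader\.swf 1;
}

# Map 2: requests allowed to be passed on to PHP
map $request $allow_php {
    default 0;
    ~*addmedia\.upload 1;
}

# Combine both results: blocked unless either map said "allow"
map "$allow_static$allow_php" $blocked {
    default 1;
    ~1 0;
}
```

inside a location {} the combined result is then used the same way as $allowonly above: if ($blocked) { return 404; }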
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256336,256339#msg-256339 From reallfqq-nginx at yahoo.fr Thu Jan 29 17:56:30 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 29 Jan 2015 18:56:30 +0100 Subject: Nginx with php configuration how to block all requests/urls other than two? In-Reply-To: <738c614338037a7a18bfd3414cab2b9b.NginxMailingListEnglish@forum.nginx.org> References: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> <161fdb7ed0376b73e5254af84930d0e3.NginxMailingListEnglish@forum.nginx.org> <738c614338037a7a18bfd3414cab2b9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Chained maps, maybe? http { map $arg_option $step2 { default 1; com_hwdmediashare $arg_task; } map $step2 $step3 { default 1; addmedia.upload $request; } map $step3 $blocked { default 1; ~*(?:\?|&)?base64encryptedstring 0; } server { location / { return 404; } location /index.php { if ($blocked) { return 404; } } location /media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf { } } } --- *B. R.* On Thu, Jan 29, 2015 at 6:29 PM, itpp2012 wrote: > In the map flip the 1 and 0 around, if $allowonly=1 then the IF is true > (unless that's what you want). > > General rule for IF's; only use it to return a state. > > if ..... return .... > continue with complex configuration items. > > Don't do: 'if ..... do complex things ....' (unless proceeded with Lua > finishing with an nginx if....return) > > If you want to expand the logic what is ok and what not, have a look at my > conf\nginx-simple-WAF.conf > where 3 maps are combined into 1 result map. > > In your case you could use 2 mappings, 1 for normal requests and 1 for > passed-on php requests.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256336,256339#msg-256339 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 29 18:22:46 2015 From: nginx-forum at nginx.us (squonk) Date: Thu, 29 Jan 2015 13:22:46 -0500 Subject: rbtree in ngx_http_upstream_fair_module.c Message-ID: <960376f2cc08fabc26965ce440b99590.NginxMailingListEnglish@forum.nginx.org> hi.. Just wanted to ensure my understanding of rbtree usage in Grzegorz Nosek's upstream fair load balancer is correct. I believe the rbtree is necessary because when nginx.conf is reloaded workers may continue to reference upstream server metadata from earlier versions aka generations of the nginx.conf file. The rbtree stores the metadata until none of the workers reference it. The extra complexity is needed because this load balancer tracks server load across requests and nginx.conf reloads. Does this seem accurate? If so, is this currently considered a recommended way to handle this situation? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256341,256341#msg-256341 From francis at daoine.org Thu Jan 29 18:46:54 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 29 Jan 2015 18:46:54 +0000 Subject: Nginx with php configuration how to block all requests/urls other than two? In-Reply-To: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> References: <98ef92ad43b47137092e76c4e1ef9ded.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150129184654.GC3125@daoine.org> On Thu, Jan 29, 2015 at 09:20:36AM -0500, c0nw0nk wrote: > So i use nginx with PHP and i have the following two urls i want to allow > access on the subdomain. 
> > The full url would be > sub1.domain.com/index.php?option=com_hwdmediashare&task=addmedia.upload&base64encryptedstring Usually you don't want to match $args, because the order is not fixed. But if you are happy that it is in your case, you can just do: server { server_name sub1.domain.com; location / { return 404; } location = /index.php { if ( $args !~ 'option=com_hwdmediashare&task=addmedia.upload' ) { return 404; } # do whatever } } Change "404" to whatever you want "block" to mean. "# do whatever" will probably involve fastcgi_pass or something similar. Note that this does not restrict access to exactly this query string; if it matters, you can tighten things. But it is probably simpler for your index.php to check that arguments are exactly what is expected or else to fail. > And > > sub1.domain.com/media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf location = /media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf {} > But i cant figure out in nginx how to block all other traffic/requests on > the subdomain apart from those two urls location / matches any normal request that does not match any other location. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Jan 30 02:39:12 2015 From: nginx-forum at nginx.us (squonk) Date: Thu, 29 Jan 2015 21:39:12 -0500 Subject: rbtree in ngx_http_upstream_fair_module.c In-Reply-To: <960376f2cc08fabc26965ce440b99590.NginxMailingListEnglish@forum.nginx.org> References: <960376f2cc08fabc26965ce440b99590.NginxMailingListEnglish@forum.nginx.org> Message-ID: I think i understand a bit better now. The tree is storing metadata for potentially multiple upstream groups per generation. It seems like a reasonable implementation given the expected short duration of threads referencing data from older generations (hence a shallow tree) and the fact there is only one read from the tree per request. Anyway.. i asked the question so i'll fill in what i find out. I may well have missed something..
any help appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256341,256350#msg-256350 From nginx-forum at nginx.us Fri Jan 30 06:35:30 2015 From: nginx-forum at nginx.us (justink101) Date: Fri, 30 Jan 2015 01:35:30 -0500 Subject: Google QUIC support in nginx Message-ID: <829d371d1e545cf278fcd05c96c63a7f.NginxMailingListEnglish@forum.nginx.org> Any plans to support Google QUIC[1] in nginx? [1] http://en.wikipedia.org/wiki/QUIC Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256352,256352#msg-256352 From admin at grails.asia Fri Jan 30 07:28:39 2015 From: admin at grails.asia (jtan) Date: Fri, 30 Jan 2015 15:28:39 +0800 Subject: Google QUIC support in nginx In-Reply-To: <829d371d1e545cf278fcd05c96c63a7f.NginxMailingListEnglish@forum.nginx.org> References: <829d371d1e545cf278fcd05c96c63a7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: This would be interesting. But I guess we would need to wait. On Fri, Jan 30, 2015 at 2:35 PM, justink101 wrote: > Any plans to support Google QUIC[1] in nginx? > > [1] http://en.wikipedia.org/wiki/QUIC > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256352,256352#msg-256352 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Freelance Grails and Java developer -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 30 07:51:50 2015 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 30 Jan 2015 02:51:50 -0500 Subject: Nginx with php configuration how to block all requests/urls other than two? In-Reply-To: <20150129184654.GC3125@daoine.org> References: <20150129184654.GC3125@daoine.org> Message-ID: Thanks for the help guys, i have it working but i am not sure which of these two configs i should be using / which one would be better.
itpp2012's config : map $request $allowonly { default 1; ~*addmedia\.upload([a-zA-Z0-9-_=&]) 0; } server { listen 80; listen [::]:80; server_name sub1.domain.com; index index.php index.html index.htm default.html default.htm; location / { return 404; } location /media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf { root z:/public_www; expires max; } location ~ \.php$ { if ($allowonly) { return 403; } try_files $uri =404; ##fastcgi stuff here } } And then the config Francis recommends : server { listen 80; listen [::]:80; server_name sub1.domain.com; location / { return 404; } location = /index.php { if ( $args !~ 'option=com_hwdmediashare&task=addmedia.upload' ) { return 404; } try_files $uri =404; # do whatever (So fastcgi stuff here) } location = /media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf { root z:/public_www; expires max; } } itpp2012's config is the one i am currently using and it works well. Should i change anything or just stick with it :) ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256336,256354#msg-256354 From shahzaib.cb at gmail.com Fri Jan 30 11:33:16 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 30 Jan 2015 16:33:16 +0500 Subject: Limit incoming bandwidth with nginx !! Message-ID: Hi, is there a way we can limit incoming bandwidth (from Remote to linux box) using nginx ? Nginx is forwarding user requests to different URL and downloading videos locally due to which server's incoming port is choking on 1Gbps for large number of concurrent users. If we can lower incoming bandwidth to 500Mbps it'll surely help us. Regards. shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Fri Jan 30 11:57:17 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Fri, 30 Jan 2015 11:57:17 +0000 Subject: Will this work, is it the best way? Message-ID: Hi, Slightly complicated setup with 2 nginx servers.
server1 has a public ipv4 address using proxy_pass to server2 over ipv6, which only has a public ipv6; this then has various upstreams for each subdomain. ipv6 capable browsers connect directly to server2, those with only ipv4 will connect via server1. I'm currently considering something like the below config. server1 - proxy all subdomain requests to upstream ipv6 server: http { server_name *.example.com; location / { proxy_pass http://fe80::1337; } } server2: http { server_name ~^(?<subdomain>\w+)\.example\.com$; location / { proxy_pass http://$subdomain; } upstream subdomain1 { server 127.0.0.1:1234; } } The theory here is that each subdomain and upstream would match, meaning that when adding another upstream it would just need the upstream{} block configuring and automatically work. I realise there's dns stuff etc but that's out of scope for this list and I can deal with that. Does this seem sound? It's not going to see major usage but hopefully this will reduce work when adding new upstreams. If you've a better way to achieve this please let me know. Steve. From vbart at nginx.com Fri Jan 30 13:54:39 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 30 Jan 2015 16:54:39 +0300 Subject: Limit incoming bandwidth with nginx !! In-Reply-To: References: Message-ID: <2161557.jvIEiDqLyV@vbart-workstation> On Friday 30 January 2015 16:33:16 shahzaib shahzaib wrote: > Hi, > > is there a way we can limit incoming bandwidth (from Remote to linux > box) using nginx ? Nginx is forwarding user requests to different URL and > downloading videos locally due to which server's incoming port is choking > on 1Gbps for large number of concurrent users. If we can lower incoming > bandwidth to 500Mbps it'll surely help us. > http://nginx.org/r/proxy_limit_rate wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sat Jan 31 15:26:50 2015 From: nginx-forum at nginx.us (Olaf van der Spek) Date: Sat, 31 Jan 2015 10:26:50 -0500 Subject: Why does fastcgi_keep_conn default to off?
Message-ID: Why does fastcgi_keep_conn default to off? "On" seems to be the faster option. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256371,256371#msg-256371 From lloydchang at gmail.com Sat Jan 31 16:21:55 2015 From: lloydchang at gmail.com (Lloyd Chang) Date: Sat, 31 Jan 2015 08:21:55 -0800 Subject: Will this work, is it the best way? In-Reply-To: References: Message-ID: Hello Steve,

- Best answer is try and see if it meets your expectations; thanks
- While reading your snippet, my initial questions are: Why 2 servers? Why not simplify?
- In your proposal: server1 listens on TCP port(s) on public IPv4, and uses IPv6 to proxy_pass to server2; then server2 listens on public IPv6, and uses IPv4 to proxy_pass to the subdomain, with upstream (perhaps for load balance and/or failover?)
- As you agree, this is slightly complicated. Why not simplify?
- Reconfigure DNS for cname-server1 to server2, for IPv4 and IPv6
- In your snippet, server2 supports IPv4 and IPv6 if you expect it to upstream via private IPv4 127.0.0.1:[?]
- I don't fully understand why server2 upstream isn't IPv6 ::1:[?] considering your primary intent for server2 is IPv6 usage
- Perhaps you meant upstream localhost:[?] to try both IPv4 and IPv6?

Thanks Cheers, Lloyd On Friday, January 30, 2015, Steve Wilson wrote: > Hi, > > Slightly complicated setup with 2 nginx servers. > > server1 has a public ipv4 address using proxy_pass to server2 over ipv6 > which only has a public ipv6, this then has various upstreams for each > subdomain. > > ipv6 capable browsers connect directly to server2, those with only ipv4 > will connect via server1. > > I'm currently considering something like the below config.
> > server1 - proxy all subdomain requests to upstream ipv6 server: > > http { > server_name *.example.com; > location / { > proxy_pass http://fe80::1337; > } > } > > server2: > > http { > server_name ~^(?<subdomain>\w+)\.example\.com$; > location / { > proxy_pass http://$subdomain; > } > > upstream subdomain1 { > server 127.0.0.1:1234; > } > } > > The theory here is that each subdomain and upstream would match, meaning > that when adding another upstream it would just need the upstream{} block > configuring and automatically work. > > I realise there's dns stuff etc but that's out of scope for this list and > I can deal with that. > > Does this seem sound? It's not going to see major usage but hopefully this > will reduce work when adding new upstreams. > > If you've a better way to achieve this please let me know. > > Steve. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erafaloff at gmail.com Sat Jan 31 18:05:18 2015 From: erafaloff at gmail.com (Eric R.) Date: Sat, 31 Jan 2015 13:05:18 -0500 Subject: Intermittent SSL Handshake Errors Message-ID: Hi, We are using round-robin DNS to distribute requests to three servers all running identically configured nginx. Connections then go upstream to HAProxy and then to our Rails app. About two weeks ago, users began to experience intermittent SSL handshake errors. Users reported that these appeared as "ssl_error_no_cypher_overlap" in the browser. Most of our reports have come from Firefox users, although we have seen reports from Safari and stock Android browser users as well. In our nginx error logs, we began to see consistent errors across all three servers. They started at around the same time and no recent modifications were made to hardware or software: ...
2015/01/13 12:22:59 [crit] 11871#0: *140260577 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443 2015/01/13 12:23:09 [crit] 11874#0: *140266246 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443 2015/01/13 12:23:54 [crit] 11862#0: *140293705 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443 2015/01/13 12:23:54 [crit] 11862#0: *140293708 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443 2015/01/13 12:25:18 [crit] 11870#0: *140342155 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443 ... Suspecting that this may be related to our SSL configuration in nginx and a recent update to a major browser, I decided to get us up to date. Previously we were on CentOS5 and could only use an older version of OpenSSL with the latest security patches. This meant we could only support TLSv1.0 and a few of the secure recommended ciphers. After upgrading to CentOS6 and implementing Mozilla's recommended configurations for TLSv1.0, TLSv1.1, and TLSv1.2 support, I am confident that we are following best practices for SSL browser compatibility and security. Unfortunately this did not fix the issue. Users began to report a new error in their browser: "ssl_error_inappropriate_fallback_alert", and this is currently reflected in our nginx error logs across all three servers: ... 
2015/01/31 03:24:33 [crit] 30658#0: *57298755 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:35 [crit] 30661#0: *57299105 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:41 [crit] 30657#0: *57300774 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:41 [crit] 30657#0: *57300783 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:41 [crit] 30661#0: *57300785 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
...

Thinking that I had ruled out a faulty SSL stack or nginx configuration, I focused on monitoring the network connections on these servers. ESTABLISHED connections are currently at 13k and TIME_WAIT is at 94k on one server, if that gives any indication of the type of connections we are dealing with. The other two servers have very similar stats. This is typical for peak hours of traffic.

I tried tuning kernel params: lowering tcp_fin_timeout, increasing tcp_max_syn_backlog, widening ip_local_port_range, turning on tcp_tw_reuse, and other popular tuning practices. Nothing has helped so far, and more users continue to contact us about issues using our site.

I've exhausted my ideas and I'm not quite sure what's gone wrong. I would be extremely appreciative of any guidance list members could provide. Below are more technical details about our installation and configuration of nginx.
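For reference, the kernel tuning I applied looks roughly like the following; the exact values here are illustrative, not our precise production settings:

```
# /etc/sysctl.d/99-tcp-tuning.conf -- illustrative values only
net.ipv4.tcp_fin_timeout = 15               # lowered from the default of 60
net.ipv4.tcp_max_syn_backlog = 8192         # raised from the default
net.ipv4.ip_local_port_range = 1024 65535   # widened ephemeral port range
net.ipv4.tcp_tw_reuse = 1                   # allow reuse of TIME_WAIT sockets for outbound connections
```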
nginx -V output:

nginx version: nginx/1.6.2
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

nginx config files:

--- /etc/nginx/nginx.conf ---

user nginx;
worker_processes 12;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 50000;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format with_cookie '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           '"$http_referer" "$http_user_agent" "$cookie_FL"';

    access_log /var/log/nginx/access.log;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;
    gzip_vary on;
    server_names_hash_bucket_size 64;

    set_real_ip_from *.*.*.*;
    real_ip_header X-Forwarded-For;

    include /etc/nginx/upstreams.conf;
    include /etc/nginx/sites-enabled/*;
}

--- /etc/nginx/sites-enabled/fl-ssl.conf ---

server {
    root /var/www/fl/current/public;

    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/wildcard.fl.pem;
    ssl_certificate_key /etc/nginx/ssl/wildcard.fl.key;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    server_name **********.com;

    access_log /var/log/nginx/fl.ssl.access.log with_cookie;

    client_max_body_size 400M;

    index index.html index.htm;

    if (-f $document_root/system/maintenance.html) {
        return 503;
    }

    # Google Analytics
    if ($request_filename ~* ga.js$) {
        rewrite .* http://www.google-analytics.com/ga.js permanent;
        break;
    }

    if ($request_filename ~* /adgear.js/current/adgear_standard.js) {
        rewrite .* http://**********.com/adgear/adgear_standard.js permanent;
        break;
    }

    if ($request_filename ~* /adgear.js/current/adgear.js) {
        rewrite .* http://**********.com/adgear/adgear_standard.js permanent;
        break;
    }

    if ($request_filename ~* __utm.gif$) {
        rewrite .* http://www.google-analytics.com/__utm.gif permanent;
        break;
    }

    if ($host ~* "www") {
        rewrite ^(.*)$
http://*********.com$1 permanent;
        break;
    }

    location / {
        location ~* \.(eot|ttf|woff)$ {
            add_header Access-Control-Allow-Origin *;
        }

        if ($request_uri ~* ".(ico|css|js|gif|jpe?g|png)\?[0-9]+$") {
            expires max;
            break;
        }

        # needed to forward user's IP address to rails
        proxy_set_header X-Real-IP $remote_addr;

        # needed for HTTPS
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-FORWARDED_PROTO https;
        proxy_redirect off;
        proxy_max_temp_file_size 0;

        if ($request_uri ~* /polling) {
            proxy_pass http://ssl_polling_upstream;
            break;
        }

        if ($request_uri = /upload) {
            proxy_pass http://rest_stop_upstream;
            break;
        }

        if ($request_uri = /crossdomain.xml) {
            proxy_pass http://rest_stop_upstream;
            break;
        }

        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html break;
        }

        # Rails 3 is for old testing stuff... We don't need this anymore
        #if ($http_cookie ~ "rails3=true") {
        #    set $request_type '3';
        #}

        if ($request_uri ~* /polling) {
            set $request_type '${request_type}P';
        }

        if ($request_type = '3P') {
            proxy_pass http://rails3_upstream;
            break;
        }

        if ($request_uri ~* /polling) {
            set $request_type '${request_type}P';
        }

        if ($request_type = '3P') {
            proxy_pass http://rails3_upstream;
            break;
        }

        if ($request_type = 'P') {
            proxy_pass http://ssl_polling_upstream;
            break;
        }

        if (!-f $request_filename) {
            set $request_type '${request_type}D';
        }

        if ($request_type = 'D') {
            proxy_pass http://ssl_fl_upstream;
            break;
        }

        if ($request_type = '3D') {
            proxy_pass http://rails3_upstream;
            break;
        }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

From r1ch+nginx at teamliquid.net  Sat Jan 31 19:01:51 2015
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Sat, 31 Jan 2015 20:01:51 +0100
Subject: Intermittent SSL Handshake Errors
In-Reply-To:
References:
Message-ID:

> ...
> 2015/01/13 12:22:59 [crit] 11871#0: *140260577 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
>

According to the OpenSSL code, this occurs when a client attempts to resume a session that had made use of previously enabled ciphers. If you're changing your allowed ciphers frequently, this could be why; otherwise, a full restart of nginx to empty out the session cache seems like it should resolve this.

From champetier.etienne at gmail.com  Sat Jan 31 20:24:44 2015
From: champetier.etienne at gmail.com (Etienne Champetier)
Date: Sat, 31 Jan 2015 21:24:44 +0100
Subject: Intermittent SSL Handshake Errors
In-Reply-To:
References:
Message-ID:

Hi

On 31 Jan 2015 at 20:02, "Richard Stanway" wrote:
>>
>> ...
>> 2015/01/13 12:22:59 [crit] 11871#0: *140260577 SSL_do_handshake() failed (SSL: error:1408A0D7:SSL routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL handshaking, client: *.*.*.*, server: 0.0.0.0:443
>>
> According to the OpenSSL code, this occurs when a client attempts to resume a session that had made use of previously enabled ciphers. If you're changing your allowed ciphers frequently, this could be why; otherwise, a full restart of nginx to empty out the session cache seems like it should resolve this.
>

Reading Richard's reply: maybe the client tries to resume the session on a different server? (If you can, check the logs to see which server the client was on before the error.)

From reallfqq-nginx at yahoo.fr  Sat Jan 31 20:37:46 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 31 Jan 2015 21:37:46 +0100
Subject: Why does fastcgi_keep_conn default to off?
In-Reply-To:
References:
Message-ID:

It depends on the backend, really, but you cannot assume it will support multiple sessions on the same connection. Maybe the backend needs the '1 connection = 1 request' relationship?

A backend supporting multiplexing won't have trouble with one request per connection; however, a backend not supporting it will have trouble dealing with several requests per connection. Conclusion: the most compatible way is not to multiplex.

You know, even popular FastCGI backends may not support that correctly. Did you know, for instance, that PHP-FPM had bugs related to that?
https://bugs.php.net/bug.php?id=67583

I ran into it simply by installing the most recent release of MediaWiki somewhere... The trouble stopped when multiplexing was deactivated.

IMHO the default configuration should be the safest. It is then up to you to tweak your system according to your needs.
---
*B. R.*

On Sat, Jan 31, 2015 at 4:26 PM, Olaf van der Spek wrote:

> Why does fastcgi_keep_conn default to off?
> On seems to be the faster option.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,256371,256371#msg-256371
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
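For readers skimming the archive, the non-default setup under discussion pairs `fastcgi_keep_conn on` with an upstream `keepalive` pool, so that connections to the backend are reused instead of being closed after every request. A minimal sketch; the upstream name and socket path are assumptions, not from the thread:

```nginx
# Hypothetical FastCGI backend with connection reuse enabled.
upstream php_backend {                    # name is an assumption
    server unix:/run/php-fpm.sock;        # socket path is an assumption
    keepalive 8;                          # idle connections cached per worker
}

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_backend;
        fastcgi_keep_conn on;             # default is off: close after each request
    }
}
```

Even with this enabled, nginx still sends one request at a time per connection; the compatibility concerns above are why off remains the safe default.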