From n.shagdar at dvz-mv.de Wed Feb 1 08:24:44 2017
From: n.shagdar at dvz-mv.de (Natsagdorj, Shagdar)
Date: Wed, 1 Feb 2017 08:24:44 +0000
Subject: No Source RPMs for rhel package
Message-ID: <8B36239D92A9A547B7B8B970327C30DE39C66A3C@DVZSN-RA0325.bk.dvz-mv.net>

Hi all,

Is there a particular reason why there are no Source RPMs under
http://nginx.org/packages/rhel/7/SRPMS/ for the stable versions 1.10.2
and 1.10.3?

Best Regards,
Nagi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tseveendorj at gmail.com Wed Feb 1 08:59:51 2017
From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu)
Date: Wed, 01 Feb 2017 08:59:51 +0000
Subject: No Source RPMs for rhel package
In-Reply-To: <8B36239D92A9A547B7B8B970327C30DE39C66A3C@DVZSN-RA0325.bk.dvz-mv.net>
References: <8B36239D92A9A547B7B8B970327C30DE39C66A3C@DVZSN-RA0325.bk.dvz-mv.net>
Message-ID: 

Hi

Compiling from source seems like the easy way. You can compile from source:
http://nginx.org/download/nginx-1.10.3.tar.gz

On Wed, Feb 1, 2017 at 4:25 PM Natsagdorj, Shagdar wrote:

> Hi all,
>
> Is there a particular reason why there are no Source RPMs under
> http://nginx.org/packages/rhel/7/SRPMS/ for the stable versions 1.10.2
> and 1.10.3 ?
>
> Best Regards,
>
> Nagi
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From kworthington at gmail.com Wed Feb 1 16:02:54 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 1 Feb 2017 11:02:54 -0500 Subject: [nginx-announce] nginx-1.10.3 In-Reply-To: <20170131151249.GG46625@mdounin.ru> References: <20170131151249.GG46625@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.10.3 for Windows https://kevinworthington.com/nginxwin1103 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Jan 31, 2017 at 10:12 AM, Maxim Dounin wrote: > Changes with nginx 1.10.3 31 Jan > 2017 > > *) Bugfix: in the "add_after_body" directive when used with the > "sub_filter" directive. > > *) Bugfix: unix domain listen sockets might not be inherited during > binary upgrade on Linux. > > *) Bugfix: graceful shutdown of old worker processes might require > infinite time when using HTTP/2. > > *) Bugfix: when using HTTP/2 and the "limit_req" or "auth_request" > directives client request body might be corrupted; the bug had > appeared in 1.10.2. > > *) Bugfix: a segmentation fault might occur in a worker process when > using HTTP/2; the bug had appeared in 1.10.2. > > *) Bugfix: an incorrect response might be returned when using the > "sendfile" directive on FreeBSD and macOS; the bug had appeared in > 1.7.8. > > *) Bugfix: a truncated response might be stored in cache when using the > "aio_write" directive. > > *) Bugfix: a socket leak might occur when using the "aio_write" > directive. 
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nmilas at noa.gr Wed Feb 1 16:53:23 2017
From: nmilas at noa.gr (Nikolaos Milas)
Date: Wed, 1 Feb 2017 18:53:23 +0200
Subject: php not working from aliased subdir
In-Reply-To: <20170131201335.GZ2958@daoine.org>
References: <569E002A.5090909@noa.gr> <20160119205814.GT19381@daoine.org> <20170131201335.GZ2958@daoine.org>
Message-ID: 

On 31/1/2017 10:13, Francis Daly wrote:

> Replace the line with
>
> fastcgi_param SCRIPT_FILENAME $request_filename;

Thank you Francis. Your suggestion has indeed solved the issue! You rock!

> Compare http://nginx.org/r/$fastcgi_script_name with
> http://nginx.org/r/$request_filename to see why you probably want the
> latter in each case you use "alias".

I still have a long way to go until I feel confident with nginx
configuration, I am afraid... Unfortunately, we are one of those
one-man shops where we cannot devote sufficient time to learning all
the technologies we deal with in the required depth. That's why we need
to disturb polite people like you from time to time. :-)

By the way, is there some method to define a block of code which can be
included at various places? For example, we might have a block like:

location ~ \.php$ {
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;

    fastcgi_pass unix:/tmp/php-fpm.sock;
    fastcgi_index index.php;

    include /etc/nginx/fastcgi_params;
}

Can we define it as a block and include it where needed as a reference?

Again, thanks a lot!
Nick

From donatas.abraitis at gmail.com Thu Feb 2 07:40:56 2017
From: donatas.abraitis at gmail.com (Donatas Abraitis)
Date: Thu, 2 Feb 2017 09:40:56 +0200
Subject: SO_BUSY_POLL for latency critical cases
Message-ID: 

Hi,

I do not cover any tests, just putting this patchset and wanna ask if
someone had tried this? I tried this for Redis and changes are really
reasonably visible like here: https://github.com/antirez/redis/pull/3773

# HG changeset patch
# User Donatas Abraitis
# Date 1485982537 -7200
#      Wed Feb 01 22:55:37 2017 +0200
# Node ID aa29306b9ff2ef6a72919d7cc8ace72e3dd3a3aa
# Parent  d2b2ff157da53260b2b1c414792100ff0cd1377d
Set SO_BUSY_POLL option for socket if specified

diff -r d2b2ff157da5 -r aa29306b9ff2 src/core/ngx_connection.c
--- a/src/core/ngx_connection.c Tue Jan 31 21:19:58 2017 +0300
+++ b/src/core/ngx_connection.c Wed Feb 01 22:55:37 2017 +0200
@@ -497,6 +497,25 @@
         }
 #endif

+        if (ls[i].busypoll) {
+            if (setsockopt(s, SOL_SOCKET, SO_BUSY_POLL,
+                           &ls[i].busypoll, sizeof(ls[i].busypoll))
+                == -1)
+            {
+                ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+                              "setsockopt(SO_BUSY_POLL) %V failed, ignored",
+                              &ls[i].addr_text);
+
+                if (ngx_close_socket(s) == -1) {
+                    ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+                                  ngx_close_socket_n " %V failed",
+                                  &ls[i].addr_text);
+                }
+
+                return NGX_ERROR;
+            }
+        }
+
 #if (NGX_HAVE_INET6 && defined IPV6_V6ONLY)

         if (ls[i].sockaddr->sa_family == AF_INET6) {

diff -r d2b2ff157da5 -r aa29306b9ff2 src/core/ngx_connection.h
--- a/src/core/ngx_connection.h Tue Jan 31 21:19:58 2017 +0300
+++ b/src/core/ngx_connection.h Wed Feb 01 22:55:37 2017 +0200
@@ -70,6 +70,7 @@
     unsigned            ipv6only:1;
 #endif
     unsigned            reuseport:1;
+    int                 busypoll;
     unsigned            add_reuseport:1;
     unsigned            keepalive:2;

diff -r d2b2ff157da5 -r aa29306b9ff2 src/http/ngx_http.c
--- a/src/http/ngx_http.c Tue Jan 31 21:19:58 2017 +0300
+++ b/src/http/ngx_http.c Wed Feb 01 22:55:37 2017 +0200
@@ -1741,6 +1741,7 @@
 #endif

     ls->backlog = addr->opt.backlog;
+    ls->busypoll = addr->opt.busypoll;
     ls->rcvbuf = addr->opt.rcvbuf;
     ls->sndbuf = addr->opt.sndbuf;

diff -r d2b2ff157da5 -r aa29306b9ff2 src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c Tue Jan 31 21:19:58 2017 +0300
+++ b/src/http/ngx_http_core_module.c Wed Feb 01 22:55:37 2017 +0200
@@ -2984,6 +2984,7 @@
     lsopt.socklen = sizeof(struct sockaddr_in);

     lsopt.backlog = NGX_LISTEN_BACKLOG;
+    lsopt.busypoll = 0;
     lsopt.rcvbuf = -1;
     lsopt.sndbuf = -1;
 #if (NGX_HAVE_SETFIB)
@@ -3946,6 +3947,7 @@
     lsopt.socklen = u.socklen;

     lsopt.backlog = NGX_LISTEN_BACKLOG;
+    lsopt.busypoll = 0;
     lsopt.rcvbuf = -1;
     lsopt.sndbuf = -1;
 #if (NGX_HAVE_SETFIB)
@@ -4009,6 +4011,19 @@
         }
 #endif

+        if (ngx_strncmp(value[n].data, "busypoll=", 9) == 0) {
+            lsopt.busypoll = ngx_atoi(value[n].data + 9, value[n].len - 9);
+            lsopt.set = 1;
+            lsopt.bind = 1;
+
+            if (lsopt.busypoll == NGX_ERROR || lsopt.busypoll < 0) {
+                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+                                   "invalid busypoll value \"%V\"", &value[n]);
+                return NGX_CONF_ERROR;
+            }
+            continue;
+        }
+
         if (ngx_strncmp(value[n].data, "backlog=", 8) == 0) {
             lsopt.backlog = ngx_atoi(value[n].data + 8, value[n].len - 8);
             lsopt.set = 1;

diff -r d2b2ff157da5 -r aa29306b9ff2 src/http/ngx_http_core_module.h
--- a/src/http/ngx_http_core_module.h Tue Jan 31 21:19:58 2017 +0300
+++ b/src/http/ngx_http_core_module.h Wed Feb 01 22:55:37 2017 +0200
@@ -83,6 +83,7 @@
     unsigned            proxy_protocol:1;

     int                 backlog;
+    int                 busypoll;
     int                 rcvbuf;
     int                 sndbuf;
 #if (NGX_HAVE_SETFIB)

Waiting for comments,
Donatas.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Thu Feb 2 08:20:15 2017
From: francis at daoine.org (Francis Daly)
Date: Thu, 2 Feb 2017 08:20:15 +0000
Subject: php not working from aliased subdir
In-Reply-To: 
References: <569E002A.5090909@noa.gr> <20160119205814.GT19381@daoine.org> <20170131201335.GZ2958@daoine.org>
Message-ID: <20170202082015.GA2958@daoine.org>

On Wed, Feb 01, 2017 at 06:53:23PM +0200, Nikolaos Milas wrote:
> On 31/1/2017 10:13, Francis Daly wrote:

Hi there,

Good that you got it working for you.

> By the way, is there some method to define a block of code which can
> be included at various places?

In stock nginx.conf? No, not as a block.

In general, you can store it in a separate file and "include" it. Or, in
general, you can use your favourite macro processor and write your own
pre-nginx.conf file which does whatever substitution you want to create
your active nginx.conf.

But in this specific case...

> location ~ \.php$ {
>
>     fastcgi_param SCRIPT_FILENAME $request_filename;
>     fastcgi_param QUERY_STRING $query_string;
>     fastcgi_param REQUEST_METHOD $request_method;
>     fastcgi_param CONTENT_TYPE $content_type;
>     fastcgi_param CONTENT_LENGTH $content_length;

All of those things could probably go into your /etc/nginx/fastcgi_params,
which you include anyway below.

>     fastcgi_pass unix:/tmp/php-fpm.sock;

That should stay within the location.

>     fastcgi_index index.php;

That can be removed, as it does nothing useful here. (It says: if the
request ends in /, then use the index.php file. But it is in a location{}
where the request must end in p.)

>     include /etc/nginx/fastcgi_params;
> }

In fact, you could possibly have all of the fastcgi_param stuff written
once, at server{} level, so that your repeated content is just

    location ~ \.php$ { fastcgi_pass unix:/tmp/php-fpm.sock; }

with maybe an "include /etc/nginx/fastcgi_params;" in there too.

> Can we define it as a block and include it where needed as a reference?
By the time it is 3 or 4 lines (or just 1, since the newlines above are
optional), it is possibly worth just copying the content instead of
copying an "include" line.

It's not the (general) answer that you want, but it might be enough for now.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru Thu Feb 2 12:47:43 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 2 Feb 2017 15:47:43 +0300
Subject: SO_BUSY_POLL for latency critical cases
In-Reply-To: 
References: 
Message-ID: <20170202124743.GQ46625@mdounin.ru>

Hello!

On Thu, Feb 02, 2017 at 09:40:56AM +0200, Donatas Abraitis wrote:

> I do not cover any tests, just putting this patchset and wanna ask if
> someone had tried this? I tried this for Redis and changes are really
> reasonably visible like here: https://github.com/antirez/redis/pull/3773

Just a side note:

The data provided in the pull request is not really convincing.
Ministat shows no difference proven at 95% confidence for both min
and max latency values:

$ ministat with.min without.min
x with.min
+ without.min
+------------------------------------------------------------------------------+
|x  x     x   x+   +      +   x                                         +     +|
|  |________________________AM____|____________M__A__|___________|             |
+------------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   5        458752        702022        587727      585944.6     86500.993
+   5        605029        722469        649593      659764.4     53243.639
No difference proven at 95.0% confidence

$ ministat with.max without.max
x with.max
+ without.max
+------------------------------------------------------------------------------+
|x         +    x  xx          +    +      +                                  *|
|  |__________________________A_M|________________A_____M|________|            |
+------------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   5      26625442 2.0884278e+08 1.2690286e+08 1.2130263e+08      64656095
+   5 1.1113228e+08 2.0951387e+08 1.8428094e+08 1.6806744e+08      40111502
No difference proven at 95.0% confidence

-- 
Maxim Dounin
http://nginx.org/ From pnickerson at cashstar.com Thu Feb 2 22:02:43 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Thu, 2 Feb 2017 17:02:43 -0500 Subject: Where does $remote_addr come from? Message-ID: According to NGINX documentation, $remote_addr is an embedded variable in the ngx_http_core_module module. I have searched all around, including the source code, trying to figure out exactly how NGINX generates this variable, but I have been unable to find anything beyond the description "client address". Currently, my best guess is that it's the source address field in the incoming TCP/IP packet's IPv4 internet header. Is this correct? Or, does it come from somewhere else? Relevant documentation: http://nginx.org/en/docs/http/ngx_http_core_module.html#variables Thank you, -- Paul Nickerson -- *CONFIDENTIALITY NOTICE* The attached information is PRIVILEGED AND CONFIDENTIAL and is intended only for the use of the addressee named above. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, please be aware that any dissemination, distribution or duplication of this communication is strictly prohibited. If you receive this communication in error, please notify us immediately by telephone, delete the message and destroy any printed copy of the message. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Feb 3 08:35:03 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 3 Feb 2017 08:35:03 +0000 Subject: Where does $remote_addr come from? In-Reply-To: References: Message-ID: <20170203083503.GB2958@daoine.org> On Thu, Feb 02, 2017 at 05:02:43PM -0500, Paul Nickerson wrote: Hi there, > Currently, my best guess is that it's the source address > field in the incoming TCP/IP packet's IPv4 internet header. Is this > correct? Or, does it come from somewhere else? grep -lr remote_addr . 
in the source tree shows a handful of files where it is used, "grep -r" shows only one or two where it is probably set. For http, it is set in ./src/http/ngx_http_variables.c Roughly, it is the IP address of the incoming connection, unless the realip module has replaced it with something else. Exactly, it is what the source code says: v->data = r->connection->addr_text.data; and then you can track where that addr_text.data value is set. Cheers, f -- Francis Daly francis at daoine.org From donatas.abraitis at gmail.com Fri Feb 3 10:47:45 2017 From: donatas.abraitis at gmail.com (Donatas Abraitis) Date: Fri, 3 Feb 2017 12:47:45 +0200 Subject: SO_BUSY_POLL for latency critical cases In-Reply-To: <20170202124743.GQ46625@mdounin.ru> References: <20170202124743.GQ46625@mdounin.ru> Message-ID: Sorry, what is `with.min` and `without.min` in this case? On Thu, Feb 2, 2017 at 2:47 PM, Maxim Dounin wrote: > Hello! > > On Thu, Feb 02, 2017 at 09:40:56AM +0200, Donatas Abraitis wrote: > > > I do not cover any tests, just putting this patchset and wanna ask if > > someone had tried this? I tried this for Redis and changes are really > > reasonably visible like here: https://github.com/antirez/redis/pull/3773 > > Just a side note: > > The data provided in the pull request is not really convincing. 
> Ministat shows no difference proven at 95% confidence for both min > and max latency values: > > $ ministat with.min without.min > x with.min > + without.min > +----------------------------------------------------------- > -------------------+ > |x x x x+ + + x > + +| > | |________________________AM____|____________M__A__|___________| > | > +----------------------------------------------------------- > -------------------+ > N Min Max Median Avg Stddev > x 5 458752 702022 587727 585944.6 86500.993 > + 5 605029 722469 649593 659764.4 53243.639 > No difference proven at 95.0% confidence > $ ministat with.max without.max > x with.max > + without.max > +----------------------------------------------------------- > -------------------+ > |x + x xx + + + > *| > | |__________________________A_M|________________A_____M|________| > | > +----------------------------------------------------------- > -------------------+ > N Min Max Median Avg Stddev > x 5 26625442 2.0884278e+08 1.2690286e+08 1.2130263e+08 64656095 > + 5 1.1113228e+08 2.0951387e+08 1.8428094e+08 1.6806744e+08 40111502 > No difference proven at 95.0% confidence > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Donatas -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Feb 3 11:50:03 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 3 Feb 2017 11:50:03 +0000 Subject: SO_BUSY_POLL for latency critical cases In-Reply-To: References: <20170202124743.GQ46625@mdounin.ru> Message-ID: <20170203115003.GC2958@daoine.org> On Fri, Feb 03, 2017 at 12:47:45PM +0200, Donatas Abraitis wrote: Hi there, > Sorry, what is `with.min` and `without.min` in this case? Without trying to speak for Maxim: The github link provided says "Benchmarks example. 
Port 6379 is without busy polling, 6380 with:" and provides some lists of "min" and "max" values. I imagine that with.min is "the list of min values, with busy polling"; and the other filenames correspond to the other three sets. And the "ministat" analysis suggests that the "with" and "without" sets of values are statistically identical. So the evidence provided suggests that there is no benefit from adding SO_BUSY_POLL. "changes are really reasonably visible" is incorrect. There may well be benefits from adding SO_BUSY_POLL. But the list of numbers provided does not show those benefits, according to this specific analysis. Maybe it can be explained why the analysis is wrong; maybe some other analysis can be suggested that will show that there are benefits. But the justification for the change will probably have to come from the proposer. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Feb 3 12:12:08 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Feb 2017 15:12:08 +0300 Subject: SO_BUSY_POLL for latency critical cases In-Reply-To: References: <20170202124743.GQ46625@mdounin.ru> Message-ID: <20170203121208.GS46625@mdounin.ru> Hello! On Fri, Feb 03, 2017 at 12:47:45PM +0200, Donatas Abraitis wrote: > Sorry, what is `with.min` and `without.min` in this case? These are series of data from the pull request in question: with.min - min latency with busy polling without.min - min latency without busy polling with.max - max latency with busy polling without.max - max latency without busy polling -- Maxim Dounin http://nginx.org/ From nginx+list at olstad.com Fri Feb 3 12:17:30 2017 From: nginx+list at olstad.com (Kai Stian Olstad) Date: Fri, 03 Feb 2017 13:17:30 +0100 Subject: Nginx only sends hostname to syslog. 
Message-ID: <937e8492e8d8097bb10adef1b9e59552@olstad.com> Hi To get the post data from Kibana I use Nginx as a proxy and have the following log format, log_format audit '$time_iso8601 "$remote_user" "$request_uri" "$request_body"'; and to send the log to syslog I use the following. access_log syslog:server=192.168.1.1,tag=audit audit; But Nginx only sends hostname to syslog server, is it possible to configure Nginx to send fully qualified domain name? -- Kai Stian Olstad From mdounin at mdounin.ru Fri Feb 3 12:58:20 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Feb 2017 15:58:20 +0300 Subject: Nginx only sends hostname to syslog. In-Reply-To: <937e8492e8d8097bb10adef1b9e59552@olstad.com> References: <937e8492e8d8097bb10adef1b9e59552@olstad.com> Message-ID: <20170203125820.GU46625@mdounin.ru> Hello! On Fri, Feb 03, 2017 at 01:17:30PM +0100, Kai Stian Olstad wrote: > To get the post data from Kibana I use Nginx as a proxy and have the > following log format, > > log_format audit '$time_iso8601 "$remote_user" "$request_uri" > "$request_body"'; > > > and to send the log to syslog I use the following. > > access_log syslog:server=192.168.1.1,tag=audit audit; > > > But Nginx only sends hostname to syslog server, is it possible to > configure Nginx to send fully qualified domain name? Much like for the $hostname variable, for syslog logging nginx uses whatever is returned by the gethostname() call. It's up to you to configure fully qualified domain name as a hostname of a particular server. -- Maxim Dounin http://nginx.org/ From pnickerson at cashstar.com Fri Feb 3 20:36:53 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Fri, 3 Feb 2017 15:36:53 -0500 Subject: Where does $remote_addr come from? Message-ID: I accidentally turned on digest mode for myself on this mailing list (now turned off), so this might not be threaded. Sorry. 
Francis Daly francis at daoine.org > Exactly, it is what the source code says: > v->data = r->connection->addr_text.data; > and then you can track where that addr_text.data value is set. I thought it might be coming from addr_text in the code, but my experience with C is dated and limited. I wasn't able to figure out where addr_text.data is set. ~ Paul Nickerson -- *CONFIDENTIALITY NOTICE* The attached information is PRIVILEGED AND CONFIDENTIAL and is intended only for the use of the addressee named above. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, please be aware that any dissemination, distribution or duplication of this communication is strictly prohibited. If you receive this communication in error, please notify us immediately by telephone, delete the message and destroy any printed copy of the message. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Feb 3 21:13:26 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 3 Feb 2017 21:13:26 +0000 Subject: Where does $remote_addr come from? In-Reply-To: References: Message-ID: <20170203211326.GD2958@daoine.org> On Fri, Feb 03, 2017 at 03:36:53PM -0500, Paul Nickerson wrote: Hi there, > > Exactly, it is what the source code says: > > v->data = r->connection->addr_text.data; > > and then you can track where that addr_text.data value is set. > > I thought it might be coming from addr_text in the code, but my experience > with C is dated and limited. I wasn't able to figure out where > addr_text.data is set. I have the 1.11.5 source easily to hand, so this is based on that. $ grep -r addr_text.data . | wc -l 28 The first few of those are in ./src/event/ngx_event_accept.c, which includes a call to ngx_pnalloc, which is probably related to alloc'ing storage for some content. 
Reading that file, the next likely looking line is: c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen, c->addr_text.data, ls->addr_text_max_len, 0); where the ".len" is the function return value, and ".data" is an address passed in as an argument. How is that function defined? $ grep -r ngx_sock_ntop . | grep h: ./src/core/ngx_inet.h:size_t ngx_sock_ntop(struct sockaddr *sa, socklen_t socklen, u_char *text, It's declared in ./src/core/ngx_inet.h; there's a good chance that it is defined in ./src/core/ngx_inet.c, so look there. It shows that the third argument is "u_char *text"; and for an IPv4 connection there are lines like p = (u_char *) &sin->sin_addr; p = ngx_snprintf(text, len, "%ud.%ud.%ud.%ud", p[0], p[1], p[2], p[3]); For an IPv6 connection or a unix domain socket connection, there are other ngx_snprintf or (after another function call) ngx_sprintf calls that write appropriate things to "text". So addr_text.data is "whatever the other end of this connection is", with the caveat that the realip module can be explicitly configured to change what that appears to be. (And therefore, any module *could* change what that appears to be. So don't run modules you don't trust, if you want to be able to believe what your nginx reports its internal state to be.) Cheers, f -- Francis Daly francis at daoine.org From pnickerson at cashstar.com Fri Feb 3 23:02:15 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Fri, 3 Feb 2017 18:02:15 -0500 Subject: Where does $remote_addr come from? In-Reply-To: <20170203211326.GD2958@daoine.org> References: <20170203211326.GD2958@daoine.org> Message-ID: > Reading that file, the next likely looking line is: > c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen, > c->addr_text.data, > ls->addr_text_max_len, 0); Thank you for the boost. 
From what you said, it looks like the variable is constructed from c->sockaddr src/event/ngx_event_accept.c line 167 c->sockaddr = ngx_palloc(c->pool, socklen); I chased that down, and it looks like ngx_palloc only allocates some memory; it doesn't fill it. Moving on. line 173 ngx_memcpy(c->sockaddr, &sa, socklen); It looks like ngx_memcpy is a wrapper around the standard C library function memcpy. For memcpy(A, B, C), it copies to destination A from source B, and it does amount C. So now I want to know where &sa comes from. line 70 s = accept(lc->fd, &sa.sockaddr, &socklen); Here, &sa.sockaddr is being sent into something. I think &sa.sockaddr becomes c->sockaddr, so I chase this. Bash man 2 accept accept is a Linux system call: "accept a connection on a socket" int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen); "The argument addr is a pointer to a sockaddr structure. This structure is filled in with the address of the peer socket, as known to the communications layer. The exact format of the address returned addr is determined by the socket's address family (see socket(2) and the respective protocol man pages). When addr is NULL, nothing is filled in; in this case, addrlen is not used, and should also be NULL." And so, the answer to my question appears to be: $remote_addr is constructed from "struct sockaddr *addr" of the "accept" Linux system call. It is the address of the peer socket. I am going to read through socket(2) and the respective protocol man pages, but at this point we're outside of NGINX, and so the scope of this mailing list. Thank you again for your help. ~ Paul Nickerson -- *CONFIDENTIALITY NOTICE* The attached information is PRIVILEGED AND CONFIDENTIAL and is intended only for the use of the addressee named above. 
If the reader of this message is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, please be aware that any dissemination, distribution or duplication of this communication is strictly prohibited. If you receive this communication in error, please notify us immediately by telephone, delete the message and destroy any printed copy of the message. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Sat Feb 4 05:54:43 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 3 Feb 2017 21:54:43 -0800 Subject: limit connection based on Host header Message-ID: Hi, I have a default "server" block with "server_name _ ;". Since connections coming in may have different Host header, I am trying to limit the connection based on Host header. limit_conn_zone $server_name zone=perserver:10m; limit_conn perserver 10; Will this work? It seems if the connection for one Host reaches 10, I see errors "limiting connections by zone "perserver" for connections with other Host as well. Did I miss anything? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Feb 4 08:06:31 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 4 Feb 2017 08:06:31 +0000 Subject: limit connection based on Host header In-Reply-To: References: Message-ID: <20170204080631.GE2958@daoine.org> On Fri, Feb 03, 2017 at 09:54:43PM -0800, Frank Liu wrote: Hi there, > I have a default "server" block with "server_name _ ;". Since connections > coming in may have different Host header, I am trying to limit the > connection based on Host header. > > limit_conn_zone $server_name zone=perserver:10m; $server_name is the primary server name, which is the first parameter in the server_name directive. The "host" header sent by the client is $http_host. 
A tidied-up version of that (which could be populated from something else instead) is $host. > It seems if the connection for one Host reaches 10, I see errors "limiting > connections by zone "perserver" for connections with other Host as well. > Did I miss anything? Your configuration limits based on the server name. All requests to one server{} have the same $server_name, so what you see is what you asked for. Cheers, f -- Francis Daly francis at daoine.org From reallfqq-nginx at yahoo.fr Sat Feb 4 13:00:47 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 4 Feb 2017 14:00:47 +0100 Subject: Where does $remote_addr come from? In-Reply-To: References: <20170203211326.GD2958@daoine.org> Message-ID: I am curious: apart from a training prospective at code digging, what was the goal? In other words, where did you expect the IP address to come from, if not from a system network socket? Have a nice week-end, --- *B. R.* On Sat, Feb 4, 2017 at 12:02 AM, Paul Nickerson wrote: > > Reading that file, the next likely looking line is: > > c->addr_text.len = ngx_sock_ntop(c->sockaddr, c->socklen, > > c->addr_text.data, > > ls->addr_text_max_len, 0); > > Thank you for the boost. From what you said, it looks like the variable is > constructed from c->sockaddr > > src/event/ngx_event_accept.c > line 167 > c->sockaddr = ngx_palloc(c->pool, socklen); > > I chased that down, and it looks like ngx_palloc only allocates some > memory; it doesn't fill it. Moving on. > > line 173 > ngx_memcpy(c->sockaddr, &sa, socklen); > > It looks like ngx_memcpy is a wrapper around the standard C library > function memcpy. For memcpy(A, B, C), it copies to destination A from > source B, and it does amount C. So now I want to know where &sa comes from. > > line 70 > s = accept(lc->fd, &sa.sockaddr, &socklen); > > Here, &sa.sockaddr is being sent into something. I think &sa.sockaddr > becomes c->sockaddr, so I chase this. 
> > Bash > man 2 accept > > accept is a Linux system call: "accept a connection on a socket" > int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen); > > "The argument addr is a pointer to a sockaddr structure. This > structure is filled in with the address of the peer socket, as known > to the communications layer. The exact format of the address > returned addr is determined by the socket's address family (see > socket(2) and the respective protocol man pages). When addr is NULL, > nothing is filled in; in this case, addrlen is not used, and should > also be NULL." > > And so, the answer to my question appears to be: $remote_addr is > constructed from "struct sockaddr *addr" of the "accept" Linux system call. > It is the address of the peer socket. > > I am going to read through socket(2) and the respective protocol man > pages, but at this point we're outside of NGINX, and so the scope of this > mailing list. > Thank you again for your help. > > ~ Paul Nickerson > > *CONFIDENTIALITY NOTICE* > > The attached information is PRIVILEGED AND CONFIDENTIAL and is intended > only for the use of the addressee named above. If the reader of this > message is not the intended recipient or the employee or agent responsible > for delivering the message to the intended recipient, please be aware that > any dissemination, distribution or duplication of this communication is > strictly prohibited. If you receive this communication in error, please > notify us immediately by telephone, delete the message and destroy any > printed copy of the message. Thank you. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smntov at gmail.com Mon Feb 6 10:43:24 2017 From: smntov at gmail.com (ST) Date: Mon, 06 Feb 2017 12:43:24 +0200 Subject: mp4 video streaming: fast start up, but gets stuck... 
Message-ID: <1486377804.1410.9.camel@gmail.com> Hello, I have two mp4 files served by nginx: A ( https://goo.gl/lYGXjC ) and B ( https://goo.gl/ZsnX7M ). File A starts pretty fast ca. 2s and it buffers/caches "future" content, however to B it takes more than 6s and it doesn't seem to buffer/cache. However A gets stuck periodically which makes it impossible to view (probably when it fails to cache), while B plays nicely all the time. Here are my questions: 1. What makes A start up so fast? 2. Is it possible to have both - fast start AND stable playing? I have not encoded either of the files but here is the ffmpeg -i output of both ( http://pastebin.com/SuwYZkWP ) and AtomicParsley -T of A ( http://pastebin.com/GRWJkqNy ) and B ( http://pastebin.com/YGq0HjCU ).
If you need any additional > information, please, let me know. The obvious difference is 1.6M metadata in B (compared to 68k in A). I don't think there is anything nginx-related here, rather, you should look into encoding of both files. -- Maxim Dounin http://nginx.org/ From pnickerson at cashstar.com Mon Feb 6 14:24:37 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Mon, 6 Feb 2017 09:24:37 -0500 Subject: Where does $remote_addr come from? In-Reply-To: References: <20170203211326.GD2958@daoine.org> Message-ID: B.R. > I am curious: apart from a training perspective at code digging, what was the goal? > In other words, where did you expect the IP address to come from, if not from a system network socket? We have NGINX AWS EC2 Instances behind AWS EC2 ELBs, as well as Fastly's CDN and maybe some custom load balancers, but sometimes an IP address that we log is not readily identifiable. I was also seeing some configurations in our setup that suggested we may have been using $remote_addr incorrectly, in log auditing for example. So before I verified that and chased the odd IPs, I wanted to make sure that I understood exactly what $remote_addr refers to. I thought that maybe it was actually derived from the HTTP header, or maybe a module could be modifying it without being explicitly configured to do so, or maybe it's possible for a bad actor to spoof it. Now I know that it's independent of the HTTP header, one native module and probably some third party modules can modify it, and a bad actor would need to spoof the TCP IPv4 internet header's source address. I admit, I probably could have been reasonably confident in our configuration without needing to determine this. But I was surprised to find there was no documentation or past forum posts saying whether this variable came from the TCP/IP or the HTTP headers. After that, my sense of technical discovery took over and kept me interested in the problem.
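The conclusion reached in this thread, that $remote_addr begins life as the peer address the kernel fills in at accept() time before any HTTP bytes are read, can be sketched with a small loopback experiment (plain Python sockets rather than nginx's C; all names here are illustrative):

```python
import socket
import threading

def peer_addr_demo():
    # A listening socket, playing the role of nginx's listen socket.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]

    result = {}

    def client():
        c = socket.create_connection(("127.0.0.1", port))
        # The client's own local address: this is exactly what the
        # kernel will report to the server as the peer address.
        result["client_side"] = c.getsockname()
        c.close()

    t = threading.Thread(target=client)
    t.start()
    conn, addr = srv.accept()           # addr filled in by the kernel, cf. accept(2)
    t.join()
    conn.close()
    srv.close()
    result["server_side"] = addr
    return result

r = peer_addr_demo()
# No HTTP was spoken at all, yet the server already knows the peer address:
assert r["server_side"] == r["client_side"]
```

By the time any header is parsed, the address returned by accept() has already been copied (c->sockaddr in nginx's case), which is why no header value can influence $remote_addr without something like the realip module rewriting it.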
~ Paul Nickerson -- *CONFIDENTIALITY NOTICE* The attached information is PRIVILEGED AND CONFIDENTIAL and is intended only for the use of the addressee named above. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, please be aware that any dissemination, distribution or duplication of this communication is strictly prohibited. If you receive this communication in error, please notify us immediately by telephone, delete the message and destroy any printed copy of the message. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From benyamin.dvoskin at gmail.com Mon Feb 6 16:00:03 2017 From: benyamin.dvoskin at gmail.com (Benyamin Dvoskin) Date: Mon, 6 Feb 2017 18:00:03 +0200 Subject: Issue with image resizing Message-ID: Hi, I'm trying to implement live image resizing on nginx using ngx_http_image_filter_module, and I get some weird issues with it. this is my config for resizing: server { listen 1111; server_name localhost; set $backend 'bucket.s3.amazonaws.com'; resolver 8.8.8.8; resolver_timeout 5s; proxy_buffering off; proxy_http_version 1.1; proxy_pass_request_body off; proxy_pass_request_headers off; proxy_hide_header "x-amz-id-2"; proxy_hide_header "x-amz-request-id"; proxy_hide_header "x-amz-storage-class"; proxy_hide_header "Set-Cookie"; proxy_ignore_headers "Set-Cookie"; proxy_set_header Host $backend; proxy_method GET; image_filter_interlace on; location ~ \/(?<producttitle>.+?)\/(?<image>.+?)-W(?<width>(\d+)).(?<ext>[a-z_]*) { image_filter resize $width -; proxy_pass http://$backend/$producttitle/$image.$ext; error_page 415 = /empty; } location = /empty { empty_gif; } } and this is my config for caching: server { listen 80; server_name images.domain.com; location ~ \/(?<producttitle>.+?)\/(?<image>.+?)-W(?<width>(\d+)).(?<ext>[a-z_]*) { proxy_pass http://localhost:1111; proxy_cache mc_images_cache; proxy_cache_key "$host$document_uri$image$width"; proxy_cache_lock on;
proxy_cache_valid 200 30d; proxy_cache_valid any 15s; proxy_cache_use_stale error timeout invalid_header updating; proxy_http_version 1.1; expires 30d; } } The issue I'm experiencing is getting 415 errors out of nowhere for some of the width values I provide in the URL. Sometimes they work and sometimes I get 415. Nothing informative in the debug log. Any idea what I'm doing wrong here? -- Thanks, Benyamin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx+list at olstad.com Tue Feb 7 10:37:26 2017 From: nginx+list at olstad.com (Kai Stian Olstad) Date: Tue, 07 Feb 2017 11:37:26 +0100 Subject: Nginx only sends hostname to syslog. In-Reply-To: <20170203125820.GU46625@mdounin.ru> References: <937e8492e8d8097bb10adef1b9e59552@olstad.com> <20170203125820.GU46625@mdounin.ru> Message-ID: <1e7f82c8dec15acdb2fa9dcaba640146@olstad.com> On 03.02.2017 13:58, Maxim Dounin wrote: > Hello! > > On Fri, Feb 03, 2017 at 01:17:30PM +0100, Kai Stian Olstad wrote: > >> To get the post data from Kibana I use Nginx as a proxy and have the >> following log format, >> >> log_format audit '$time_iso8601 "$remote_user" "$request_uri" >> "$request_body"'; >> >> >> and to send the log to syslog I use the following. >> >> access_log syslog:server=192.168.1.1,tag=audit audit; >> >> >> But Nginx only sends hostname to syslog server, is it possible to >> configure Nginx to send fully qualified domain name? > > Much like for the $hostname variable, for syslog logging nginx > uses whatever is returned by the gethostname() call. It's up to > you to configure fully qualified domain name as a hostname of a > particular server. Thank you for the answer. The challenge is that in Debian the hostname should only be the name and not the FQDN. https://www.debian.org/doc/manuals/debian-reference/ch03.en.html#_the_hostname When I set the hostname to the FQDN, Nginx does log with the FQDN, but this breaks other things that rely on the hostname being only the name and not the FQDN.
If it's not possible to configure this in Nginx I would need to find a workaround. One way could be to make Nginx log to file and rsyslog read that file. Is that my only way? -- Kai Stian Olstad From nick at nginx.com Tue Feb 7 22:55:24 2017 From: nick at nginx.com (Nick Shadrin) Date: Tue, 7 Feb 2017 14:55:24 -0800 Subject: NGINX is hiring a technical evangelist Message-ID: <3DC51D1A-DE82-4AA5-B440-369C9AD17D30@nginx.com> NGINX is seeking a Technical Evangelist to join our team. If you enjoy building demos, presenting at events, and assisting users in the community (including this mailing list), please apply for this job here: http://www.greythorn.com/job/technical-evangelist-jobid-2468-b -- Nick Shadrin From reallfqq-nginx at yahoo.fr Wed Feb 8 08:05:19 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 8 Feb 2017 09:05:19 +0100 Subject: Nginx only sends hostname to syslog. In-Reply-To: <1e7f82c8dec15acdb2fa9dcaba640146@olstad.com> References: <937e8492e8d8097bb10adef1b9e59552@olstad.com> <20170203125820.GU46625@mdounin.ru> <1e7f82c8dec15acdb2fa9dcaba640146@olstad.com> Message-ID: Correct me if I am wrong, since I probably will. From what I read, 'host name' aka the name of the host is an FQDN in the hostname.domain format (the domain being able to have n levels). Thus, it seems that setting an FQDN as the hostname is wrong, since resolving the FQDN needs querying the hostname against hosts. I am not quite sure the standard then allows you to set more than 1 level in the hostname file. Since nginx is using the hostname when sending logs to syslog, it does not seem you can get the FQDN through the standard settings. I could not find any variable containing the FQDN in the docs either. Maybe someone w/ greater knowledge could give us a hand here. --- *B. R.* On Tue, Feb 7, 2017 at 11:37 AM, Kai Stian Olstad wrote: > On 03.02.2017 13:58, Maxim Dounin wrote: > >> Hello!
>> >> On Fri, Feb 03, 2017 at 01:17:30PM +0100, Kai Stian Olstad wrote: >> >> To get the post data from Kibana I use Nginx as a proxy and have the >>> following log format, >>> >>> log_format audit '$time_iso8601 "$remote_user" "$request_uri" >>> "$request_body"'; >>> >>> >>> and to send the log to syslog I use the following. >>> >>> access_log syslog:server=192.168.1.1,tag=audit audit; >>> >>> >>> But Nginx only sends hostname to syslog server, is it possible to >>> configure Nginx to send fully qualified domain name? >>> >> >> Much like for the $hostname variable, for syslog logging nginx >> uses whatever is returned by the gethostname() call. It's up to >> you to configure fully qualified domain name as a hostname of a >> particular server. >> > > Thank you for the answer. > > The challenge is that in Debian hostname should only be the name and not > the FQDN. > https://www.debian.org/doc/manuals/debian-reference/ch03.en. > html#_the_hostname > > When I set the hostname to FQDN, Nginx do log with the FQDN, but this > break other things that relay on hostname only be name and not FQDN. > > If it's not possible to configure this in Nginx I would need to find a > workaround. > One way could be to make Nginx log to file and rsyslog read that file. > > Is that my only way? > > -- > Kai Stian Olstad > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Feb 8 12:28:34 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 8 Feb 2017 15:28:34 +0300 Subject: Nginx only sends hostname to syslog. In-Reply-To: References: <937e8492e8d8097bb10adef1b9e59552@olstad.com> <20170203125820.GU46625@mdounin.ru> <1e7f82c8dec15acdb2fa9dcaba640146@olstad.com> Message-ID: <20170208122834.GE46625@mdounin.ru> Hello! On Wed, Feb 08, 2017 at 09:05:19AM +0100, B.R. 
via nginx wrote: > From what I read > , > 'host name' aka name of the host is a FQDN in the hostname.domain format ( > domain being able to have n level-s). Just quoting the link above: : Function: int gethostname (char *name, size_t size) ... : If the system participates in the DNS, this is the FQDN (see : above). -- Maxim Dounin http://nginx.org/ From nginx at nginxuser.net Wed Feb 8 18:10:21 2017 From: nginx at nginxuser.net (nginx user) Date: Wed, 8 Feb 2017 21:10:21 +0300 Subject: What does the "event timer add" step in the debug log signify? Message-ID: I am having a problem with trying to use OAuth services in that the callback from the provider gets "stuck" on my server. The logs in the pastebin here: http://pastebin.com/qncJKVwQ summarise the experience. Nginx is set up as a reverse proxy to Apache which handles PHP using CGI. That is, the old-fashioned CGI and not FastCGI. So the stream of things seems to be as follows: 1. User makes login request to a PHP file 2. Nginx proxies this to Apache 3. Apache runs PHP and redirects to OAuth provider 4. Provider authenticates and returns callback 5. Nginx proxies this to Apache 6. Apache runs PHP, user is logged in and redirected to index.php 7. Nginx proxies this to Apache 8. Apache runs PHP and serves index.php On rare successful attempts, there is a big delay in Step 5 and most of the time I just get a 504 Timeout response. Steps 2, 5 & 7 are essentially the same thing but the issue only arises with Step 5 with the callback request. A look at the debug log shows the issue comes in on the callback call between the "event timer add" & "post event" steps. Will appreciate any insights. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From chadhansen at google.com Wed Feb 8 22:47:13 2017 From: chadhansen at google.com (Chad Hansen) Date: Wed, 08 Feb 2017 22:47:13 +0000 Subject: Cache based on custom header Message-ID: I use nginx as a reverse proxy, and upstream clients have a need for my service to cache differently than downstream servers. Is there a way to change what header nginx uses to accept cache settings? Or a way to re-write cache headers before the cache settings take effect? For example, I could have the upstream server set X-CustomCache (intended for my service) and Cache-Control (intended for my downstream), then use add_header to set X-Accel-Expires to that value, except the header re-write happens after the request has been cached from Cache-Control. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4851 bytes Desc: S/MIME Cryptographic Signature URL: From peter_booth at me.com Thu Feb 9 00:10:42 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 08 Feb 2017 19:10:42 -0500 Subject: Cache based on custom header In-Reply-To: References: Message-ID: <48D0BD2E-604C-4BA2-9327-486490B2E528@me.com> Yes you can. For some subtle custom cache logic I needed to use openresty, which is an nginx bundle that adds a lot of customization points. Sent from my iPhone > On Feb 8, 2017, at 5:47 PM, Chad Hansen via nginx wrote: > > I use nginx as a reverse proxy, and upstream clients have a need for my service to cache differently than downstream servers. > > Is there a way to change what header nginx uses to accept cache settings? > Or a way to re-write cache headers before the cache settings take effect?
> > For example, I could have the upstream server set X-CustomCache (intended for my service) and Cache-Control (intended for my downstream), then use add_header to set X-Accel-Expires to that value, except the header re-write happens after the request has been cached from Cache-Control. From maxim at nginx.com Thu Feb 9 10:20:27 2017 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 9 Feb 2017 13:20:27 +0300 Subject: SSL Offloading in UDP load In-Reply-To: References: <323e4e45-98ac-1ae2-4102-875266e63d8b@nginx.com> <3b76f5149ca39a2068de0e82e404ccfe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <243fbb56-307c-df09-48c2-3c57da348e2b@nginx.com> Hello, On 1/13/17 10:46 PM, Maxim Konovalov wrote: > On 1/13/17 12:51 PM, nginxsantos wrote: >> Thanks Maxim. >> I am looking for a scenario to load balance the LWM2M server (my backend >> servers would be LWM2M Servers). I am thinking of using the Nginx UDP >> loadbalancer for this. Now, if you look at the LWM2M stack, it has DTLS over >> UDP. So, I was thinking if I could offload the DTLS traffic here. >> >> Any thoughts? >> > OK, thanks for sharing this. > > Indeed, we do have this item in the stream module roadmap. I wouldn't > promise any ETA for this specific feature, still need to figure out > the demand for it from the community. > We are working on some patches in this area that hopefully enable DTLS support. Are you ok to test them in your environment? We will probably ask you to provide some additional information about your setup and use cases. Hope that's ok. Thanks, Maxim -- Maxim Konovalov From chadhansen at google.com Thu Feb 9 19:43:35 2017 From: chadhansen at google.com (Chad Hansen) Date: Thu, 09 Feb 2017 19:43:35 +0000 Subject: Cache based on custom header In-Reply-To: <48D0BD2E-604C-4BA2-9327-486490B2E528@me.com> References: <48D0BD2E-604C-4BA2-9327-486490B2E528@me.com> Message-ID: Thanks! What part of openresty supplies that?
For instance, it includes more-set-headers: if I include that, could I rewrite the incoming cache-related headers before the cache layer processes them? On Wed, Feb 8, 2017 at 7:10 PM Peter Booth wrote: > Yes you can. For some subtle custom cache logic I needed to use openresty, > which is an nginx bundle that adds a lot of customization points. > > Sent from my iPhone > > > On Feb 8, 2017, at 5:47 PM, Chad Hansen via nginx > wrote: > > > > I use nginx as a reverse proxy, and upstream clients have a need for my > service to cache differently than downstream servers. > > > > Is there a way to change what header nginx uses to accept cache settings? > > Or a way to re-write cache headers before the cache settings take effect? > > > > For example, I could have the upstream server set X-CustomCache > (intended for my service) and Cache-Control (intended for my downstream), > then use add_header to set X-Accel-Expires to that value, except the header > re-write happens after the request has been cached from Cache-Control. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4851 bytes Desc: S/MIME Cryptographic Signature URL: From pnickerson at cashstar.com Thu Feb 9 22:49:13 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Thu, 9 Feb 2017 17:49:13 -0500 Subject: Behavior of realip module with this config Message-ID: I've got the config below. I don't have these settings reconfigured anywhere else. My understanding is that no matter what is configured anywhere else, and no matter whether the X-Forwarded-For field in the HTTP header has one or multiple IP addresses, or isn't even present, $remote_addr will not be altered.
set_real_ip_from 0.0.0.0/0; real_ip_header X-Forwarded-For; real_ip_recursive on; From what I read, "real_ip_recursive on" means that $remote_addr can only be set to an IP address that is not in the range set by set_real_ip_from. And since that's 0.0.0.0/0, there is no IP that can meet this requirement. Am I correct in my analysis? http://nginx.org/en/docs/http/ngx_http_realip_module.html ~ Paul Nickerson -- *CONFIDENTIALITY NOTICE* The attached information is PRIVILEGED AND CONFIDENTIAL and is intended only for the use of the addressee named above. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, please be aware that any dissemination, distribution or duplication of this communication is strictly prohibited. If you receive this communication in error, please notify us immediately by telephone, delete the message and destroy any printed copy of the message. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From iridude at aol.com Fri Feb 10 04:54:14 2017 From: iridude at aol.com (iridude at aol.com) Date: Thu, 9 Feb 2017 23:54:14 -0500 Subject: Request_Id Variable unknown? Message-ID: I am using Nginx version 1.10.2 and get the following error: unknown "request_id" variable This is in nginx.conf: log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" - $request_id'; Sent from Mail for Windows 10 --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.cox at kroger.com Fri Feb 10 05:12:48 2017 From: eric.cox at kroger.com (Cox, Eric S) Date: Fri, 10 Feb 2017 05:12:48 +0000 Subject: Request_Id Variable unknown?
In-Reply-To: <20170210045420.9E0CB2C50D6B@mail.nginx.com> References: <20170210045420.9E0CB2C50D6B@mail.nginx.com> Message-ID: <74A4D440E25E6843BC8E324E67BB3E39456B5495@N060XBOXP38.kroger.com> $request_id unique request identifier generated from 16 random bytes, in hexadecimal (1.11.0) You need at least version 1.11.0 -----Original Message----- From: iridude--- via nginx [nginx at nginx.org] Received: Thursday, 09 Feb 2017, 11:54PM To: nginx at nginx.org [nginx at nginx.org] CC: iridude at aol.com [iridude at aol.com] Subject: Request_Id Variable unknown? I am using Nginx version 1.10.2 and get the following error: unknown "request_id" variable This is in nginx.conf: log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" - $request_id'; Sent from Mail for Windows 10 ________________________________ [Avast logo] This email has been checked for viruses by Avast antivirus software. www.avast.com ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From llbgurs at gmail.com Fri Feb 10 05:31:34 2017 From: llbgurs at gmail.com (linbo liao) Date: Fri, 10 Feb 2017 13:31:34 +0800 Subject: Location url start with modifier, nginx -t pass the configuration test Message-ID: Hi, I set up a test Nginx 1.10.3 on a local VM (CentOS 6.7 x86_64). I configured the following location: location =/404.html { root /usr/share/nginx/html; } To my understanding, it is not a valid URL, but `nginx -t` passes the configuration test. Is it a bug?
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.mcdonnell at buzzfeed.com Fri Feb 10 10:38:36 2017 From: mark.mcdonnell at buzzfeed.com (Mark McDonnell) Date: Fri, 10 Feb 2017 10:38:36 +0000 Subject: View the client's HTTP protocol? Message-ID: I know the $status variable shows you the upstream/origin's HTTP protocol (e.g. HTTP/1.1 200) but is there a way to view the protocol the client made the request with? For example we've seen some S3 errors returned with a 505 which suggests the user made a request with some strange HTTP protocol, but we don't know what it would have been. It would be good for us to log the client's protocol so we have that information in future. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Feb 10 10:54:30 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Feb 2017 10:54:30 +0000 Subject: View the client's HTTP protocol? In-Reply-To: References: Message-ID: <20170210105430.GL2958@daoine.org> On Fri, Feb 10, 2017 at 10:38:36AM +0000, Mark McDonnell via nginx wrote: Hi there, > I know the $status variable shows you the upstream/origin's HTTP protocol > (e.g. HTTP/1.1 200) but is there a way to view the protocol the client made > the request with? http://nginx.org/r/$server_protocol or if you want the whole request: http://nginx.org/r/$request f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Feb 10 12:33:35 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2017 15:33:35 +0300 Subject: Behavior of realip module with this config In-Reply-To: References: Message-ID: <20170210123335.GH46625@mdounin.ru> Hello! On Thu, Feb 09, 2017 at 05:49:13PM -0500, Paul Nickerson wrote: > I've got the config below. I don't have these settings reconfigured > anywhere else.
My understanding is that no matter anything else at all > anywhere else, and no matter whether the X-Forwarded-For field in the HTTP > header has one or multiple IP addresses, or isn't even present, > $remote_addr will not be altered. > > set_real_ip_from 0.0.0.0/0; > real_ip_header X-Forwarded-For; > real_ip_recursive on; > > From what I read, "real_ip_recursive on" means that $remote_addr can only > be set to an IP address that is not in the range set by set_real_ip_from. > And since that's 0.0.0.0/0, there is no IP that can meet this requirement. > > Am I correct in my analysis? CIDR 0.0.0.0/0 means 0.0.0.0 - 255.255.255.255, so any IP address is allowed to change the IP via X-Forwarded-For. You can find more information about CIDR notation here: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing And real_ip_recursive switched on means that this happens recursively. As a result, with the configuration in question nginx will use the first address in X-Forwarded-For provided, if any (assuming all addresses are valid). Note that "set_real_ip_from 0.0.0.0/0" makes client's address as seen by nginx easily spoofable by any client, and it is generally a bad idea to use it in production. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Feb 10 12:45:02 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2017 15:45:02 +0300 Subject: Location url start with modifier, nginx -t pass the configuration test In-Reply-To: References: Message-ID: <20170210124502.GI46625@mdounin.ru> Hello! On Fri, Feb 10, 2017 at 01:31:34PM +0800, linbo liao wrote: > I setup an test Nginx 1.10.3 on local VM (Centos 6.7 x86_64). I configure > the following location > > location =/404.html { > root /usr/share/nginx/html; > } > > As my understanding, it is not an valid url, but `nginx -t` pass the > configuration test. > > Is it a bug ? In this particular case the location is interpreted as "/404.html" with "=" modifier (see http://nginx.org/r/location). 
That is, it matches requests to "/404.html" exactly. Space between "=" and "/404.html" can be omitted, and nginx is smart enough to recognize this. On the other hand, there are no limitations on locations to configure in nginx.conf. You can configure anything you like, even locations which can't match real URIs. Moreover, such locations can be used for artificial processing with rewrites, and I've seen such configs in practice. This is not something I can recommend though. -- Maxim Dounin http://nginx.org/ From pnickerson at cashstar.com Fri Feb 10 15:20:01 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Fri, 10 Feb 2017 10:20:01 -0500 Subject: Behavior of realip module with this config In-Reply-To: <20170210123335.GH46625@mdounin.ru> References: <20170210123335.GH46625@mdounin.ru> Message-ID: On Fri, Feb 10, 2017 at 7:33 AM, Maxim Dounin wrote: > And real_ip_recursive switched on means that this happens > recursively. As a result, with the configuration in question > nginx will use the first address in X-Forwarded-For provided, if > any (assuming all addresses are valid). > Note that "set_real_ip_from 0.0.0.0/0" makes client's address as > seen by nginx easily spoofable by any client, and it is generally > a bad idea to use it in production. Thank you for the reply, Maxim. "set_real_ip_from 0.0.0.0/0" does indeed seem like a bad idea in production. Thank you for calling that out. I am confused by this statement in the documentation: http://nginx.org/en/docs/http/ngx_http_realip_module.html "If recursive search is enabled, the original client address that matches one of the trusted addresses is replaced by the last non-trusted address sent in the request header field." The language "last non-trusted address" suggests that NGINX looks for something in real_ip_header which does not match set_real_ip_from. But maybe I am interpreting that incorrectly. If set_real_ip_from were set correctly to the host's content delivery network, load balancer, and reverse proxy infrastructures, then my interpretation would make sense, as $remote_addr would then get set to the client's public IP, even if the client has network address translation and forward proxy infrastructures which append to X-Forwarded-For.
If set_real_ip_from were set correctly to the host's content delivery network, load balancer, and reverse proxy infrastructures, then my interpretation would make sense, as $remote_addr would then get set to the client's public IP, even if the client has network address translation and forward proxy infrastructures which append to X-Forwarded-For. But in your answer, wouldn't $remote_addr be set to the client's private IP address if their firewall/gateway adds that private IP address to X-Forwarded-For while it does the NATing? That doesn't seem very useful. This is an example situation I'm thinking of (all the IPs are random, and are the IPs "facing" NGINX): set_real_ip_from 10.6.1.0/24, 8.47.98.0/24; real_ip_header X-Forwarded-For; real_ip_recursive on; client's computer (192.168.1.79) > client's gateway (178.150.189.138) > my content delivery network (8.47.98.129) > my load balancer (10.6.1.56) > my NGINX box X-Forwarded-For = 192.168.1.79, 178.150.189.138, 8.47.98.129 I think in your answer, $remote_addr would be set to 192.168.1.25, while in my interpretation, it would be set to 178.150.189.138. And in either case, $realip_remote_addr is 10.6.1.56. It would be a strangely configured client gateway / firewall / NAT / proxy that adds to X-Forwarded-For, but it can happen. I guess I am still confused. ~ Paul Nickerson -- *CONFIDENTIALITY NOTICE* The attached information is PRIVILEGED AND CONFIDENTIAL and is intended only for the use of the addressee named above. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering the message to the intended recipient, please be aware that any dissemination, distribution or duplication of this communication is strictly prohibited. If you receive this communication in error, please notify us immediately by telephone, delete the message and destroy any printed copy of the message. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Feb 10 16:05:08 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Feb 2017 19:05:08 +0300 Subject: Behavior of realip module with this config In-Reply-To: References: <20170210123335.GH46625@mdounin.ru> Message-ID: <20170210160508.GL46625@mdounin.ru> Hello! On Fri, Feb 10, 2017 at 10:20:01AM -0500, Paul Nickerson wrote: > On Fri, Feb 10, 2017 at 7:33 AM, Maxim Dounin wrote: > > And real_ip_recursive switched on means that this happens > > recursively. As a result, with the configuration in question > > nginx will use the first address in X-Forwarded-For provided, if > > any (assuming all addresses are valid). > > Note that "set_real_ip_from 0.0.0.0/0" makes client's address as > > seen by nginx easily spoofable by any client, and it is generally > > a bad idea to use it in production. > > Thank you for the reply, Maxim. "set_real_ip_from 0.0.0.0/0" does indeed > seem like a bad idea in production. Thank you for calling that out. > > I am confused by this statement in the documentation: > http://nginx.org/en/docs/http/ngx_http_realip_module.html > "If recursive search is enabled, the original client address that matches > one of the trusted addresses is replaced by the last non-trusted address > sent in the request header field." > > The language "last non-trusted address" suggests that NGINX looks for > something in real_ip_header which does not match set_real_ip_from. But > maybe I am interpreting that incorrectly. > > If set_real_ip_from were set correctly to the host's content delivery > network, load balancer, and reverse proxy infrastructures, then my > interpretation would make sense, as $remote_addr would then get set to the > client's public IP, even if the client has network address translation and > forward proxy infrastructures which append to X-Forwarded-For. 
But in your > answer, wouldn't $remote_addr be set to the client's private IP address if > their firewall/gateway adds that private IP address to X-Forwarded-For > while it does the NATing? That doesn't seem very useful. > > This is an example situation I'm thinking of (all the IPs are random, and > are the IPs "facing" NGINX): > > set_real_ip_from 10.6.1.0/24; > set_real_ip_from 8.47.98.0/24; > real_ip_header X-Forwarded-For; > real_ip_recursive on; > > client's computer (192.168.1.79) > client's gateway (178.150.189.138) > my > content delivery network (8.47.98.129) > my load balancer (10.6.1.56) > my > NGINX box > > X-Forwarded-For = 192.168.1.79, 178.150.189.138, 8.47.98.129 > > I think in your answer, $remote_addr would be set to 192.168.1.79, while in > my interpretation, it would be set to 178.150.189.138. And in either case, > $realip_remote_addr is 10.6.1.56. Note that my answer ("with the configuration in question nginx will use the first address in X-Forwarded-For provided") only applies to the particular configuration with "set_real_ip_from 0.0.0.0/0", and it is incorrect to assume it can be used as a universal answer to all questions. Let me elaborate on how things work a bit more; hopefully it will help you understand things better. In this case the original client address, as obtained from the TCP connection, will be 10.6.1.56. $remote_addr = 10.6.1.56 Since this address is trusted (listed in set_real_ip_from), nginx will process the X-Forwarded-For header. In the first step, it will use the most recent address from X-Forwarded-For, 8.47.98.129. So the result will be: $remote_addr = 8.47.98.129 (rest of X-Forwarded-For) = 192.168.1.79, 178.150.189.138 Since real_ip_recursive is on, nginx will then repeat the whole process using the new client address and the rest of X-Forwarded-For.
The address 8.47.98.129 is again trusted, so nginx will use the next address from X-Forwarded-For, 178.150.189.138: $remote_addr = 178.150.189.138 (rest of X-Forwarded-For) = 192.168.1.79 Since 178.150.189.138 is not trusted, the process will stop here. The sentence : If recursive search is enabled, the original client : address that matches one of the trusted addresses is replaced by : the last non-trusted address sent in the request header field. in the documentation might be a bit confusing since it doesn't explain the process. Though it actually describes what happens as the result of the process: the last (rightmost) untrusted address from X-Forwarded-For is used as client's address. The last untrusted address in "192.168.1.79, 178.150.189.138, 8.47.98.129" is 178.150.189.138. The only address which follows it, 8.47.98.129, is trusted. So 178.150.189.138 is used as the client address. -- Maxim Dounin http://nginx.org/ From pnickerson at cashstar.com Fri Feb 10 18:29:59 2017 From: pnickerson at cashstar.com (Paul Nickerson) Date: Fri, 10 Feb 2017 13:29:59 -0500 Subject: Behavior of realip module with this config In-Reply-To: <20170210160508.GL46625@mdounin.ru> References: <20170210123335.GH46625@mdounin.ru> <20170210160508.GL46625@mdounin.ru> Message-ID: On Fri, Feb 10, 2017 at 11:05 AM, Maxim Dounin wrote: > Note that my answer ("with the configuration in question nginx > will use the first address in X-Forwarded-For provided") only > applies to the particular configuration with "set_real_ip_from > 0.0.0.0/0", and it is incorrect to assume it can be used as an > universal answer to all questions. Ah, OK, I see. Everything is making sense now. I somehow didn't see "with the configuration in question" in your reply. So "set_real_ip_from 0.0.0.0/0" brings in a special case, where the leftmost / first IP address is used. 
It sounds like that's because it recursively searches back through the list for an untrusted IP, and if it doesn't find one, then it keeps whatever was the last one checked, which would be the leftmost IP. This matches what I'm seeing, and I now know how to test out a different configuration. Thank you for the help, Maxim! ~ Paul Nickerson -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Fri Feb 10 23:18:14 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 10 Feb 2017 15:18:14 -0800 Subject: ssl_protocols & SNI In-Reply-To: <20170119133655.GK45866@mdounin.ru> References: <20170119133655.GK45866@mdounin.ru> Message-ID: Hi Maxim, Thanks for explaining why overloading ssl_protocols won't work. Since the problem is with how OpenSSL works, will it work if we use other OpenSSL alternatives? I see people reporting BoringSSL and LibreSSL work fine with nginx. Does nginx still need to be modified to support overloading ssl_protocols, or is it just a matter of a library switch? Thanks! Frank On Thu, Jan 19, 2017 at 5:36 AM, Maxim Dounin wrote: > Hello! > > On Thu, Jan 19, 2017 at 10:04:46AM +0100, B.R. via nginx wrote: > > > Hello, > > > > I tried to overload the value of my default ssl_protocols (http block > > level) in a server block. > > It did not seem to apply the other value in this virtual server only.
> > > > Since I use SNI on my OpenSSL implementation, which perfectly works to > > support multiple virtual servers, I wonder why this SNI capability isn't > > leveraged to apply a different TLS environment depending on the SNI value > and > > the TLS directives configured for the virtual server of the asked domain. > > Can SNI be used for other TLS configuration directives other than > > certificates? > > > > More generally, is it normal that you cannot overload directives such as > > ssl_protocols or ssl_ciphers in a specific virtual server, using the same > > socket as others? > > If so, would it be possible to use SNI to tweak the TLS connection > > environment depending on the domain? > > You can overload ssl_ciphers. You can't overload ssl_protocols > because OpenSSL works this way: it selects the protocol used > before the SNI callback (and this behaviour looks more or less natural > because the existence of SNI depends on the protocol used, and, > for example, you can't enable SSLv3 in an SNI-based virtual host). > > In general, whether or not some SSL feature can be tweaked for > SNI-based virtual hosts depends on two factors: > > - if it's at all possible; > - how OpenSSL handles it. > > In some cases nginx also tries to provide per-virtualhost support > even for things OpenSSL doesn't handle natively, e.g., ssl_verify, > ssl_verify_depth, ssl_prefer_server_ciphers. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
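A configuration sketch may make the distinction concrete (the server names and certificate paths here are hypothetical, not taken from the thread): per-virtual-server ssl_ciphers overrides are honored on a shared socket, while a different ssl_protocols set needs its own listen socket:

```nginx
# Two SNI-based virtual servers on the same socket: a per-server
# ssl_ciphers override works, a per-server ssl_protocols does not.
server {
    listen 443 ssl;
    server_name a.example.com;                # hypothetical name
    ssl_certificate     /etc/nginx/a.crt;     # hypothetical paths
    ssl_certificate_key /etc/nginx/a.key;
    ssl_ciphers         HIGH:!aNULL:!MD5;
}

server {
    listen 443 ssl;
    server_name b.example.com;
    ssl_certificate     /etc/nginx/b.crt;
    ssl_certificate_key /etc/nginx/b.key;
    ssl_ciphers         ECDHE-RSA-AES256-GCM-SHA384;  # this override is honored
}

# A different ssl_protocols set needs its own listen socket (another
# address or port), since OpenSSL fixes the protocol before the SNI
# callback runs:
server {
    listen 8443 ssl;
    server_name legacy.example.com;
    ssl_certificate     /etc/nginx/legacy.crt;
    ssl_certificate_key /etc/nginx/legacy.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
}
```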
URL: From nginx-forum at forum.nginx.org Sat Feb 11 05:52:58 2017 From: nginx-forum at forum.nginx.org (vishnu.dinoct) Date: Sat, 11 Feb 2017 00:52:58 -0500 Subject: no live upstreams and NO previous error In-Reply-To: <8e29d2dcf208c5d726fd2be50a10e699.NginxMailingListEnglish@forum.nginx.org> References: <99d18d74a058c669b0ac0e4b75a9822f.NginxMailingListEnglish@forum.nginx.org> <8e29d2dcf208c5d726fd2be50a10e699.NginxMailingListEnglish@forum.nginx.org> Message-ID: <24b2258a195ad6a6a5707d3e563e7b27.NginxMailingListEnglish@forum.nginx.org> We have JSON parsing data in between via APIs. Is anybody here able to advise? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269577,272345#msg-272345 From llbgurs at gmail.com Sat Feb 11 13:38:38 2017 From: llbgurs at gmail.com (linbo liao) Date: Sat, 11 Feb 2017 21:38:38 +0800 Subject: Location url start with modifier, nginx -t pass the configuration test In-Reply-To: <20170210124502.GI46625@mdounin.ru> References: <20170210124502.GI46625@mdounin.ru> Message-ID: Thank you for the clarification. 2017-02-10 20:45 GMT+08:00 Maxim Dounin : > Hello! > > On Fri, Feb 10, 2017 at 01:31:34PM +0800, linbo liao wrote: > > > I set up a test nginx 1.10.3 on a local VM (CentOS 6.7 x86_64). I configured > > the following location > > > > location =/404.html { > > root /usr/share/nginx/html; > > } > > > > To my understanding, it is not a valid URL, but `nginx -t` passes the > > configuration test. > > > > Is it a bug? > > In this particular case the location is interpreted as "/404.html" > with the "=" modifier (see http://nginx.org/r/location). That is, it > matches requests to "/404.html" exactly. The space between "=" and > "/404.html" can be omitted, and nginx is smart enough to recognize > this. > > On the other hand, there are no limitations on locations to > configure in nginx.conf. You can configure anything you like, > even locations which can't match real URIs.
Moreover, such > locations can be used for artificial processing with rewrites, and > I've seen such configs in practice. This is not something I can > recommend though. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gesh_m1nkov at abv.bg Sat Feb 11 19:15:51 2017 From: gesh_m1nkov at abv.bg (Georgi Minkov) Date: Sat, 11 Feb 2017 21:15:51 +0200 (EET) Subject: proxy_pass isnt loading required resources Message-ID: <1589382283.787904.1486840551931.JavaMail.apache@nm22.abv.bg> Hello :) I'm having difficulties with configuring nginx to serve different applications based on the location in the header. I successfully hit the index page, but the additional resources are not returned (404). I checked the URL in the browser console, and the request URL is not modified with the location. Let's say the address is example.com/test .... but the next request for each resource is not automatically adding the location, so it remains example.com/bootstrap/3.3.7/css/bootstrap-theme.min.css instead of example.com/bootstrap/test/3.3.7/css/bootstrap-theme.min.css. If the location is added to the URL, all is fine. I kindly ask if someone can help me or give me a tip (not that I haven't tried almost everything on the net); I expect someone of you will know the answer, or at least will tell me that this is not supported. Thanks in advance! Best, Georgi -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Feb 11 21:06:29 2017 From: nginx-forum at forum.nginx.org (MichaelLogutov) Date: Sat, 11 Feb 2017 16:06:29 -0500 Subject: Strange requests stalling inside nginx Message-ID: Hello.
We have some strange issues when requests seem to stall inside nginx - in the nginx log we see that a request took 1 second (and was terminated by a client timeout), while exactly the same request (we have special unique headers to mark them) took only 100ms to complete according to the uwsgi logs. We also saw some very strange pauses in the nginx debug log:

2017/01/26 20:17:51 [debug] 2749#0: *75780960 get rr peer, try: 2
2017/01/26 20:17:51 [debug] 2749#0: *75780960 get rr peer, current: 00007F6EC6C3D1F0 -1
2017/01/26 20:17:51 [debug] 2749#0: *75780960 stream socket 500
2017/01/26 20:17:51 [debug] 2749#0: *75780960 epoll add connection: fd:500 ev:80002005
2017/01/26 20:17:51 [debug] 2749#0: *75780960 connect to 10.0.0.176:80, fd:500 #75780961
2017/01/26 20:17:51 [debug] 2749#0: *75780960 http upstream connect: -2
2017/01/26 20:17:51 [debug] 2749#0: *75780960 posix_memalign: 00007F6EC5213D60:128 @16
2017/01/26 20:17:51 [debug] 2749#0: *75780960 event timer add: 500: 10000:1485451081089
2017/01/26 20:17:51 [debug] 2749#0: *75780960 http finalize request: -4, "/api/info/account/type/278933/?" a:1, c:2
2017/01/26 20:17:51 [debug] 2749#0: *75780960 http request count:2 blk:0
2017/01/26 20:17:51 [debug] 2749#0: *75780960 http run request: "/api/info/account/type/278933/?"
2017/01/26 20:17:51 [debug] 2749#0: *75780960 http upstream check client, write event:1, "/api/info/account/type/278933/"
2017/01/26 20:17:51 [debug] 2749#0: *75780960 http upstream recv(): -1 (11: Resource temporarily unavailable)
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream request: "/api/info/account/type/278933/?"
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream send request handler
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream send request
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream send request body
2017/01/26 20:17:54 [debug] 2749#0: *75780960 chain writer buf fl:1 s:433
2017/01/26 20:17:54 [debug] 2749#0: *75780960 chain writer in: 00007F6E975088B0
2017/01/26 20:17:54 [debug] 2749#0: *75780960 writev: 433 of 433
2017/01/26 20:17:54 [debug] 2749#0: *75780960 chain writer out: 0000000000000000
2017/01/26 20:17:54 [debug] 2749#0: *75780960 event timer del: 500: 1485451081089
2017/01/26 20:17:54 [debug] 2749#0: *75780960 event timer add: 500: 200000:1485451274089
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream request: "/api/info/account/type/278933/?"
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream process header
2017/01/26 20:17:54 [debug] 2749#0: *75780960 malloc: 00007F6E97AC0070:4096
2017/01/26 20:17:54 [debug] 2749#0: *75780960 posix_memalign: 00007F6ECACC95D0:4096 @16
2017/01/26 20:17:54 [debug] 2749#0: *75780960 recv: fd:500 408 of 4096
2017/01/26 20:17:54 [debug] 2749#0: *75780960 http proxy status 200 "200 OK"

Any help would be appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272357,272357#msg-272357 From rainer at ultra-secure.de Sun Feb 12 00:26:51 2017 From: rainer at ultra-secure.de (Rainer Duffner) Date: Sun, 12 Feb 2017 01:26:51 +0100 Subject: Trouble with redirects from backend Message-ID: Hi, I have typo3 with nginx running behind an nginx reverse-proxy, mapped to a subdirectory. So, it's www.company.bla/ourtypo3site. Typo3 has the RealURL extension installed, which adds a slash at the end if it's not sent by the browser - this is done via a redirect. The trouble is that when this redirect is issued, the nginx reverse proxy turns it into a redirect to the main site.
www.company.bla/ourtypo3site/some/page turns into www.company.bla/some/page I have the following configuration for nginx on the reverse proxy: location /ourtypo3 { include proxy.conf; proxy_set_header HTTPS on; proxy_pass http://ourtypo3:80/; # http://serverfault.com/questions/444532/reverse-proxy-remove-subdirectory } How can I fix this? At first, I tried hard-coding the redirects on the reverse-proxy. But that's a lot of work and not really a solution. nginx 1.10 Regards Rainer From daniel at linux-nerd.de Sun Feb 12 13:19:12 2017 From: daniel at linux-nerd.de (Daniel) Date: Sun, 12 Feb 2017 14:19:12 +0100 Subject: Move from Apache to nginx Message-ID: Hi there, I am still moving from Apache to nginx. I have a config part in Apache which I don't understand how to convert correctly to nginx. RewriteBase / Options FollowSymLinks AllowOverride All DirectoryIndex index.php Options +FollowSymLinks -Indexes AllowOverride none I know that this has to be done in location blocks, but how? ;) Cheers Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeff.dyke at gmail.com Sun Feb 12 14:28:45 2017 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Sun, 12 Feb 2017 09:28:45 -0500 Subject: Move from Apache to nginx In-Reply-To: References: Message-ID: There is an `allow all` in a location block, but I would recommend that you determine what part of All is really needed from your Apache config and apply only those rules that are needed to make the site work. I used Apache in exactly this way for a while, then dug in and found that I only needed one or two of the directives that can go after AllowOverride. This is not direct help, I realize, but comparing what `allow all` does with what AllowOverride All does will let you make the best, most secure choice for your environment. The default is None, and many configurations change this to All as a quick fix; when changing web servers, give it a second thought.
location ~ /blah/fe { allow all; } I found that I never had to use this in nginx except for serving Let's Encrypt certs out of a directory; I use nginx mostly as a proxy for Scala apps. Jeff On Sun, Feb 12, 2017 at 8:19 AM, Daniel wrote: > Hi there, > > i still moving from Apache to nginx. > > I have a config part in apache which i dont understand how to convert it > correct to nginx. > > > > RewriteBase / > Options FollowSymLinks > AllowOverride All > DirectoryIndex index.php > > > > Options +FollowSymLinks -Indexes > AllowOverride none > > > i Know that this has to be done in locations what how? ;) > > Cheers > > Daniel > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Feb 12 14:44:43 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 12 Feb 2017 17:44:43 +0300 Subject: proxy_pass isnt loading required resources In-Reply-To: <1589382283.787904.1486840551931.JavaMail.apache@nm22.abv.bg> References: <1589382283.787904.1486840551931.JavaMail.apache@nm22.abv.bg> Message-ID: <20170212144443.GN46625@mdounin.ru> Hello! On Sat, Feb 11, 2017 at 09:15:51PM +0200, Georgi Minkov wrote: > Hello :) I`m having difficulties with configuring nginx to serve > different applications based on the location in header. I`m > successfully hit the index page but the additional resources are > not returned (404). Checked the url in browser console and the > request url is not modified with the location. Lets say the > address is example.com/test .... but the next request for each > resource is not automatically adding the location, so remain: > example.com/bootstrap/3.3.7/css/bootstrap-theme.min.css > instead of > example.com/bootstrap/ test /3.3.7/css/bootstrap-theme.min.css > . If the location is added to the URL is all fine.
Kindly > ask if someone can help me or give me a tip (not that i havent > tried almost anything on the net), i suggest someone of you > will know that or at least will tell me that is not supported. This is not something proxy_pass is expected to do for you automatically. It only proxies the resource requested, and doesn't try to modify any links to other resources in the resource itself. Moreover, in many cases it is not at all possible, as the format of the resource can be proprietary. The best solution is to teach your backend to use either relative links to additional resources, or a proper external root URL for resources (usually there is a configuration option for this). You may also try to change links on the fly using the sub_filter directive, see http://nginx.org/r/sub_filter. This is relatively costly and not going to work in all cases, but may help to mitigate problems until there is a possibility to fix the backend. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sun Feb 12 14:58:46 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 12 Feb 2017 17:58:46 +0300 Subject: Strange requests stalling inside nginx In-Reply-To: References: Message-ID: <20170212145846.GO46625@mdounin.ru> Hello! On Sat, Feb 11, 2017 at 04:06:29PM -0500, MichaelLogutov wrote: > Hello. > We have some strange issues when requests seems to stall inside nginx - in > nginx log we see that request took 1 second (and was terminated by client > timeout), while excactly the same request (we have special unique headers to > mark them) from the uwsgi logs took only 100ms to complete. It is not clear how you measure that the request took 1 second. Note well that the request time as seen by nginx is unlikely to be less than the time seen by your backend, but can easily be much longer: for example, nginx has to read the request from the client, and this sometimes takes significant time. > We also saw some very strange pauses in nginx debug log: [...]
I don't see any "strange pauses" in the logs provided. If you think there are any - please elaborate. Note well that it's almost impossible to analyze debug logs for pauses if these logs don't contain information about the processing of events. To make sure all needed information is present, it is important to configure debug logs at the global level, as recommended at http://nginx.org/en/docs/debugging_log.html. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Feb 13 00:32:22 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Feb 2017 03:32:22 +0300 Subject: ssl_protocols & SNI In-Reply-To: References: <20170119133655.GK45866@mdounin.ru> Message-ID: <20170213003221.GP46625@mdounin.ru> Hello! On Fri, Feb 10, 2017 at 03:18:14PM -0800, Frank Liu wrote: > Thanks for explaining why overloading ssl_protocols won't work. Since the > problem is with how OpenSSL works, will it work if we use other openssl > alternatives? I see people reporting boringssl and libressl work fine with > nginx. Does nginx still need to be modified to support overloading > ssl_protocols or is it just a matter of library switch? I doubt there is a difference, as both are OpenSSL forks. And such support will seriously complicate the code with no obvious benefits. Though I've never tested nor looked into the current sources of these libraries for this particular aspect. Either way, if it is implemented by the library, it's highly unlikely that any changes in nginx will be needed. It already does all it can do.
-- Maxim Dounin http://nginx.org/ From he.hailong5 at zte.com.cn Mon Feb 13 06:54:30 2017 From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn) Date: Mon, 13 Feb 2017 14:54:30 +0800 (CST) Subject: having nginx listen the same port more than once Message-ID: <201702131454309535014@zte.com.cn> Hi, I observe that the nginx runs with no error if there are duplicate listen ports configured in the http server block or stream server block. is this behavior as expected? and if a request comes at such a port, which server would serve this request, by radomly or round-robin? Thanks, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Feb 13 07:13:36 2017 From: r at roze.lv (Reinis Rozitis) Date: Mon, 13 Feb 2017 09:13:36 +0200 Subject: having nginx listen the same port more than once In-Reply-To: <201702131454309535014@zte.com.cn> References: <201702131454309535014@zte.com.cn> Message-ID: <002001d285c8$b1defca0$159cf5e0$@roze.lv> > I observe that the nginx runs with no error if there are duplicate listen ports configured in the http server block or stream server block. is this behavior as expected? That is how every webserver capable of name-based virtual hosts works. So yes, it's normal and expected. > and if a request comes at such a port, which server would serve this request, by radomly or round-robin? http://nginx.org/en/docs/http/request_processing.html rr From daniel at linux-nerd.de Mon Feb 13 09:23:44 2017 From: daniel at linux-nerd.de (Daniel) Date: Mon, 13 Feb 2017 10:23:44 +0100 Subject: Apache to nginx Message-ID: <18E57C65-35BB-443E-AC42-DB3AC8E12CBF@linux-nerd.de> Hi, I created a vhost configuration, but I am not able to access /vakanz, for example. I get a 404 error in the access logs.
I Tried already with rewrite rules and i also tried with locations, no matter what i do, nothing works. Anyone has an idea what can i do? Cheers Daniel server { listen 80; root /var/www/vhosts/reisen/sbo/current/web; rewrite ^/static/(.*) /var/www/vhosts/reisen/fe/static/$1 last; rewrite ^/hrouter.js /var/www/vhosts/reisen/fe/index.php last; rewrite ^/router.js /var/www/vhosts/reisen/fe/index.php last; rewrite ^/(vakanz|vrij|ajax|boek|buchen)$ /var/www/vhosts/reisen/fe/index.php last; rewrite ^/(vakanz|vrij|ajax|boek|buchen)/.* /var/www/vhosts/reisen/fe/index.php last; rewrite ^/himage/.* /var/www/vhosts/reisen/fe/index.php last; rewrite ^/image/.* /var/www/vhosts/reisen/fe/index.php last; rewrite ^/images/.* /var/www/vhosts/reisen/fe/index.php last; rewrite ^/nur-flug$ /flight/destination permanent; set $my_https "off"; if ($http_x_forwarded_proto = "https") { set $my_https "on"; } server_name preprod.reisen.de; location / { index app.php; add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept"; add_header Access-Control-Allow-Origin "*"; if (-f $request_filename) { break; } try_files $uri @rewriteapp; } location @rewriteapp { if ( $request_filename !~ opcache\.php ){ rewrite ^(.*)$ /app.php/$1 last; } } #rewrite ^/(vakanz|vrij|ajax|boek|buchen)$ /var/www/vhosts/reisen/fe/index.php last; #rewrite ^/(vakanz|vrij|ajax|boek|buchen)/.* /var/www/vhosts/reisen/fe/index.php last; # location /vakanz { # alias /var/www/vhosts/reisen/fe/; # } location ~* .js$ {add_header Service-Worker-Allowed "/"; } location ~ ^/app\.php/_apilogger(/|$) { fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param HTTPS $my_https; fastcgi_param SYMFONY__CMS__ENABLED false; 
fastcgi_param CMS_ENABLED false; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept"; add_header Access-Control-Allow-Origin "*"; # Prevents URIs that include the front controller. This will 404: # http://domain.tld/app.php/some-path # Remove the internal directive to allow URIs like this internal; } location ~ ^/proxy\.php(\?|/|$) { fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_split_path_info ^(.+\.php)(.*)$; include fastcgi_params; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param HTTPS $my_https; fastcgi_param SYMFONY__CMS__ENABLED false; fastcgi_param CMS_ENABLED false; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept"; add_header Access-Control-Allow-Origin "*"; # Prevents URIs that include the front controller. 
This will 404: # http://domain.tld/app.php/some-path # Remove the internal directive to allow URIs like this #internal; } location ~ ^/app\.php(/|$) { fastcgi_pass unix:/var/run/php/php7.0-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param HTTPS $my_https; fastcgi_param SYMFONY__CMS__ENABLED false; fastcgi_param CMS_ENABLED false; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept"; add_header Access-Control-Allow-Origin "*"; # Prevents URIs that include the front controller. This will 404: # http://domain.tld/app.php/some-path # Remove the internal directive to allow URIs like this internal; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Mon Feb 13 11:42:22 2017 From: iippolitov at nginx.com (Igor A. 
Ippolitov) Date: Mon, 13 Feb 2017 14:42:22 +0300 Subject: having nginx listen the same port more than once In-Reply-To: <002001d285c8$b1defca0$159cf5e0$@roze.lv> References: <201702131454309535014@zte.com.cn> <002001d285c8$b1defca0$159cf5e0$@roze.lv> Message-ID: Assuming a configuration with multiple similar 'listen' and 'server_name' statements, only the first one will work: > server { > listen 9090; > return 404; > server_name example.com; > } > server { > listen 9090; > return 403; > server_name example.com; > } > server { > listen 9090; > return 400; > server_name example.com; > } > nginx: [warn] conflicting server name "example.com" on 0.0.0.0:9090, > ignored > nginx: [warn] conflicting server name "example.com" on 0.0.0.0:9090, > ignored Afaik, the only reply you would be able to get from such configuration is '404' On 13.02.2017 10:13, Reinis Rozitis wrote: >> I observe that the nginx runs with no error if there are duplicate listen ports configured in the http server block or stream server block. > is this behavior as expected? > > That is how every webserver capable of name based virtual hosts works. > So yes it's normal and expected. > >> and if a request comes at such a port, which server would serve this request, by radomly or round-robin? > http://nginx.org/en/docs/http/request_processing.html > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Feb 13 12:16:00 2017 From: nginx-forum at forum.nginx.org (MichaelLogutov) Date: Mon, 13 Feb 2017 07:16:00 -0500 Subject: Strange requests stalling inside nginx In-Reply-To: <20170212145846.GO46625@mdounin.ru> References: <20170212145846.GO46625@mdounin.ru> Message-ID: <7d907dbaa51f69ead0f3ed789130748f.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! 
> > On Sat, Feb 11, 2017 at 04:06:29PM -0500, MichaelLogutov wrote: > > > Hello. > > We have some strange issues when requests seems to stall inside > nginx - in > > nginx log we see that request took 1 second (and was terminated by > client > > timeout), while excactly the same request (we have special unique > headers to > > mark them) from the uwsgi logs took only 100ms to complete. > > It is not clear how do you measure that request took 1 second. > Note well that request time as seen by nginx is unlikely to be > less that the time seen by your backend, but can easily be much > longer: for example, nginx has to read the request from client, > and it sometimes takes significant time. > > > We also saw some very strange pauses in nginx debug log: > > [...] > > I don't see any "strange pauses" in the logs provided. If you > think there are any - please elaborate. > > Note well that it's almost impossible to analize debug logs for > pauses if these logs doesn't contain information about processing > of events. To make sure all needed information is present it is > important to configure debug logs at global level, as recommended > at http://nginx.org/en/docs/debugging_log.html. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thank you. I was talking about 3 seconds gap here: 2017/01/26 20:17:51 [debug] 2749#0: *75780960 http upstream recv(): -1 (11: Resource temporarily unavailable) 2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream request: "/api/info/account/type/278933/?" 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272357,272371#msg-272371 From mdounin at mdounin.ru Mon Feb 13 12:37:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Feb 2017 15:37:57 +0300 Subject: Strange requests stalling inside nginx In-Reply-To: <7d907dbaa51f69ead0f3ed789130748f.NginxMailingListEnglish@forum.nginx.org> References: <20170212145846.GO46625@mdounin.ru> <7d907dbaa51f69ead0f3ed789130748f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170213123757.GQ46625@mdounin.ru> Hello! On Mon, Feb 13, 2017 at 07:16:00AM -0500, MichaelLogutov wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Sat, Feb 11, 2017 at 04:06:29PM -0500, MichaelLogutov wrote: > > > > > Hello. > > > We have some strange issues when requests seems to stall inside > > nginx - in > > > nginx log we see that request took 1 second (and was terminated by > > client > > > timeout), while excactly the same request (we have special unique > > headers to > > > mark them) from the uwsgi logs took only 100ms to complete. > > > > It is not clear how do you measure that request took 1 second. > > Note well that request time as seen by nginx is unlikely to be > > less that the time seen by your backend, but can easily be much > > longer: for example, nginx has to read the request from client, > > and it sometimes takes significant time. > > > > > We also saw some very strange pauses in nginx debug log: > > > > [...] > > > > I don't see any "strange pauses" in the logs provided. If you > > think there are any - please elaborate. > > > > Note well that it's almost impossible to analize debug logs for > > pauses if these logs doesn't contain information about processing > > of events. To make sure all needed information is present it is > > important to configure debug logs at global level, as recommended > > at http://nginx.org/en/docs/debugging_log.html. > > Thank you. 
> I was talking about the 3-second gap here:
>
> 2017/01/26 20:17:51 [debug] 2749#0: *75780960 http upstream recv(): -1 (11: Resource temporarily unavailable)
> 2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream request: "/api/info/account/type/278933/?"

Ah, sorry, I didn't notice this. The time gap here looks like waiting for connect() to complete. It follows connect():

2017/01/26 20:17:51 [debug] 2749#0: *75780960 connect to 10.0.0.176:80, fd:500 #75780961

and in turn it is followed by sending the request to the upstream server:

2017/01/26 20:17:54 [debug] 2749#0: *75780960 http upstream send request handler

(The "http upstream recv()" message is part of checking whether the client closed the connection, and is unrelated.)

The 3 seconds here suggest that there is packet loss somewhere between nginx and the backend. Most likely the backend is simply overloaded, and its listen socket queue overflows and drops packets. On Linux it may be a good idea to set net.ipv4.tcp_abort_on_overflow=1 to better diagnose such problems. Examining the listen queue via "ss -nltp" on the backend might also help.

--
Maxim Dounin
http://nginx.org/

From alex.addy at visionistinc.com Mon Feb 13 15:19:36 2017
From: alex.addy at visionistinc.com (Alex Addy)
Date: Mon, 13 Feb 2017 10:19:36 -0500
Subject: SSL DN variable unicode handling
Message-ID: 

While testing the new $ssl_client_s_dn variable (as of 1.11.6) I discovered that it doesn't handle unicode characters correctly. Say I had a user cert with a DN of 'CN=доверенная третья сторона'; after going through nginx it becomes 'CN=\D0\B4\D0\BE\D0\B2\D0\B5\D1\80\D0\B5\D0\BD\D0\BD\D0\B0\D1\8F \D1\82\D1\80\D0\B5\D1\82\D1\8C\D1\8F \D1\81\D1\82\D0\BE\D1\80\D0\BE\D0\BD\D0\B0'. This is not very helpful, and unicode support here is the only thing preventing our project from using nginx. We are currently using httpd, as it handles this case in a more friendly way, but we would like to move away from it.

Is there something I can set to fix this?
Is this a bug or is it working as intended?

Thank you for your time,
-------------
- Alex Addy -
-------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Mon Feb 13 16:01:19 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 13 Feb 2017 19:01:19 +0300
Subject: SSL DN variable unicode handling
In-Reply-To: 
References: 
Message-ID: <20170213160119.GR46625@mdounin.ru>

Hello!

On Mon, Feb 13, 2017 at 10:19:36AM -0500, Alex Addy wrote:

> While testing the new $ssl_client_s_dn variable (as of 1.11.6) I discovered that it doesn't handle unicode characters correctly. Say I had a user cert with a DN of 'CN=доверенная третья сторона'; after going through nginx it becomes 'CN=\D0\B4\D0\BE\D0\B2\D0\B5\D1\80\D0\B5\D0\BD\D0\BD\D0\B0\D1\8F \D1\82\D1\80\D0\B5\D1\82\D1\8C\D1\8F \D1\81\D1\82\D0\BE\D1\80\D0\BE\D0\BD\D0\B0'.
> This is not very helpful, and unicode support here is the only thing preventing our project from using nginx. We are currently using httpd, as it handles this case in a more friendly way, but we would like to move away from it.
>
> Is there something I can set to fix this? Is this a bug or is it working as intended?

This is intended.

To produce the $ssl_client_s_dn variable nginx uses the X509_NAME_print_ex() function with the XN_FLAG_RFC2253 flag. It doesn't try to preserve multibyte characters unescaped (it is possible by unsetting the ASN1_STRFLGS_ESC_MSB flag), as it is unknown whether the variable will be used in a UTF-8-friendly environment or not.

Note well that the form with escaped multibyte characters is correct as per RFC 4514 (and RFC 2253), as RFC 4514 allows escaping of any characters. Any compliant software can properly parse the resulting string back to the original DN.

If you need a variable with multibyte characters unescaped for some reason, it might be a good idea to elaborate on what you are trying to do.
Maybe someone will be able to suggest a better / alternative way to do this.

--
Maxim Dounin
http://nginx.org/

From alex.addy at visionistinc.com Mon Feb 13 16:42:55 2017
From: alex.addy at visionistinc.com (Alex Addy)
Date: Mon, 13 Feb 2017 11:42:55 -0500
Subject: SSL DN variable unicode handling
In-Reply-To: <20170213160119.GR46625@mdounin.ru>
References: <20170213160119.GR46625@mdounin.ru>
Message-ID: 

We are using the DN of a user's certificate to identify them in the system. This makes for a better user experience once they have installed their certificates. But having the backing services parse the DN just to get something readable is annoying at the least, and the way things are moving we may end up having to do more complicated things with the DN, so the addition of this would be relatively minor.

On Mon, Feb 13, 2017 at 11:01 AM, Maxim Dounin wrote:

> Hello!
>
> On Mon, Feb 13, 2017 at 10:19:36AM -0500, Alex Addy wrote:
>
> > While testing the new $ssl_client_s_dn variable (as of 1.11.6) I discovered that it doesn't handle unicode characters correctly. Say I had a user cert with a DN of 'CN=доверенная третья сторона'; after going through nginx it becomes 'CN=\D0\B4\D0\BE\D0\B2\D0\B5\D1\80\D0\B5\D0\BD\D0\BD\D0\B0\D1\8F \D1\82\D1\80\D0\B5\D1\82\D1\8C\D1\8F \D1\81\D1\82\D0\BE\D1\80\D0\BE\D0\BD\D0\B0'.
> > This is not very helpful, and unicode support here is the only thing preventing our project from using nginx. We are currently using httpd, as it handles this case in a more friendly way, but we would like to move away from it.
> >
> > Is there something I can set to fix this? Is this a bug or is it working as intended?
>
> This is intended.
>
> To produce the $ssl_client_s_dn variable nginx uses the X509_NAME_print_ex() function with the XN_FLAG_RFC2253 flag.
> It doesn't try to preserve multibyte characters unescaped (it is possible by unsetting the ASN1_STRFLGS_ESC_MSB flag), as it is unknown whether the variable will be used in a UTF-8-friendly environment or not.
>
> Note well that the form with escaped multibyte characters is correct as per RFC 4514 (and RFC 2253), as RFC 4514 allows escaping of any characters. Any compliant software can properly parse the resulting string back to the original DN.
>
> If you need a variable with multibyte characters unescaped for some reason, it might be a good idea to elaborate on what you are trying to do. Maybe someone will be able to suggest a better / alternative way to do this.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
-------------
- Alex Addy -
-------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Mon Feb 13 18:34:09 2017
From: francis at daoine.org (Francis Daly)
Date: Mon, 13 Feb 2017 18:34:09 +0000
Subject: Apache to nginx
In-Reply-To: <18E57C65-35BB-443E-AC42-DB3AC8E12CBF@linux-nerd.de>
References: <18E57C65-35BB-443E-AC42-DB3AC8E12CBF@linux-nerd.de>
Message-ID: <20170213183409.GM2958@daoine.org>

On Mon, Feb 13, 2017 at 10:23:44AM +0100, Daniel wrote:

Hi there,

> I created a vhost configuration, but I am not able to access /vakanz, for example.
> I get a 404 error in the access logs.
> I already tried rewrite rules and I also tried locations; no matter what I do, nothing works.
> Does anyone have an idea what I can do?

> rewrite ^/(vakanz|vrij|ajax|boek|buchen)$ /var/www/vhosts/reisen/fe/index.php last;

There is "filename space" (/var/www/whatever) and "url space" (/vakanz/whatever). "rewrite" changes from "url space" to "url space"; it does not involve "filename space" at all.
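Francis's distinction can be sketched in config form. This is an illustration only: the paths are borrowed from Daniel's message, and the fastcgi details are assumed rather than taken from his setup.

```nginx
# Sketch only: paths come from Daniel's example; fastcgi details are assumed.
location /vakanz {
    # "rewrite" maps url space -> url space: /vakanz/... becomes /index.php.
    rewrite ^ /index.php last;
}

location = /index.php {
    # "root" does the mapping into filename space: the url /index.php
    # is served from /var/www/vhosts/reisen/fe/index.php.
    root /var/www/vhosts/reisen/fe;
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

The key point is that the rewrite target is a URI, not a filesystem path; the URI-to-file mapping happens separately via root/alias.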
I think that most of your "rewrites" will probably lead to 404s, because you do not have files matching urls that start with /var/www.

If you start again with just one thought -- what do you want nginx to do when you request the url /vakanz/ -- it may become clearer what config you want to have in nginx.conf.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Mon Feb 13 22:33:26 2017
From: nginx-forum at forum.nginx.org (brookscunningham)
Date: Mon, 13 Feb 2017 17:33:26 -0500
Subject: TLS Multiplexing to the Origin Server
Message-ID: 

Hello All,

I am seeing an increase in the number of new TLS connections to my origin server when using NGINX as a reverse proxy. I am offloading TLS at NGINX and starting a new TLS connection to the origin.

The workflow is as follows:

client --> NGINX --> origin server

I would expect NGINX to either persist a handful of TLS connections or, at a minimum, re-use previously established TLS connections using TLS session tickets. However, the behavior we see is that NGINX apparently opens a new TLS connection to the origin for nearly every client request. This means going through the full asymmetric TLS handshake for nearly every request, which is undesirable for both the added latency and the CPU cost of the full handshake.

I have validated that my origin server supports TLS session re-use by using the following openssl command.

echo | openssl s_client -tls1_2 -reconnect -state -prexit -connect <origin server IP>:443 | grep -i session-id

Below is the output from "nginx -v"

nginx version: nginx/1.8.1

How can I either persist existing TLS connections or leverage TLS session tickets?

I found the following link that may be relevant.
http://hg.nginx.org/nginx/rev/1356a3b96924

Thanks!
Brooks

P.S. Below is the relevant proxy config that I have for my origin server.
#proxy rules in place for the domain

proxy_redirect off;
proxy_connect_timeout 15;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffers 8 16k;
proxy_buffer_size 16k;
proxy_busy_buffers_size 64k;

proxy_cache XNXFILES;
proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_cache_valid 301 302 0m;
proxy_cache_valid 200 60m;
proxy_cache_key $host$request_uri;
proxy_http_version 1.1;
proxy_set_header Connection "";

proxy_set_header Accept-Encoding 'gzip';

# The variable $host sets the Host request header sent to the origin server.
proxy_set_header Host $host;

# The variables REQUEST_PROTO and PROXY_TO are used when determining which origin to use.
proxy_pass $REQUEST_PROTO://$PROXY_TO;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272389,272389#msg-272389

From r1ch+nginx at teamliquid.net Mon Feb 13 23:21:06 2017
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Tue, 14 Feb 2017 00:21:06 +0100
Subject: TLS Multiplexing to the Origin Server
In-Reply-To: 
References: 
Message-ID: 

You'll want to proxy_pass to a named upstream with keepalive enabled.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

On Mon, Feb 13, 2017 at 11:33 PM, brookscunningham <nginx-forum at forum.nginx.org> wrote:

> Hello All,
>
> I am seeing an increase in the number of new TLS connections to my origin server when using NGINX as a reverse proxy. I am offloading TLS at NGINX and starting a new TLS connection to the origin.
>
> The workflow is as follows:
>
> client --> NGINX --> origin server
>
> I would expect NGINX to either persist a handful of TLS connections or, at a minimum, re-use previously established TLS connections using TLS session tickets. However, the behavior we see is that NGINX apparently opens a new TLS connection to the origin for nearly every client request. This means going through the full asymmetric TLS handshake for nearly every request.
> This is not desirable for both the added latency and the CPU performance hit of going through the full TLS handshake.
>
> I have validated that my origin server supports TLS session re-use by using the following openssl command.
>
> echo | openssl s_client -tls1_2 -reconnect -state -prexit -connect <origin server IP>:443 | grep -i session-id
>
> Below is the output from "nginx -v"
>
> nginx version: nginx/1.8.1
>
> How can I either persist existing TLS connections or leverage TLS session tickets?
>
> I found the following link that may be relevant.
> http://hg.nginx.org/nginx/rev/1356a3b96924
>
> Thanks!
> Brooks
>
> P.S. Below is the relevant proxy config that I have for my origin server.
>
> #proxy rules in place for the domain
>
> proxy_redirect off;
> proxy_connect_timeout 15;
> proxy_send_timeout 60;
> proxy_read_timeout 60;
> proxy_buffers 8 16k;
> proxy_buffer_size 16k;
> proxy_busy_buffers_size 64k;
>
> proxy_cache XNXFILES;
> proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504;
> proxy_cache_valid 301 302 0m;
> proxy_cache_valid 200 60m;
> proxy_cache_key $host$request_uri;
> proxy_http_version 1.1;
> proxy_set_header Connection "";
>
> proxy_set_header Accept-Encoding 'gzip';
>
> # The variable $host sets the Host request header sent to the origin server.
> proxy_set_header Host $host;
>
> # The variables REQUEST_PROTO and PROXY_TO are used when determining which origin to use.
> proxy_pass $REQUEST_PROTO://$PROXY_TO;
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272389,272389#msg-272389
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
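Richard's suggestion can be sketched as follows. This is an illustration only; the upstream name and address are placeholders, not taken from Brooks's configuration:

```nginx
# Sketch only: "origin" and 192.0.2.10 are placeholders for the real backend.
upstream origin {
    server 192.0.2.10:443;
    keepalive 16;   # number of idle connections kept open per worker
}

server {
    listen 443 ssl;

    location / {
        proxy_pass https://origin;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1...
        proxy_set_header Connection "";  # ...and no "Connection: close" header
    }
}
```

With a named upstream and keepalive, nginx can re-use established (TLS) connections to the origin instead of performing a full handshake for every request. Note that keepalive does not work with a proxy_pass target built from variables, as in Brooks's `$REQUEST_PROTO://$PROXY_TO` setup, so the origin would need to be referenced by the upstream name.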
URL: 

From he.hailong5 at zte.com.cn Tue Feb 14 08:09:10 2017
From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn)
Date: Tue, 14 Feb 2017 16:09:10 +0800 (CST)
Subject: Re: having nginx listen the same port more than once
References: mailman.15.1486987200.51480.nginx@nginx.org0, 201702132119263054165@zte.com.cn
Message-ID: <201702141609101670243@zte.com.cn>

now I understand the duplicate listen ports configured in the http block can be used to implement virtual hosts.

but what's the purpose of allowing this in the stream block? in my practice (with 1.9.15.1), nginx will randomly select a backend to serve the tcp/udp request, which seems useless.

b.t.w., I also tried with 1.11.x today; it looks like configuring duplicate listen ports in a stream block is forbidden.

BR,

Joe

Original mail
From: <nginx-request@nginx.org>
To: <nginx@nginx.org>
Date: 13 Feb 2017, 20:00
Subject: nginx Digest, Vol 88, Issue 18

Today's Topics:

   1. RE: having nginx listen the same port more than once (Reinis Rozitis)
   2. Apache to nginx (Daniel)
   3. Re: having nginx listen the same port more than once (Igor A. Ippolitov)

----------------------------------------------------------------------

Message: 1
Date: Mon, 13 Feb 2017 09:13:36 +0200
From: "Reinis Rozitis" <r@roze.lv>
To: <nginx@nginx.org>
Subject: RE: having nginx listen the same port more than once
Message-ID: <002001d285c8$b1defca0$159cf5e0$@roze.lv>
Content-Type: text/plain; charset="UTF-8"

> I observe that nginx runs with no error if there are duplicate listen ports configured in the http server block or stream server block.
> is this behavior as expected?

That is how every webserver capable of name based virtual hosts works.
So yes, it's normal and expected.

> and if a request comes at such a port, which server would serve this request, randomly or round-robin?

http://nginx.org/en/docs/http/request_processing.html

rr

------------------------------

Message: 2
Date: Mon, 13 Feb 2017 10:23:44 +0100
From: Daniel <daniel@linux-nerd.de>
To: nginx@nginx.org
Subject: Apache to nginx
Message-ID: <18E57C65-35BB-443E-AC42-DB3AC8E12CBF@linux-nerd.de>
Content-Type: text/plain; charset="us-ascii"

Hi,

I created a vhost configuration, but I am not able to access /vakanz, for example.
I get a 404 error in the access logs.
I already tried rewrite rules and I also tried locations; no matter what I do, nothing works.
Does anyone have an idea what I can do?

Cheers

Daniel

    server {
        listen 80;
        root /var/www/vhosts/reisen/sbo/current/web;

        rewrite ^/static/(.*) /var/www/vhosts/reisen/fe/static/$1 last;
        rewrite ^/hrouter.js /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/router.js /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/(vakanz|vrij|ajax|boek|buchen)$ /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/(vakanz|vrij|ajax|boek|buchen)/.* /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/himage/.* /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/image/.* /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/images/.* /var/www/vhosts/reisen/fe/index.php last;
        rewrite ^/nur-flug$ /flight/destination permanent;

        set $my_https "off";
        if ($http_x_forwarded_proto = "https") {
            set $my_https "on";
        }
        server_name preprod.reisen.de;

        location / {
            index app.php;
            add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
            add_header Access-Control-Allow-Origin "*";
            if (-f $request_filename) {
                break;
            }
            try_files $uri @rewriteapp;
        }

        location @rewriteapp {
            if ( $request_filename !~ opcache\.php ) {
                rewrite ^(.*)$ /app.php/$1 last;
            }
        }

        #rewrite ^/(vakanz|vrij|ajax|boek|buchen)$ /var/www/vhosts/reisen/fe/index.php last;
        #rewrite ^/(vakanz|vrij|ajax|boek|buchen)/.* /var/www/vhosts/reisen/fe/index.php last;

        # location /vakanz {
        #     alias /var/www/vhosts/reisen/fe/;
        # }

        location ~* .js$ {
            add_header Service-Worker-Allowed "/";
        }

        location ~ ^/app\.php/_apilogger(/|$) {
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param HTTPS $my_https;
            fastcgi_param SYMFONY__CMS__ENABLED false;
            fastcgi_param CMS_ENABLED false;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
            add_header Access-Control-Allow-Origin "*";

            # Prevents URIs that include the front controller. This will 404:
            # http://domain.tld/app.php/some-path
            # Remove the internal directive to allow URIs like this
            internal;
        }

        location ~ ^/proxy\.php(\?|/|$) {
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_split_path_info ^(.+\.php)(.*)$;
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param HTTPS $my_https;
            fastcgi_param SYMFONY__CMS__ENABLED false;
            fastcgi_param CMS_ENABLED false;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
            add_header Access-Control-Allow-Origin "*";

            # Prevents URIs that include the front controller. This will 404:
            # http://domain.tld/app.php/some-path
            # Remove the internal directive to allow URIs like this
            #internal
        }

        location ~ ^/app\.php(/|$) {
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param HTTPS $my_https;
            fastcgi_param SYMFONY__CMS__ENABLED false;
            fastcgi_param CMS_ENABLED false;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            add_header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept";
            add_header Access-Control-Allow-Origin "*";

            # Prevents URIs that include the front controller. This will 404:
            # http://domain.tld/app.php/some-path
            # Remove the internal directive to allow URIs like this
            internal;
        }

    }

------------------------------

Message: 3
Date: Mon, 13 Feb 2017 14:42:22 +0300
From: "Igor A. Ippolitov" <iippolitov@nginx.com>
To: nginx@nginx.org
Subject: Re: having nginx listen the same port more than once
Message-ID: <be38a1be-d5e2-de09-688f-642326c5caeb@nginx.com>
Content-Type: text/plain; charset=windows-1252; format=flowed

Assuming a configuration with multiple similar 'listen' and 'server_name' statements, only the first one will work:

>     server {
>         listen 9090;
>         return 404;
>         server_name example.com;
>     }
>     server {
>         listen 9090;
>         return 403;
>         server_name example.com;
>     }
>     server {
>         listen 9090;
>         return 404;
>         server_name example.com;
>     }

> nginx: [warn] conflicting server name "example.com" on 0.0.0.0:9090, ignored
> nginx: [warn] conflicting server name "example.com" on 0.0.0.0:9090, ignored

Afaik, the only reply you would be able to get from such a configuration is '404'.

On 13.02.2017 10:13, Reinis Rozitis wrote:
> > I observe that nginx runs with no error if there are duplicate listen ports configured in the http server block or stream server block.
> > is this behavior as expected?
>
> That is how every webserver capable of name based virtual hosts works.
> So yes, it's normal and expected.
>
> > and if a request comes at such a port, which server would serve this request, randomly or round-robin?
> http://nginx.org/en/docs/http/request_processing.html
>
> rr

------------------------------

End of nginx Digest, Vol 88, Issue 18
*************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From he.hailong5 at zte.com.cn Tue Feb 14 08:15:44 2017
From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn)
Date: Tue, 14 Feb 2017 16:15:44 +0800 (CST)
Subject: Re: having nginx listen the same port more than once
References: mailman.15.1486987200.51480.nginx@nginx.org0, 201702132119263054165@zte.com.cn
Message-ID: <201702141615443340650@zte.com.cn>

now I understand the duplicate listen ports configured in the http block can be used to implement virtual hosts.

but what's the purpose of allowing this in the stream block? in my practice (with 1.9.15.1), nginx will randomly select a backend to serve the tcp/udp request, which seems useless.

b.t.w., I also tried with 1.11.x today; it looks like configuring duplicate listen ports in a stream block is forbidden.

BR,

Joe

Original mail
From: <nginx-request@nginx.org>
To: <nginx@nginx.org>
Date: 13 Feb 2017, 20:00
Subject: nginx Digest, Vol 88, Issue 18

[...]
X2luZm8NCg0KICAgICAgICAgICAgZmFzdGNnaV9wYXJhbSBIVFRQUyAkbXlfaHR0cHMNCg0KICAg IGZhc3RjZ2lfcGFyYW0gU1lNRk9OWV9fQ01TX19FTkFCTEVEIGZhbHNlDQoNCiAgICAgICAgICAg IGZhc3RjZ2lfcGFyYW0gQ01TX0VOQUJMRUQgZmFsc2UNCg0KICAgICAgICAgICAgZmFzdGNnaV9i dWZmZXJfc2l6ZSAxMjhrDQoNCiAgICAgICAgICAgIGZhc3RjZ2lfYnVmZmVycyA0IDI1NmsNCg0K ICAgICAgICAgICAgZmFzdGNnaV9idXN5X2J1ZmZlcnNfc2l6ZSAyNTZrDQoNCiAgICAgICAgICAg IGFkZF9oZWFkZXIgQWNjZXNzLUNvbnRyb2wtQWxsb3ctSGVhZGVycyAiT3JpZ2luLCBYLVJlcXVl c3RlZC1XaXRoLCBDb250ZW50LVR5cGUsIEFjY2VwdCINCg0KICAgICAgICAgICAgYWRkX2hlYWRl ciBBY2Nlc3MtQ29udHJvbC1BbGxvdy1PcmlnaW4gIioiDQoNCiAgICAgICAgICAgICMgUHJldmVu dHMgVVJJcyB0aGF0IGluY2x1ZGUgdGhlIGZyb250IGNvbnRyb2xsZXIuIFRoaXMgd2lsbCA0MDQ6 DQoNCiAgICAgICAgICAgICMgaHR0cDovL2RvbWFpbi50bGQvYXBwLnBocC9zb21lLXBhdGgNCg0K ICAgICAgICAgICAgIyBSZW1vdmUgdGhlIGludGVybmFsIGRpcmVjdGl2ZSB0byBhbGxvdyBVUklz IGxpa2UgdGhpcw0KDQogICAgICAgICAgICAjaW50ZXJuYWwNCg0KICAgICAgICB9DQoNCg0KDQog ICAgICAgIGxvY2F0aW9uIH4gXi9hcHBcLnBocCgvfCQpIHsNCg0KICAgIGZhc3RjZ2lfcGFzcyB1 bml4Oi92YXIvcnVuL3BocC9waHA3LjAtZnBtLnNvY2sNCg0KICAgICAgICAgICAgZmFzdGNnaV9z cGxpdF9wYXRoX2luZm8gXiguK1wucGhwKSgvLiopJA0KDQogICAgICAgICAgICBpbmNsdWRlIGZh c3RjZ2lfcGFyYW1zDQoNCiAgICAgICAgICAgIGZhc3RjZ2lfcGFyYW0gU0NSSVBUX05BTUUgJGZh c3RjZ2lfc2NyaXB0X25hbWUNCg0KICAgICAgICAgICAgZmFzdGNnaV9wYXJhbSBTQ1JJUFRfRklM RU5BTUUgJGRvY3VtZW50X3Jvb3QkZmFzdGNnaV9zY3JpcHRfbmFtZQ0KDQogICAgICAgICAgICBm YXN0Y2dpX3BhcmFtIFBBVEhfSU5GTyAkZmFzdGNnaV9wYXRoX2luZm8NCg0KICAgICAgICAgICAg ZmFzdGNnaV9wYXJhbSBQQVRIX1RSQU5TTEFURUQgJGRvY3VtZW50X3Jvb3QkZmFzdGNnaV9wYXRo X2luZm8NCg0KICAgICAgICAgICAgZmFzdGNnaV9wYXJhbSBIVFRQUyAkbXlfaHR0cHMNCg0KICAg IGZhc3RjZ2lfcGFyYW0gU1lNRk9OWV9fQ01TX19FTkFCTEVEIGZhbHNlDQoNCiAgICAgICAgICAg IGZhc3RjZ2lfcGFyYW0gQ01TX0VOQUJMRUQgZmFsc2UNCg0KICAgICAgICAgICAgZmFzdGNnaV9i dWZmZXJfc2l6ZSAxMjhrDQoNCiAgICAgICAgICAgIGZhc3RjZ2lfYnVmZmVycyA0IDI1NmsNCg0K ICAgICAgICAgICAgZmFzdGNnaV9idXN5X2J1ZmZlcnNfc2l6ZSAyNTZrDQoNCiAgICAgICAgICAg 
IGFkZF9oZWFkZXIgQWNjZXNzLUNvbnRyb2wtQWxsb3ctSGVhZGVycyAiT3JpZ2luLCBYLVJlcXVl c3RlZC1XaXRoLCBDb250ZW50LVR5cGUsIEFjY2VwdCINCg0KICAgICAgICAgICAgYWRkX2hlYWRl ciBBY2Nlc3MtQ29udHJvbC1BbGxvdy1PcmlnaW4gIioiDQoNCg0KDQogICAgICAgICAgICAjIFBy ZXZlbnRzIFVSSXMgdGhhdCBpbmNsdWRlIHRoZSBmcm9udCBjb250cm9sbGVyLiBUaGlzIHdpbGwg NDA0Og0KDQogICAgICAgICAgICAjIGh0dHA6Ly9kb21haW4udGxkL2FwcC5waHAvc29tZS1wYXRo DQoNCiAgICAgICAgICAgICMgUmVtb3ZlIHRoZSBpbnRlcm5hbCBkaXJlY3RpdmUgdG8gYWxsb3cg VVJJcyBsaWtlIHRoaXMNCg0KICAgICAgICAgICAgaW50ZXJuYWwNCg0KICAgICAgICB9DQoNCg0K DQoNCg0KICAgIH0NCg0KDQotLS0tLS0tLS0tLS0tLSBuZXh0IHBhcnQgLS0tLS0tLS0tLS0tLS0N CkFuIEhUTUwgYXR0YWNobWVudCB3YXMgc2NydWJiZWQuLi4NClVSTDog77ycaHR0cDovL21haWxt YW4ubmdpbngub3JnL3BpcGVybWFpbC9uZ2lueC9hdHRhY2htZW50cy8yMDE3MDIxMy9iMTkxYzUy Ny9hdHRhY2htZW50LTAwMDEuaHRtbO+8ng0KDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t LS0NCg0KTWVzc2FnZTogMw0KRGF0ZTogTW9uLCAxMyBGZWIgMjAxNyAxNDo0MjoyMiArMDMwMA0K RnJvbTogIklnb3IgQS4gSXBwb2xpdG92IiDvvJxpaXBwb2xpdG92QG5naW54LmNvbe+8ng0KVG86 IG5naW54QG5naW54Lm9yZw0KU3ViamVjdDogUmU6IGhhdmluZyBuZ2lueCBsaXN0ZW4gdGhlIHNh bWUgcG9ydCBtb3JlIHRoYW4gb25jZQ0KTWVzc2FnZS1JRDog77ycYmUzOGExYmUtZDVlMi1kZTA5 LTY4OGYtNjQyMzI2YzVjYWViQG5naW54LmNvbe+8ng0KQ29udGVudC1UeXBlOiB0ZXh0L3BsYWlu IGNoYXJzZXQ9d2luZG93cy0xMjUyIGZvcm1hdD1mbG93ZWQNCg0KQXNzdW1pbmcgYSBjb25maWd1 cmF0aW9uIHdpdGggbXVsdGlwbGUgc2ltaWxhciAnbGlzdGVuJyBhbmQgDQonc2VydmVyX25hbWUn IHN0YXRlbWVudHMsIG9ubHkgdGhlIGZpcnN0IG9uZSB3aWxsIHdvcms6DQrvvJ4gICAgIHNlcnZl ciB7DQrvvJ4gICAgICAgICBsaXN0ZW4gOTA5MA0K77yeICAgICAgICAgcmV0dXJuIDQwNA0K77ye ICAgICAgICAgc2VydmVyX25hbWUgZXhhbXBsZS5jb20NCu+8niAgICAgfQ0K77yeICAgICBzZXJ2 ZXIgew0K77yeICAgICAgICAgbGlzdGVuIDkwOTANCu+8niAgICAgICAgIHJldHVybiA0MDMNCu+8 niAgICAgICAgIHNlcnZlcl9uYW1lIGV4YW1wbGUuY29tDQrvvJ4gICAgIH0NCu+8niAgICAgc2Vy dmVyIHsNCu+8niAgICAgICAgIGxpc3RlbiA5MDkwDQrvvJ4gICAgICAgICByZXR1cm4gNDAwDQrv vJ4gICAgICAgICBzZXJ2ZXJfbmFtZSBleGFtcGxlLmNvbQ0K77yeICAgICB9DQoNCu+8niBuZ2lu 
eDogW3dhcm5dIGNvbmZsaWN0aW5nIHNlcnZlciBuYW1lICJleGFtcGxlLmNvbSIgb24gMC4wLjAu MDo5MDkwLCANCu+8niBpZ25vcmVkDQrvvJ4gbmdpbng6IFt3YXJuXSBjb25mbGljdGluZyBzZXJ2 ZXIgbmFtZSAiZXhhbXBsZS5jb20iIG9uIDAuMC4wLjA6OTA5MCwgDQrvvJ4gaWdub3JlZCANCkFm YWlrLCB0aGUgb25seSByZXBseSB5b3Ugd291bGQgYmUgYWJsZSB0byBnZXQgZnJvbSBzdWNoIGNv bmZpZ3VyYXRpb24gDQppcyAnNDA0Jw0KDQpPbiAxMy4wMi4yMDE3IDEwOjEzLCBSZWluaXMgUm96 aXRpcyB3cm90ZToNCu+8nu+8niBJIG9ic2VydmUgdGhhdCB0aGUgbmdpbnggcnVucyB3aXRoIG5v IGVycm9yIGlmIHRoZXJlIGFyZSBkdXBsaWNhdGUgbGlzdGVuIHBvcnRzIGNvbmZpZ3VyZWQgaW4g dGhlIGh0dHAgc2VydmVyIGJsb2NrIG9yIHN0cmVhbSBzZXJ2ZXIgYmxvY2suDQrvvJ4gaXMgdGhp cyBiZWhhdmlvciBhcyBleHBlY3RlZD8NCu+8ng0K77yeIFRoYXQgaXMgaG93IGV2ZXJ5IHdlYnNl cnZlciBjYXBhYmxlIG9mIG5hbWUgYmFzZWQgdmlydHVhbCBob3N0cyB3b3Jrcy4NCu+8niBTbyB5 ZXMgaXQncyBub3JtYWwgYW5kIGV4cGVjdGVkLg0K77yeDQrvvJ7vvJ4gYW5kIGlmIGEgcmVxdWVz dCBjb21lcyBhdCBzdWNoIGEgcG9ydCwgd2hpY2ggc2VydmVyIHdvdWxkIHNlcnZlIHRoaXMgcmVx dWVzdCwgYnkgcmFkb21seSBvciByb3VuZC1yb2Jpbj8NCu+8niBodHRwOi8vbmdpbngub3JnL2Vu L2RvY3MvaHR0cC9yZXF1ZXN0X3Byb2Nlc3NpbmcuaHRtbA0K77yeDQrvvJ4gcnINCu+8ng0K77ye IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQrvvJ4gbmdp bnggbWFpbGluZyBsaXN0DQrvvJ4gbmdpbnhAbmdpbngub3JnDQrvvJ4gaHR0cDovL21haWxtYW4u bmdpbngub3JnL21haWxtYW4vbGlzdGluZm8vbmdpbngNCg0KDQoNCg0KLS0tLS0tLS0tLS0tLS0t LS0tLS0tLS0tLS0tLS0tDQoNClN1YmplY3Q6IERpZ2VzdCBGb290ZXINCg0KX19fX19fX19fX19f X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCm5naW54IG1haWxpbmcgbGlzdA0K bmdpbnhAbmdpbngub3JnDQpodHRwOi8vbWFpbG1hbi5uZ2lueC5vcmcvbWFpbG1hbi9saXN0aW5m by9uZ2lueA0KDQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0NCg0KRW5kIG9mIG5naW54 IERpZ2VzdCwgVm9sIDg4LCBJc3N1ZSAxOA0KKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq KioqKioqKg== -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From vl at nginx.com  Tue Feb 14 09:03:51 2017
From: vl at nginx.com (Vladimir Homutov)
Date: Tue, 14 Feb 2017 12:03:51 +0300
Subject: Re: Reply: Re: having nginx listen the same port more than once
In-Reply-To: <201702141609101670243@zte.com.cn>
References: <201702141609101670243@zte.com.cn>
Message-ID: <20170214090350.GA16441@vlpc.nginx.com>

On Tue, Feb 14, 2017 at 04:09:10PM +0800, he.hailong5 at zte.com.cn wrote:
> now I understand the duplicate listen ports configured in the http
> block can be used to implement virtual hosts.
>
> but what's the purpose to allow this in the stream block? in my
> practice (with 1.9.15.1), nginx will randomly select a backend to
> serve the tcp/udp request which seems useless.

yes, in stream it does not make sense and was not intended to work.

> b.t.w, I also tried with 1.11.x today, looks like configuring
> duplicate listen ports in stream block is forbidden.

correct, this was fixed by this commit:
http://hg.nginx.org/nginx/rev/68854ce64ec7

From mdounin at mdounin.ru  Tue Feb 14 15:52:16 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 14 Feb 2017 18:52:16 +0300
Subject: nginx-1.11.10
Message-ID: <20170214155216.GY46625@mdounin.ru>

Changes with nginx 1.11.10    14 Feb 2017

*) Change: cache header format has been changed, previously cached responses will be invalidated.

*) Feature: support of "stale-while-revalidate" and "stale-if-error" extensions in the "Cache-Control" backend response header line.

*) Feature: the "proxy_cache_background_update", "fastcgi_cache_background_update", "scgi_cache_background_update", and "uwsgi_cache_background_update" directives.

*) Feature: nginx is now able to cache responses with the "Vary" header line up to 128 characters long (instead of 42 characters in previous versions).

*) Feature: the "build" parameter of the "server_tokens" directive. Thanks to Tom Thorogood.
*) Bugfix: "[crit] SSL_write() failed" messages might appear in logs when handling requests with the "Expect: 100-continue" request header line. *) Bugfix: the ngx_http_slice_module did not work in named locations. *) Bugfix: a segmentation fault might occur in a worker process when using AIO after an "X-Accel-Redirect" redirection. *) Bugfix: reduced memory consumption for long-lived requests using gzipping. -- Maxim Dounin http://nginx.org/ From ebaystardust at gmail.com Tue Feb 14 19:10:13 2017 From: ebaystardust at gmail.com (Ebayer Ebayer) Date: Tue, 14 Feb 2017 19:10:13 +0000 Subject: How to cache static files under root /var/www/html/images Message-ID: Hi, I have Nginx running as a webserver (not as proxy). I need to cache static files that are under /var/www/html/images in memory. What's the simplest way to do this? Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 14 19:14:39 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Feb 2017 19:14:39 +0000 Subject: How to cache static files under root /var/www/html/images In-Reply-To: References: Message-ID: <20170214191439.GQ2958@daoine.org> On Tue, Feb 14, 2017 at 07:10:13PM +0000, Ebayer Ebayer wrote: Hi there, > I have Nginx running as a webserver (not as proxy). I need to cache static > files that are under /var/www/html/images in memory. What's the simplest > way to do this? Don't do anything special in nginx. Make sure that you use an operating system which has decent file caching, and make sure that you have enough memory that the operating system can cache these files. 
f
-- 
Francis Daly        francis at daoine.org

From rainer at ultra-secure.de  Tue Feb 14 19:16:24 2017
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Tue, 14 Feb 2017 20:16:24 +0100
Subject: How to cache static files under root /var/www/html/images
In-Reply-To: 
References: 
Message-ID: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de>

> On 14.02.2017 at 20:10, Ebayer Ebayer wrote:
>
> Hi,
>
> I have Nginx running as a webserver (not as proxy). I need to cache static files that are under /var/www/html/images in memory. What's the simplest way to do this?

Your OS does that for you.

That's why it does not make sense to cache it again and why there's no point in running varnish on local files.

AFAIK, you can get a speed increase if you have either FreeBSD+ZFS+AIO compiled into NGINX or FreeBSD 11 + nginx 1.11, which uses a special sendfile implementation to maximize delivery speed of local files.

https://www.nginx.com/blog/nginx-and-netflix-contribute-new-sendfile2-to-freebsd/

Also take a look at https://calomel.org/nginx.html

Rainer
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eric.cox at kroger.com  Tue Feb 14 20:08:22 2017
From: eric.cox at kroger.com (Cox, Eric S)
Date: Tue, 14 Feb 2017 20:08:22 +0000
Subject: Including Multiple Server Blocks via wildcard
Message-ID: <74A4D440E25E6843BC8E324E67BB3E39456BEBD1@N060XBOXP38.kroger.com>

In my main nginx.conf file I am doing an include for various files to include multiple server blocks (1 block per file)...

If I use a wildcard include the https servers break but the http server is fine...

Example:

include /servers/*;

this would include 3 server blocks
1 http
2 https

If I include each file specifically the servers work fine (including both https server blocks)

ANY IDEA WHY THIS WOULD BE? The server starts up fine but I just can't connect to the HTTPS endpoints (timeout)...
Example:

include /servers/server1;
include /servers/server2;
include /servers/server3;

Each server file contains a whole server block.

Example:

server {
    listen 443 ssl backlog=2048;
    server_name server1.domain.com;
    ssl_certificate /test.crt;
    ssl_certificate_key /test.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2;
    location / {
        root /html;
    }
    include /locations/*_conf;
    status_zone https_server1;
}

Thanks,
Eric

________________________________
This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ebaystardust at gmail.com  Tue Feb 14 20:25:23 2017
From: ebaystardust at gmail.com (Ebayer Ebayer)
Date: Tue, 14 Feb 2017 20:25:23 +0000
Subject: How to cache static files under root /var/www/html/images
In-Reply-To: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de>
References: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de>
Message-ID: 

Is there a more deterministic way besides fully trusting the MMU? I really don't think the MMU will execute well on what I'm setting out to accomplish. Some more info:

* I run Linux 2.6.32 (RH's)
* I don't trust /dev/shm as a memory store
* I want the kernel to keep files cached for a predetermined length of time Xmns
* Don't want to think too hard about how the MMU evicts pages and how that affects caching exactly

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rainer at ultra-secure.de  Tue Feb 14 20:30:32 2017
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Tue, 14 Feb 2017 21:30:32 +0100
Subject: How to cache static files under root /var/www/html/images
In-Reply-To: 
References: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de>
Message-ID: <4D57E36B-A6F5-4184-8AFB-F1712FE79E47@ultra-secure.de>

> On 14.02.2017 at 21:25, Ebayer Ebayer wrote:
>
> Is there a more deterministic way besides fully trusting the MMU? I really don't think the MMU will execute well on what I'm setting out to accomplish. Some more info:
>
> * I run Linux 2.6.32 (RH's)
> * I don't trust /dev/shm as a memory store
> * I want the kernel to keep files cached for a predetermined length of time Xmns
> * Don't want to think too hard about how the MMU evicts pages and how that affects caching exactly

You are overthinking this problem.

Get a better OS if you don't trust your current one.

How large is your dataset?
How much of that is "hot"?
What's your specific use-case?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eric.cox at kroger.com  Tue Feb 14 20:39:59 2017
From: eric.cox at kroger.com (Cox, Eric S)
Date: Tue, 14 Feb 2017 20:39:59 +0000
Subject: Including Multiple Server Blocks via wildcard
In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E39456BEBD1@N060XBOXP38.kroger.com>
References: <74A4D440E25E6843BC8E324E67BB3E39456BEBD1@N060XBOXP38.kroger.com>
Message-ID: <74A4D440E25E6843BC8E324E67BB3E39456BED10@N060XBOXP38.kroger.com>

It appears it had nothing to do with the includes but what I had in my server blocks. If I put a particular https server include above another it broke one. I made 1 of the https server blocks as the default.
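For context on that last point: when several server blocks listen on the same port, nginx treats the first one it reads as the implicit default for that port, so a wildcard include makes the TLS default depend on file ordering. Pinning the intended default explicitly (a minimal sketch reusing the server name from the example above) removes that dependency:

```nginx
# In exactly one of the included files, mark the block as the
# explicit default for port 443 instead of relying on include order.
server {
    listen 443 ssl backlog=2048 default_server;
    server_name server1.domain.com;
    # ... certificates and locations as before ...
}
```

With this in place, the order in which `include /servers/*;` picks up the files no longer decides which certificate is offered to clients that do not send SNI.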
From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Cox, Eric S
Sent: Tuesday, February 14, 2017 3:08 PM
To: nginx at nginx.org
Subject: Including Multiple Server Blocks via wildcard

In my main nginx.conf file I am doing an include for various files to include multiple server blocks (1 block per file)...

If I use a wildcard include the https servers break but the http server is fine...

Example:

include /servers/*;

this would include 3 server blocks
1 http
2 https

If I include each file specifically the servers work fine (including both https server blocks)

ANY IDEA WHY THIS WOULD BE? The server starts up fine but I just can't connect to the HTTPS endpoints (timeout)...

Example:

include /servers/server1;
include /servers/server2;
include /servers/server3;

Each server file contains a whole server block.

Example:

server {
    listen 443 ssl backlog=2048;
    server_name server1.domain.com;
    ssl_certificate /test.crt;
    ssl_certificate_key /test.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2;
    location / {
        root /html;
    }
    include /locations/*_conf;
    status_zone https_server1;
}

Thanks,
Eric

________________________________
This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ebaystardust at gmail.com  Tue Feb 14 21:07:46 2017
From: ebaystardust at gmail.com (Ebayer Ebayer)
Date: Tue, 14 Feb 2017 21:07:46 +0000
Subject: How to cache static files under root /var/www/html/images
In-Reply-To: <4D57E36B-A6F5-4184-8AFB-F1712FE79E47@ultra-secure.de>
References: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de> <4D57E36B-A6F5-4184-8AFB-F1712FE79E47@ultra-secure.de>
Message-ID: 

I want to cache critical files indefinitely regardless of them being hot or stale until they're purged (by the app).

Thanks

On Feb 15, 2017 4:30 AM, "Rainer Duffner" wrote:

> On 14.02.2017 at 21:25, Ebayer Ebayer wrote:
>
> Is there a more deterministic way besides fully trusting the MMU? I really don't think the MMU will execute well on what I'm setting out to accomplish. Some more info:
>
> * I run Linux 2.6.32 (RH's)
> * I don't trust /dev/shm as a memory store
> * I want the kernel to keep files cached for a predetermined length of time Xmns
> * Don't want to think too hard about how the MMU evicts pages and how that affects caching exactly

You are overthinking this problem.

Get a better OS if you don't trust your current one.

How large is your dataset?
How much of that is "hot"?
What's your specific use-case?
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org  Tue Feb 14 21:40:45 2017
From: francis at daoine.org (Francis Daly)
Date: Tue, 14 Feb 2017 21:40:45 +0000
Subject: How to cache static files under root /var/www/html/images
In-Reply-To: 
References: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de> <4D57E36B-A6F5-4184-8AFB-F1712FE79E47@ultra-secure.de>
Message-ID: <20170214214045.GS2958@daoine.org>

On Tue, Feb 14, 2017 at 09:07:46PM +0000, Ebayer Ebayer wrote:

Hi there,

> I want to cache critical files indefinitely regardless of them being hot or
> stale until they're purged (by the app).

It still doesn't sound like a task for nginx to me.

If you want your OS file-cache to do busy-work to keep these files "hot", do just that:

while :; do
  find /var/www/html/images -type f -exec cat \{} > /dev/null +
  sleep 10
done

I don't recommend it; but it's not my server.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From rainer at ultra-secure.de  Tue Feb 14 21:45:57 2017
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Tue, 14 Feb 2017 22:45:57 +0100
Subject: How to cache static files under root /var/www/html/images
In-Reply-To: 
References: <23CF55D0-E84D-42F0-9597-400C0B20C19E@ultra-secure.de> <4D57E36B-A6F5-4184-8AFB-F1712FE79E47@ultra-secure.de>
Message-ID: <94E066BB-8EA5-4FC4-BFE7-9547A7618F0A@ultra-secure.de>

> On 14.02.2017 at 22:07, Ebayer Ebayer wrote:
>
> I want to cache critical files indefinitely regardless of them being hot or stale until they're purged (by the app).

If you have enough RAM, they will stay cached.

Do you also want to do the memory-management of your apps, allocating RAM to them as you see fit? Or for the cache in your HD?

This problem has been solved, trust me (and others, and your OS).
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Wed Feb 15 13:23:28 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Feb 2017 16:23:28 +0300 Subject: input required on proxy_next_upstream In-Reply-To: References: Message-ID: <20170215132328.GG46625@mdounin.ru> Hello! On Wed, Feb 15, 2017 at 01:27:53PM +0530, Kaustubh Deorukhkar wrote: > We are using nginx as reverse proxy and have a set of upstream servers > configured > with upstream next enabled for few error conditions to try next upstream > server. > For some reason this is not working. Can someone suggest if am missing > something? [...] This question looks irrelevant to the nginx-devel@ mailing list. Please keep user-level questions in the nginx@ mailing list. Thank you. -- Maxim Dounin http://nginx.org/ From anoopalias01 at gmail.com Wed Feb 15 13:27:00 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 15 Feb 2017 18:57:00 +0530 Subject: nginx-1.11.10 In-Reply-To: <20170214155216.GY46625@mdounin.ru> References: <20170214155216.GY46625@mdounin.ru> Message-ID: *_cache_background_update - What does it do? On Tue, Feb 14, 2017 at 9:22 PM, Maxim Dounin wrote: > Changes with nginx 1.11.10 14 Feb > 2017 > > *) Change: cache header format has been changed, previously cached > responses will be invalidated. > > *) Feature: support of "stale-while-revalidate" and "stale-if-error" > extensions in the "Cache-Control" backend response header line. > > *) Feature: the "proxy_cache_background_update", > "fastcgi_cache_background_update", "scgi_cache_background_update", > and "uwsgi_cache_background_update" directives. > > *) Feature: nginx is now able to cache responses with the "Vary" header > line up to 128 characters long (instead of 42 characters in previous > versions). > > *) Feature: the "build" parameter of the "server_tokens" directive. > Thanks to Tom Thorogood. 
>
> *) Bugfix: "[crit] SSL_write() failed" messages might appear in logs
>    when handling requests with the "Expect: 100-continue" request header
>    line.
>
> *) Bugfix: the ngx_http_slice_module did not work in named locations.
>
> *) Bugfix: a segmentation fault might occur in a worker process when
>    using AIO after an "X-Accel-Redirect" redirection.
>
> *) Bugfix: reduced memory consumption for long-lived requests using
>    gzipping.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
*Anoop P Alias*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ruz at sports.ru  Wed Feb 15 16:37:27 2017
From: ruz at sports.ru (Ruslan Zakirov)
Date: Wed, 15 Feb 2017 19:37:27 +0300
Subject: nginx-1.11.10
In-Reply-To: <20170214155216.GY46625@mdounin.ru>
References: <20170214155216.GY46625@mdounin.ru>
Message-ID: 

On Tue, Feb 14, 2017 at 6:52 PM, Maxim Dounin wrote:

> *) Feature: the "proxy_cache_background_update",
>    "fastcgi_cache_background_update", "scgi_cache_background_update",
>    and "uwsgi_cache_background_update" directives.

Sounds so sexy :) Blog post? docs?

-- 
Ruslan Zakirov
+7(916) 597-92-69, ruz @
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ruz at sports.ru  Wed Feb 15 16:38:04 2017
From: ruz at sports.ru (Ruslan Zakirov)
Date: Wed, 15 Feb 2017 19:38:04 +0300
Subject: nginx-1.11.10
In-Reply-To: <20170214155216.GY46625@mdounin.ru>
References: <20170214155216.GY46625@mdounin.ru>
Message-ID: 

On Tue, Feb 14, 2017 at 6:52 PM, Maxim Dounin wrote:

> *) Bugfix: a segmentation fault might occur in a worker process when
>    using AIO after an "X-Accel-Redirect" redirection.

Thank you. Been bothering us a lot...

-- 
Ruslan Zakirov
+7(916) 597-92-69, ruz @
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kaustubh.deo at gmail.com  Wed Feb 15 17:17:56 2017
From: kaustubh.deo at gmail.com (Kaustubh Deorukhkar)
Date: Wed, 15 Feb 2017 22:47:56 +0530
Subject: input required on proxy_next_upstream
Message-ID: 

Hi,

We are using nginx as reverse proxy and have a set of upstream servers configured, with upstream next enabled for a few error conditions to try the next upstream server. For some reason this is not working. Can someone suggest if I am missing something?

http {
    ...
    upstream myservice {
        server localhost:8081;
        server localhost:8082;
    }

    server {
        ...
        location / {
            proxy_pass http://myservice;
            proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
        }
    }
}

So what I want is: if any upstream server gives the above errors, it should try the next upstream instance, but it does not and just reports the error to clients.

Note that, in my case, one of the upstream servers responds early for some PUT requests with 503 before the entire request is read by the upstream. I understand that nginx closes the current upstream connection where it received the early response, but I expect it to try the next upstream server as configured for the same request before it responds with an error to the client.

Am I missing some nginx trick here?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bdm at mgb-tech.com  Wed Feb 15 20:38:10 2017
From: bdm at mgb-tech.com (Bruno De Maesschalck)
Date: Wed, 15 Feb 2017 20:38:10 +0000
Subject: reverse proxy with TLS-PSK
In-Reply-To: 
References: 
Message-ID: 

Hey,

For our application we want embedded devices to access backend websocket/http services through nginx with TLS/SSL. The embedded devices are very resource constrained and would benefit from using TLS-PSK. My question is: does nginx support reverse proxy and using TLS-PSK to secure incoming connections?

From what I understand nginx uses openssl and that supports TLS-PSK. If nginx supports this, could you give me a hint on how to configure it?

Thank you in advance.

Bruno De Maesschalck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Wed Feb 15 21:58:27 2017
From: nginx-forum at forum.nginx.org (nrahl)
Date: Wed, 15 Feb 2017 16:58:27 -0500
Subject: Client certificate fails with "unsupported certificate purpose" from iPad, works in desktop browsers
Message-ID: <3f3b97eb181714bc93e642b272fc366d.NginxMailingListEnglish@forum.nginx.org>

We have client certificates set up and working for desktop browsers, but when using the same certificates that work on the desktop browser from an iPad, we get a "400: The SSL certificate error" in the browser, and the following in the log:

"18205#18205: *11 client SSL certificate verify error: (26:unsupported certificate purpose) while reading client request headers, client"

"openssl x509 -purpose" for the cert used to create the pkcs12 file is:

Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : Yes
S/MIME signing CA : No
S/MIME encryption : Yes
S/MIME encryption CA : No
CRL signing : Yes
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No

Which appears to be the correct purpose, and it does work in regular browsers.

We have a CA, and intermediate CA to sign the client certs and then the client cert itself. The command used to create the pkcs file is:

openssl pkcs12 -export -out file.pk12 -inkey file.key -in file.crt -certfile ca.comb -nodes -passout pass:mypassword

Where ca.comb is the file specified in the ssl_client_certificate directive, which contains the public certificates for the CA, and the intermediary CA.
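One detail the `-purpose` summary above does not show directly is the X509v3 Extended Key Usage extension, and error 26 (X509_V_ERR_INVALID_PURPOSE) is what OpenSSL reports when a client certificate lacks an acceptable purpose for the verification being performed. A hedged way to inspect that extension is to dump the certificate text; the sketch below generates a throwaway certificate first so it is self-contained (the file names are examples only, and `-addext` assumes OpenSSL 1.1.1 or newer):

```shell
# Create a throwaway client certificate carrying the clientAuth EKU
# (assumes OpenSSL >= 1.1.1 for -addext; names are examples only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=throwaway-client" \
    -addext "extendedKeyUsage=clientAuth" \
    -keyout throwaway.key -out throwaway.crt 2>/dev/null

# Inspect the extension; "TLS Web Client Authentication" is the
# purpose a client certificate needs for the client-auth check.
openssl x509 -in throwaway.crt -noout -text | grep -A1 "Extended Key Usage"
```

Running the same `grep` against the real client certificate, and against what the iPad actually presents, would show whether the two paths send certificates with the same purposes.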
Since this works fine on desktop browsers, I'm not sure what to check. How can I figure out what is going wrong?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272444,272444#msg-272444

From alexc at sbrella.com  Thu Feb 16 03:59:54 2017
From: alexc at sbrella.com (alexc at sbrella.com)
Date: Thu, 16 Feb 2017 11:59:54 +0800
Subject: potential null dereference
Message-ID: <201702161159538821041@sbrella.com>

Hi,

In file /src/http/ngx_http_upstream.c, function ngx_http_upstream_finalize_request: if u->pipe == NULL, ngx_http_file_cache_free(r->cache, u->pipe->temp_file) will dereference a null pointer. Is that right?

Regards,
Alex

    if (u->store && u->pipe && u->pipe->temp_file
        && u->pipe->temp_file->file.fd != NGX_INVALID_FILE)
    {
        if (ngx_delete_file(u->pipe->temp_file->file.name.data)
            == NGX_FILE_ERROR)
        {
            ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno,
                          ngx_delete_file_n " \"%s\" failed",
                          u->pipe->temp_file->file.name.data);
        }
    }

#if (NGX_HTTP_CACHE)

    if (r->cache) {

        ......

        ngx_http_file_cache_free(r->cache, u->pipe->temp_file);
    }

alexc at sbrella.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Thu Feb 16 05:40:49 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Thu, 16 Feb 2017 00:40:49 -0500
Subject: swapiness value to be set for high load nginx server
Message-ID: <2f8e05b46a3c70b643e479b79f03312c.NginxMailingListEnglish@forum.nginx.org>

Hi,

We are using nginx/1.10.2 as web server on centos and redhat Linux 7.2 OS. We are seeing our swap fully utilized even though we have free memory; vm.swappiness is kept at the default, i.e. 60. Below are memory details for your reference:

# free -g
              total        used        free      shared  buff/cache   available
Mem:           1511          32           3         361        1475        1091
Swap:           199         199           0

Please suggest a method by which we can avoid the swap-full scenario. Also let us know what the vm.swappiness value should be for a high load nginx web server.
Also let us know any other sysctl parameters to improve nginx performance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272446,272446#msg-272446 From dewanggaba at xtremenitro.org Thu Feb 16 06:08:35 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 16 Feb 2017 13:08:35 +0700 Subject: Question about proxy_cache_key Message-ID: <1390bd2f-1fd3-6878-2901-0605a17c907c@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! I've compiled latest nginx 1.11.10 with ngx_cache_purge, my configurations likes: proxy_cache_key "$uri$is_args$args"; proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=networksninja_cache:60m inactive=60m use_temp_path=off max_size=8192m; And location syntax is : location / { proxy_pass http://10.8.0.10:80; proxy_cache networksninja_cache; proxy_cache_purge PURGE; proxy_cache_use_stale error timeout updating http_500 http_503 http_504; proxy_cache_valid 200 302 5m; proxy_cache_valid any 3s; proxy_cache_lock on; proxy_cache_revalidate on; proxy_cache_min_uses 1; proxy_cache_bypass $arg_clear; proxy_no_cache $arg_clear; proxy_ignore_headers Cache-Control Expires Set-Cookie; rewrite (?.*) $capt?$args&view=$mobile break; } My question is, the URI https://subdomain.domain.tld/artikel/berita/a-pen-by-hair accessed from another browser (eg. Safari) and cache status was HIT, then I tried to invoke `PURGE` command from shell using cURL, it was failed and return cache_key not found But, if the URI accessed from shell using cURL and PURGE command was invoke from shell too, the cache_key found, but returning only like this : Successful purge

Successful purge


Key : /artikel/berita/a-pen-by-hair&view=desktop?
Path: /var/cache/nginx/proxy_cache/a/5e/7527d90390275ac034d4a3d5b2e485ea

nginx/1.11.10
The following argument "&view=desktop?" should be follow. Is there any hints to match the proxy_cache_key? So I can purge the cache anywhere and anytime? -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYpUHfGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcCz/D/wJqeJrn7vkxh2Nm7ZFAtBiNB7GL1oS3il7m7Rx7+rP9gZWVpOm zFqap7Nv+qzZ96319soTxGDxB3enGKbkP9Rr8J6ica3X3p4vG1rUryxnQX5cbV77 E6ikNckkFXK26MLWnbXHee8YrLNUjhePpqPZYpSvIMWpumTH2XtY3+EYRWlDFWJY 6tOqTTsz6nkvgXvcnrAvPl1oHfysm2Lzc773sd0uWxE/ue4DHQleKNVzG67tXxNF YFQCPp3Fa3qlK8F3s3jf/tgw/uZ6gwmDn4/0z6WiqIQ8HGxyEYdhxFr+1lJaWjG9 j6iYFjf/stpC9EHTvLH3NDvv0MSR38eeIvTB5SYiI/yGwsx+I+izHZn26Z1vYUeh QNkZhvjSpd6truliYQU6ftBR796LspM8LdoQuLB3z2Swg2BD1SB1vL3/Cm9Lwwoe MY3ghSKiVHyV9adySDCK9MWK/g77Cq3GUIXBW3uWFyBPTgqE84kRibw6tHI4Edaf r2gZPqT0N0qBr4cs6/6Q+84PNFCjQtr2lIB7UzNXV7leZRU5gax6rJN2wE6KemD5 pHRdocVM8QDKW4XJRBAjrdgYMsQu3DBLffsxFCCR75HIOnrxKxEWn4mk2qKtDTVO +lsSxancZ77VU2aZvh7znZ7dUy7eQQ6oA3OtR19h4yCccDbq5UrJxQLTeQ== =x9J2 -----END PGP SIGNATURE----- From dewanggaba at xtremenitro.org Thu Feb 16 06:58:12 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 16 Feb 2017 13:58:12 +0700 Subject: Question about proxy_cache_key In-Reply-To: <1390bd2f-1fd3-6878-2901-0605a17c907c@xtremenitro.org> References: <1390bd2f-1fd3-6878-2901-0605a17c907c@xtremenitro.org> Message-ID: <9080bbf9-b423-467f-f0d8-f26d844cb404@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Example: [dominique at galea ~]$ curl -I "http://networksninja/artikel/berita/a-pen-by-hair" HTTP/1.1 200 OK Server: networksninja Date: Thu, 16 Feb 2017 06:48:13 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Vary: Accept-Encoding Vary: Accept-Encoding Set-Cookie: PHPSESSID=ddkfbo9jd6bhkhhbknkd4pq854; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Cache-Control: no-cache X-Cache: MISS [dominique at galea ~]$ curl -I 
"http://networksninja/artikel/berita/a-pen-by-hair" HTTP/1.1 200 OK Server: networksninja Date: Thu, 16 Feb 2017 06:48:14 GMT Content-Type: text/html; charset=UTF-8 Connection: keep-alive Vary: Accept-Encoding Vary: Accept-Encoding Set-Cookie: PHPSESSID=ddkfbo9jd6bhkhhbknkd4pq854; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Cache-Control: no-cache X-Cache: HIT [dominique at galea ~]$ curl -X PURGE "http://networksninja/artikel/berita/a-pen-by-hair" Successful purge

Successful purge


Key : /artikel/berita/a-pen-by-hair?view=desktop
Path: /var/cache/nginx/networksninja_cache/f/bf/f3863443c164cdfa95f6fe870be7dbff

nginx/1.11.10
On 02/16/2017 01:08 PM, Dewangga Bachrul Alam wrote: > Hello! > > I've compiled latest nginx 1.11.10 with ngx_cache_purge, my > configurations likes: > > proxy_cache_key "$uri$is_args$args"; proxy_cache_path > /var/cache/nginx/proxy_cache levels=1:2 > keys_zone=networksninja_cache:60m inactive=60m use_temp_path=off > max_size=8192m; > > And location syntax is : > > location / { proxy_pass http://10.8.0.10:80; proxy_cache > networksninja_cache; proxy_cache_purge PURGE; > proxy_cache_use_stale error timeout updating http_500 http_503 > http_504; proxy_cache_valid 200 302 5m; proxy_cache_valid any 3s; > proxy_cache_lock on; proxy_cache_revalidate on; > proxy_cache_min_uses 1; proxy_cache_bypass $arg_clear; > proxy_no_cache $arg_clear; proxy_ignore_headers > Cache-Control Expires Set-Cookie; rewrite (?.*) > $capt?$args&view=$mobile break; } > > My question is, the URI > https://subdomain.domain.tld/artikel/berita/a-pen-by-hair accessed > from another browser (eg. Safari) and cache status was HIT, then I > tried to invoke `PURGE` command from shell using cURL, it was > failed and return cache_key not found > > But, if the URI accessed from shell using cURL and PURGE command > was invoke from shell too, the cache_key found, but returning only > like this : > > Successful purge bgcolor="white">

> Successful purge
>
> Key : /artikel/berita/a-pen-by-hair&view=desktop?
> Path: /var/cache/nginx/proxy_cache/a/5e/7527d90390275ac034d4a3d5b2e485ea
>
> nginx/1.11.10
> > The following argument "&view=desktop?" should be follow. Is there > any hints to match the proxy_cache_key? So I can purge the cache > anywhere and anytime? > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYpU2BGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcMLYD/9roaDzgiqMkISbOVt/upKenciszjrjAs1/1UgKB8R82+gTTRV6 UHburcH4Y90NhXL+8jUscHI1Ln3ehj0UL6SubWkcqtR5ezroE9paj/qh6yxnn4UA clc7SH8rKYwWCm/r7BfN8IvBVVYqLuD/m3EHj50QX3Y0g6g/dE92ij5wKKBXOR1B Wf+jSl66zHWZtBDbcvK60FUYHKEGkA94scbQL13v+jd+EUNG+ULMK0usu0CLFYKx +VeDcfoTO5cqPGRupi3EvMK6cAGde1OwnHQdALZZEaTbme+JXNMUJYK/mh4f2duU 4BYhJbdR2Dj1OQavDFOyOGXv+Ui3AI85BV3m0CqhqJO383lpwAvAnzV+xGjSkjbA 7alpO8d/hDQfR4/JQ6gFnwGD/S3t6bre9I6w0kjHvWVNRq4AXL0fYs3JL9X+oTno PNEH8puGC2miNIGbh8vFlznpfVJh7m1GxyyDYffuQyygt3fxSyu/zzDaRz4+RMNY wbiCfRC7vlBI8UVY0kpS1Nz4e7EHBk33TWuxLHLRtyJqgEWdRaMprj3rKGBj17RU VzhTj/fcjiqwB565uWPcKHIG7eMXpygMwf05guTbxKuXiUUK3MBXOWpwyN8EPqIi 6HX59SmfHmE74Fl7AOcrB+cyRJtJODnsECKwIyNEpppRzhMr+gsrwVsJBQ== =xdSV -----END PGP SIGNATURE----- From reallfqq-nginx at yahoo.fr Thu Feb 16 08:43:57 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 16 Feb 2017 09:43:57 +0100 Subject: potential null dereference In-Reply-To: <201702161159538821041@sbrella.com> References: <201702161159538821041@sbrella.com> Message-ID: If you think you spot a bug, You'd better open a ticket on Trac . You could also talk about development matters on the nginx-devel Mailling List. This is the 'users' ML, centered on use cases/configuration/help with/discussions around the software. --- *B. R.* On Thu, Feb 16, 2017 at 4:59 AM, alexc at sbrella.com wrote: > Hi? > > In file /src/http/ngx_http_upstream.c, function > ngx_http_upstream_finalize_request > > > // if u->pipe == NULL, ngx_http_file_cache_free(r->cache, > u->pipe->temp_file); will dereference a null pointer, it's that right ? 
> > // Regards > // Alex > > if (u->store && u->pipe && u->pipe->temp_file > && u->pipe->temp_file->file.fd != NGX_INVALID_FILE) > { > if (ngx_delete_file(u->pipe->temp_file->file.name.data) > == NGX_FILE_ERROR) > { > ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno, > ngx_delete_file_n " \"%s\" failed", > u->pipe->temp_file->file.name.data); > } > } > > #if (NGX_HTTP_CACHE) > > if (r->cache) { > > ...... > > ngx_http_file_cache_free(r->cache, u->pipe->temp_file); > } > > > ------------------------------ > alexc at sbrella.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Feb 16 12:41:04 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 16 Feb 2017 12:41:04 +0000 Subject: Question about proxy_cache_key In-Reply-To: <1390bd2f-1fd3-6878-2901-0605a17c907c@xtremenitro.org> References: <1390bd2f-1fd3-6878-2901-0605a17c907c@xtremenitro.org> Message-ID: <20170216124104.GT2958@daoine.org> On Thu, Feb 16, 2017 at 01:08:35PM +0700, Dewangga Bachrul Alam wrote: Hi there, > proxy_cache_key "$uri$is_args$args"; > > location / { > proxy_ignore_headers Cache-Control Expires Set-Cookie; > rewrite (?.*) $capt?$args&view=$mobile break; > } > My question is, the URI > https://subdomain.domain.tld/artikel/berita/a-pen-by-hair accessed > from another browser (eg. Safari) and cache status was HIT, then I > tried to invoke `PURGE` command from shell using cURL, it was failed > and return cache_key not found I do not know the "fix"; but I suspect that the *reason* is due to the Vary header that is sent in the response. If you know that "Vary" is not relevant in your case, then possibly temporarily adding it to proxy_ignore_headers will let you see whether that is related. > The following argument "&view=desktop?" should be follow. 
Is there any > hints to match the proxy_cache_key? So I can purge the cache anywhere > and anytime? I guess that the "view=desktop" part comes from your rewrite directive; it may take more documentation-digging to find the actual cache key that is used internally in nginx, so that the purge facility can work with it. (Or possibly the ngx_cache_purge module could be updated to work transparently with the current nginx cache key behaviour -- I do not know whether that is possible.) Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Feb 16 13:20:02 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Feb 2017 16:20:02 +0300 Subject: potential null dereference In-Reply-To: <201702161159538821041@sbrella.com> References: <201702161159538821041@sbrella.com> Message-ID: <20170216132002.GJ46625@mdounin.ru> Hello! On Thu, Feb 16, 2017 at 11:59:54AM +0800, alexc at sbrella.com wrote: > In file /src/http/ngx_http_upstream.c, function > ngx_http_upstream_finalize_request > > > // if u->pipe == NULL, ngx_http_file_cache_free(r->cache, > u->pipe->temp_file); will dereference a null pointer, it's that > right ? Sure. _If_ u->pipe == NULL, ngx_http_file_cache_free() will dereference a null pointer. But u->pipe == NULL with r->cache != NULL means a bug elsewhere. As already suggested, the nginx-devel@ mailing list may be a better place for such questions. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Feb 16 13:26:35 2017 From: nginx-forum at forum.nginx.org (epoch1) Date: Thu, 16 Feb 2017 08:26:35 -0500 Subject: Proxying and static files Message-ID: <6acabfb909c12b183c72a2207702bd60.NginxMailingListEnglish@forum.nginx.org> Hi I have a number of apps running behind nginx and I want to configure nginx so that is serves static content (js and css files) from the pulic directory of each app directly rather than proxying these requests, for example: myserver.com/app1 dynamic requests proxied to hypnotoad (perl server) listening on http://*:5000, css/js files served directly by Nginx (example path app1/public/js/script.js) myserver.com/app2 dynamic requests proxied to hypnotoad (perl server) listening on http://*:5001, css/js files served directly by Nginx (example path app2/public/js/script.js) myserver.com/appN dynamic requests proxied to hypnotoad (perl server) listening on http://*:5..N, css/js files served directly by Nginx (example path app3/public/js/script.js) My conf file: server { listen 80 default_server; listen [::]:80 default_server; # Root for stuff like default index.html root /var/www/html; location /app1 { proxy_pass http://127.0.0.1:5000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Host $host:$server_port; } location /app2 { proxy_pass http://127.0.0.1:5001; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Host $host:$server_port; } location /appN {...} } I've tried something like the following but can't get it work for each app: location ~* 
/(images|css|js|files)/ { root /home/username/app1/public/; } If I request app1/js/script.js for example it goes to /home/username/app1/public/app1/js/script.js rather than /home/username/app1/public/js/script.js How can I get this to work for each app? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272457,272457#msg-272457 From mdounin at mdounin.ru Thu Feb 16 14:29:14 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Feb 2017 17:29:14 +0300 Subject: reverse proxy with TLS-PSK In-Reply-To: References: Message-ID: <20170216142914.GM46625@mdounin.ru> Hello! On Wed, Feb 15, 2017 at 08:38:10PM +0000, Bruno De Maesschalck wrote: > Hey, > > For our application we want embedded devices to access backend websocket/http services through nginx with TLS/SSL. > The embedded devices are very resource constraint and would benefit from using TLS-PSK. > > My question is does nginx support reverse proxy and using TLS-PSK to secure incoming connections? > From what I understand nginx uses openssl and that supports TLS-PSK. > If nginx supports this could you give me a hint in how to configure this? No, nginx doesn't support TLS-PSK. -- Maxim Dounin http://nginx.org/ From yar at nginx.com Thu Feb 16 17:56:45 2017 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Thu, 16 Feb 2017 20:56:45 +0300 Subject: nginx-1.11.10 In-Reply-To: References: <20170214155216.GY46625@mdounin.ru> Message-ID: <1630EB93-6398-4EC4-B3F2-E5D6A8E70654@nginx.com> On 15 Feb 2017, at 19:37, ?????? ??????? wrote: > > On Tue, Feb 14, 2017 at 6:52 PM, Maxim Dounin wrote: > *) Feature: the "proxy_cache_background_update", > "fastcgi_cache_background_update", "scgi_cache_background_update", > and "uwsgi_cache_background_update" directives. > > Sounds so sexy :) Blog post? docs? > http://nginx.org/r/proxy_cache_background_update http://nginx.org/r/fastcgi_cache_background_update http://nginx.org/r/scgi_cache_background_update http://nginx.org/r/uwsgi_cache_background_update [...] 
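[Editor's note: for readers landing here from the changelog links above, the new background-update directives pair with serving stale content. A minimal sketch, assuming a cache zone named "demo" and placeholder addresses and timings (none of these names come from the thread):]

```nginx
proxy_cache_path /var/cache/nginx/demo keys_zone=demo:10m inactive=10m;

server {
    listen 8080;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_cache demo;
        proxy_cache_valid 200 1m;

        # serve the stale cached copy immediately, while a background
        # subrequest fetches the fresh response from the upstream;
        # "use_stale updating" is required for background updates to apply
        proxy_cache_use_stale updating;
        proxy_cache_background_update on;
    }
}
```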
From francis at daoine.org Thu Feb 16 21:38:21 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 16 Feb 2017 21:38:21 +0000 Subject: Proxying and static files In-Reply-To: <6acabfb909c12b183c72a2207702bd60.NginxMailingListEnglish@forum.nginx.org> References: <6acabfb909c12b183c72a2207702bd60.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170216213821.GU2958@daoine.org> On Thu, Feb 16, 2017 at 08:26:35AM -0500, epoch1 wrote: Hi there, > I've tried something like the following but can't get it work for each app: > location ~* /(images|css|js|files)/ { > root /home/username/app1/public/; > } > > If I request app1/js/script.js for example it goes to > /home/username/app1/public/app1/js/script.js rather than > /home/username/app1/public/js/script.js If the web request /one/two/thr.ee does not correspond to a file /docroot/one/two/thr.ee, then you probably want to use "alias" (http://nginx.org/r/alias) instead of "root". It's not clear to me what your exact pattern for recognising "static" vs "dynamic" files is; perhaps something like one of location /app1/images/ { alias /home/username/app1/public/images/; } (with similar things for the other directories); or location ~* ^/app1/(images|css|js|files)/(.*) { alias /home/username/app1/public/$1/$2; } or location ~* ^/app1/(.*.(js|css))$ { alias /home/username/app1/public/$1; } In each case, the "location"s (particularly the regex one) could be nested within the matching "main" location that you already have. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Feb 17 07:35:05 2017 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Fri, 17 Feb 2017 02:35:05 -0500 Subject: swapiness value to be set for high load nginx server In-Reply-To: <2f8e05b46a3c70b643e479b79f03312c.NginxMailingListEnglish@forum.nginx.org> References: <2f8e05b46a3c70b643e479b79f03312c.NginxMailingListEnglish@forum.nginx.org> Message-ID: Kindly assist ..
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272446,272479#msg-272479 From iippolitov at nginx.com Fri Feb 17 09:09:41 2017 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Fri, 17 Feb 2017 12:09:41 +0300 Subject: swapiness value to be set for high load nginx server In-Reply-To: <2f8e05b46a3c70b643e479b79f03312c.NginxMailingListEnglish@forum.nginx.org> References: <2f8e05b46a3c70b643e479b79f03312c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I would suggest disabling swap at all. With 1.5Tb of RAM I doubt you need any. You can try finding out what is swapped by `smem` utility. May be you can live we those files swapped out of memory. Anyway, I doubt swappiness tuning will help you. Look through: https://www.kernel.org/doc/Documentation/sysctl/vm.txt Set swappiness to 0. On 16.02.2017 08:40, omkar_jadhav_20 wrote: > Hi, > > We are using nginx/1.10.2 as web server on centos and redhat Linux 7.2 OS. > We are getting issues of getting our SWAP fully utilized even though we have > free memory , vm.swapiness is kept as default i.e. 60. > below are memory details for your reference : > > # free -g > total used free shared buff/cache > available > Mem: 1511 32 3 361 1475 > 1091 > Swap: 199 199 0 > > Please suggest method by which we can avoid swap full scenario. also let us > know what should be vm.swapiness value for high load nginx web server. Also > let us know any other sysctl parameters to improve nginx performance. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272446,272446#msg-272446 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From kaustubh.deo at gmail.com Fri Feb 17 14:08:39 2017 From: kaustubh.deo at gmail.com (Kaustubh Deorukhkar) Date: Fri, 17 Feb 2017 19:38:39 +0530 Subject: input required on proxy_next_upstream In-Reply-To: References: Message-ID: Hello, Any inputs on this? 
Is it supported to retry next upstream, if upstream server responds early rejecting request for any valid reason? Thanks, Kaustubh On Wed, Feb 15, 2017 at 10:47 PM, Kaustubh Deorukhkar < kaustubh.deo at gmail.com> wrote: > Hi, > > We are using nginx as reverse proxy and have a set of upstream servers > configured > with upstream next enabled for few error conditions to try next upstream > server. > For some reason this is not working. Can someone suggest if am missing > something? > > http { > ... > upstream myservice { > server localhost:8081; > server localhost:8082; > } > > server { > ... > location / { > proxy_pass http://myservice; > proxy_next_upstream error timeout invalid_header http_502 http_503 > http_504; > } > } > } > > So what i want is if any upstream server gives the above errors, it should > try > the next upstream instance, but it does not and just reports error to > clients. > > Note that, in my case one of the upstream server responds early for some > PUT request with 503 before entire request is read by upstream. I > understand that nginx closes the current upstream connection where it > received early response, but i expect it to try the next upstream server > as configured for the same request before it responds with error to client. > > Am I missing some nginx trick here? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 17 19:52:53 2017 From: nginx-forum at forum.nginx.org (agforte) Date: Fri, 17 Feb 2017 14:52:53 -0500 Subject: SSL Passthrough Message-ID: <4c7f38b0e9e08ec1ae0de0415779f0df.NginxMailingListEnglish@forum.nginx.org> Hi all, I have the following setup: PRIVATE SERVER <--> NGINX <--> PUBLIC SERVER I need the NGINX server to work as both reverse and forward proxy with SSL passthrough.
I have found online the following configuration for achieving this (note that for the forward proxy, I send packets always to the same destination, the public server, hardcoded in proxy_pass): stream { upstream backend { server :8080; } # Reverse proxy server { listen 9090; proxy_pass backend; } # Forward proxy server{ listen 9092; proxy_pass :8080; } } I have not tried the reverse proxy capability yet as the forward proxy is already giving me problems. In particular, when the private server tries to connect to the public server the TLS session fails. On the public server it says: http: TLS handshake error from :49848: tls: oversized record received with length 20037 while on the private server it says: Post https://:8080/subscribe: malformed HTTP response "\x15\x03\x01\x00\x02\x02\x16" This is what I see with tshark: PRIVATE_SRV ? NGINX TCP 74 48044?9092 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1209793579 TSecr=0 WS=128 NGINX ? PRIVATE_SRV TCP 74 9092?48044 [SYN, ACK] Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=1209793579 TSecr=1209793579 WS=128 PRIVATE_SRV ? NGINX TCP 66 48044?9092 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=1209793579 TSecr=1209793579 NGINX ? PUBLIC_SRV TCP 74 49848?8080 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1209793579 TSecr=0 WS=128 PRIVATE_SRV ? NGINX HTTP 161 CONNECT :8080 HTTP/1.1 NGINX ? PRIVATE_SRV TCP 66 9092?48044 [ACK] Seq=1 Ack=96 Win=29056 Len=0 TSval=1209793580 TSecr=1209793580 PUBLIC_SRV ? NGINX TCP 74 8080?49848 [SYN, ACK] Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=854036623 TSecr=1209793579 WS=128 NGINX ? PUBLIC_SRV TCP 66 49848?8080 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=1209793580 TSecr=854036623 NGINX ? PUBLIC_SRV HTTP 161 CONNECT :8080 HTTP/1.1 PUBLIC_SRV ? NGINX TCP 66 8080?49848 [ACK] Seq=1 Ack=96 Win=29056 Len=0 TSval=854036623 TSecr=1209793580 PUBLIC_SRV ? NGINX HTTP 73 Continuation NGINX ? 
PUBLIC_SRV TCP 66 49848?8080 [ACK] Seq=96 Ack=8 Win=29312 Len=0 TSval=1209793581 TSecr=854036623 NGINX ? PRIVATE_SRV HTTP 73 Continuation PRIVATE_SRV ? NGINX TCP 66 48044?9092 [ACK] Seq=96 Ack=8 Win=29312 Len=0 TSval=1209793581 TSecr=1209793581 PUBLIC_SRV ? NGINX TCP 66 8080?49848 [FIN, ACK] Seq=8 Ack=96 Win=29056 Len=0 TSval=854036624 TSecr=1209793580 NGINX ? PUBLIC_SRV TCP 66 49848?8080 [FIN, ACK] Seq=96 Ack=9 Win=29312 Len=0 TSval=1209793581 TSecr=854036624 NGINX ? PRIVATE_SRV TCP 66 9092?48044 [FIN, ACK] Seq=8 Ack=96 Win=29056 Len=0 TSval=1209793581 TSecr=1209793581 PRIVATE_SRV ? NGINX TCP 66 48044?9092 [FIN, ACK] Seq=96 Ack=9 Win=29312 Len=0 TSval=1209793581 TSecr=1209793581 NGINX ? PRIVATE_SRV TCP 66 9092?48044 [ACK] Seq=9 Ack=97 Win=29056 Len=0 TSval=1209793581 TSecr=1209793581 PUBLIC_SRV ? NGINX TCP 66 8080?49848 [ACK] Seq=9 Ack=97 Win=29056 Len=0 TSval=854036624 TSecr=1209793581 Do you have any suggestion on how to debug this? Is the fact that I am using HTTPS POST matter? Does it matter for NGINX that I am not using the default port 443 for SSL? Thanks a lot for all the help you may give me. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272487,272487#msg-272487 From iippolitov at nginx.com Fri Feb 17 21:11:06 2017 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Sat, 18 Feb 2017 00:11:06 +0300 Subject: input required on proxy_next_upstream In-Reply-To: References: Message-ID: <21604c61-3dd7-3bf1-8dc4-5174622074cc@nginx.com> Could it happen, that all servers reply with HTTP 503? I suggest you could extend your logs with upstream_status variable and if there is only one upstream reply status - try looking through error logs. On 15.02.2017 20:17, Kaustubh Deorukhkar wrote: > Hi, > > We are using nginx as reverse proxy and have a set of upstream servers > configured > with upstream next enabled for few error conditions to try next > upstream server. > For some reason this is not working. Can someone suggest if am missing > something? 
> > http { > ... > upstream myservice { > server localhost:8081; > server localhost:8082; > } > > server { > ... > location / { > proxy_pass http://myservice ; > proxy_next_upstream error timeout invalid_header http_502 > http_503 http_504; > } > } > } > > So what i want is if any upstream server gives the above errors, it > should try > the next upstream instance, but it does not and just reports error to > clients. > > Note that, in my case one of the upstream server responds early for > some PUT request with 503 before entire request is read by upstream. I > understand that nginx closes the current upstream connection where it > received early response, but i expect it to try the next upstream > server as configured for the same request before it responds with > error to client. > > Am I missing some nginx trick here? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Feb 17 21:51:20 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Feb 2017 21:51:20 +0000 Subject: SSL Passthrough In-Reply-To: <4c7f38b0e9e08ec1ae0de0415779f0df.NginxMailingListEnglish@forum.nginx.org> References: <4c7f38b0e9e08ec1ae0de0415779f0df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170217215120.GV2958@daoine.org> On Fri, Feb 17, 2017 at 02:52:53PM -0500, agforte wrote: Hi there, > I have the following setup: > > PRIVATE SERVER <--> NGINX <--> PUBLIC SERVER > > I need the NGINX server to work as both reverse and forward proxy with SSL > passthrough. That's not going to work without a lot of patching of the nginx source. nginx is not a forward proxy. If you can rephrase your requirements in terms of things that nginx can do, it might be possible to find a design that works. 
If you can rephrase your requirements in terms of requests and responses (I am not sure what exactly you are trying to do as-is), it may be possible to come up with a solution -- but if the solution is "use this non-nginx product in this particular way", you may be happier looking for confirmation elsewhere. > stream { Note: "stream" is (effectively) a tcp-forwarder. nginx does not know or care about what is inside the packets. "proxying", in the sense of http, does not apply. > while on the private server it says: > Post https://:8080/subscribe: malformed HTTP response > "\x15\x03\x01\x00\x02\x02\x16" Searching the web for \x15\x03\x01\x00\x02\x02\x16 suggests that that is what you get back when you make a http request to a https server. > PRIVATE_SRV ? NGINX HTTP 161 CONNECT :8080 HTTP/1.1 That "CONNECT" is what a http client does when it is configured to use a http-proxy to connect to a https service. > Do you have any suggestion on how to debug this? Is the fact that I am using > HTTPS POST matter? Does it matter for NGINX that I am not using the default > port 443 for SSL? Your nginx config means that nginx does not care about http or https; it just copies packets. You'll want to rethink your design, in order to find something that can do what you want. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Feb 17 22:23:24 2017 From: nginx-forum at forum.nginx.org (agforte) Date: Fri, 17 Feb 2017 17:23:24 -0500 Subject: SSL Passthrough In-Reply-To: <4c7f38b0e9e08ec1ae0de0415779f0df.NginxMailingListEnglish@forum.nginx.org> References: <4c7f38b0e9e08ec1ae0de0415779f0df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <788b652fc77b3a6c293bb4b1a3f55cfe.NginxMailingListEnglish@forum.nginx.org> I have found the problem. It was actually a code issue. I am using Golang.
The problem was that I was configuring the Proxy as: *httpsCl = http.Client{ Transport: &http.Transport{ Proxy: http.ProxyURL(proxyUrl), TLSClientConfig: tlsConfig }, } and in doing so the private server was expecting to talk to the Proxy which does not work if we are using SSL passthrough. Instead I had to configure the proxy as: *httpsCl = http.Client{ Transport: &http.Transport{ Dial: func(network, addr string) (net.Conn, error){ return net.Dial("tcp", proxyUrl.Host) }, TLSClientConfig: tlsConfig }, } Thanks all! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272487,272493#msg-272493 From nginx-forum at forum.nginx.org Fri Feb 17 22:37:47 2017 From: nginx-forum at forum.nginx.org (agforte) Date: Fri, 17 Feb 2017 17:37:47 -0500 Subject: SSL Passthrough In-Reply-To: <20170217215120.GV2958@daoine.org> References: <20170217215120.GV2958@daoine.org> Message-ID: <7b4be39f32387c1589a81752e1bfb630.NginxMailingListEnglish@forum.nginx.org> Francis, thank you for your reply. It seems I have fixed the issue. However, I just wanted to say that in stream mode, you can use NGINX as a forward proxy as well. As you mention, in stream mode, NGINX behaves as a "TCP router" in the sense that it just relays segments or packets using the correct network interface as per routing tables. In other words, NGINX is not aware if it is proxying packets inside or outside, it is just "routing" them according to its config file and the OS IP tables (i.e., selecting the correct network interface). Ciao, -A. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272487,272494#msg-272494 From nginx-forum at forum.nginx.org Sat Feb 18 04:29:22 2017 From: nginx-forum at forum.nginx.org (kaustubh) Date: Fri, 17 Feb 2017 23:29:22 -0500 Subject: input required on proxy_next_upstream In-Reply-To: <21604c61-3dd7-3bf1-8dc4-5174622074cc@nginx.com> References: <21604c61-3dd7-3bf1-8dc4-5174622074cc@nginx.com> Message-ID: <768615df220f7120685e45cb6796118c.NginxMailingListEnglish@forum.nginx.org> Thanks for reply. But I checked upstreams and second instance is working fine but does not receive retry request. I did small setup where one upstream instance responds early with 503 and other instance processes requests, and I observe that the request never comes to working upstream server on early 503 from first upstream. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272440,272495#msg-272495 From francis at daoine.org Sat Feb 18 10:45:06 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 18 Feb 2017 10:45:06 +0000 Subject: input required on proxy_next_upstream In-Reply-To: References: Message-ID: <20170218104506.GW2958@daoine.org> On Wed, Feb 15, 2017 at 10:47:56PM +0530, Kaustubh Deorukhkar wrote: Hi there, > For some reason this is not working. Can someone suggest if am missing > something? It seems to work fine for me as-is for GET and PUT. And not for POST. > proxy_next_upstream error timeout invalid_header http_502 http_503 > http_504; http://nginx.org/r/proxy_next_upstream See "non_idempotent" for POST. > Note that, in my case one of the upstream server responds early for some > PUT request with 503 before entire request is read by upstream. I PUT is idempotent, so it should Just Work. 
== http { upstream local { server 127.0.0.2:8882; server 127.0.0.2:8883; } server { listen 8880; location / { proxy_pass http://local; proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; } } server { listen 8882; access_log logs/503.log combined; return 503; } server { listen 8883; access_log logs/200.log combined; return 200 "Got $request\n"; } } == echo xx | curl -v -T - http://127.0.0.1:8880/one That gives me a http 200. The log files shows that the request went to 8882, where it got 503, and then went to 8883, where it got 200. Does that test case work for you? What part of your system is different from that test case? (It may be related to the actual upstream response; but once the difference is identified, that hints at where to do further investigation.) Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Feb 19 17:42:11 2017 From: nginx-forum at forum.nginx.org (weheartwebsites) Date: Sun, 19 Feb 2017 12:42:11 -0500 Subject: try_files does not have any effect on existing files Message-ID: usually you would have something like this in your config: location / { try_files $uri $uri/ /index.php } which works pretty good (1.11.10) - however it seems, that if you are requesting a physical file it will work anyway und the try_files gets ignored - so the following will work just as well: location / { try_files /foobar /index.php } This means, I can not for example overwrite an existing physical file location with a config like this: location / { try_files /$host$uri /index.php } Since if $uri exists under the root/alias it will be served directly without triggering that try_files directive. Am I doing something wrong - or is this expected behaviour? 
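[Editor's note: as documented, try_files checks each parameter, in order, as a file relative to root/alias; only the last parameter is special. A minimal illustration of those semantics, mirroring the configuration in the question:]

```nginx
location / {
    # /$host$uri and $uri are each checked for existence on disk;
    # the first one that exists is served. The final parameter is
    # an internal redirect (not checked for existence on disk).
    try_files /$host$uri $uri /index.php;
}
```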
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272503,272503#msg-272503 From francis at daoine.org Sun Feb 19 19:41:36 2017 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Feb 2017 19:41:36 +0000 Subject: try_files does not have any effect on existing files In-Reply-To: References: Message-ID: <20170219194136.GX2958@daoine.org> On Sun, Feb 19, 2017 at 12:42:11PM -0500, weheartwebsites wrote: Hi there, > This means, I can not for example overwrite an existing physical file > location with a config like this: > > location / { > try_files /$host$uri /index.php > } > > Since if $uri exists under the root/alias it will be served directly without > triggering that try_files directive. Can you give one specific example here, please? What request do you make; what response do you want; what response do you get instead? I imagine it will be something like "I request /abc; I want the contents of the file /usr/local/nginx/html/localhost/abc; but I get the contents of the file /usr/local/nginx/html/abc instead". But it will be good to be clear on what you expect to happen. Thanks, f -- Francis Daly francis at daoine.org From dewanggaba at xtremenitro.org Sun Feb 19 19:44:54 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Mon, 20 Feb 2017 02:44:54 +0700 Subject: Question about proxy_cache_key In-Reply-To: <20170216124104.GT2958@daoine.org> References: <1390bd2f-1fd3-6878-2901-0605a17c907c@xtremenitro.org> <20170216124104.GT2958@daoine.org> Message-ID: Hello! Thanks Francis, yes - it was the 'Vary' header that needed to be ignored via proxy_ignore_headers. Thanks for the hints.
On 02/16/2017 07:41 PM, Francis Daly wrote: > On Thu, Feb 16, 2017 at 01:08:35PM +0700, Dewangga Bachrul Alam > wrote: > > Hi there, > >> proxy_cache_key "$uri$is_args$args"; >> >> location / { proxy_ignore_headers Cache-Control Expires >> Set-Cookie; rewrite (?<capt>.*) $capt?$args&view=$mobile break; >> } > >> My question is, the URI >> https://subdomain.domain.tld/artikel/berita/a-pen-by-hair >> accessed from another browser (eg. Safari) and cache status was >> HIT, then I tried to invoke `PURGE` command from shell using >> cURL, it failed and returned cache_key not found > > I do not know the "fix"; but I suspect that the *reason* is due to > the Vary header that is sent in the response. > > If you know that "Vary" is not relevant in your case, then > possibly temporarily adding it to proxy_ignore_headers will let you > see whether that is related. > >> The following argument "&view=desktop?" should follow. Are >> there any hints to match the proxy_cache_key? So I can purge the >> cache anywhere and anytime? > > I guess that the "view=desktop" part comes from your rewrite > directive; it may take more documentation-digging to find the > actual cache key that is used internally in nginx, so that the > purge facility can work with it. > > (Or possibly the ngx_cache_purge module could be updated to work > transparently with the current nginx cache key behaviour -- I do > not know whether that is possible.)
> > Good luck with it, > > f From nginx-forum at forum.nginx.org Sun Feb 19 19:53:22 2017 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sun, 19 Feb 2017 14:53:22 -0500 Subject: Cache only static files in sub/subfolder but not sub Message-ID: <927cf2d772e9024f3a4f0d78ab051154.NginxMailingListEnglish@forum.nginx.org> Hi, Like so many others, I have a subfolder with media files, but I'd like to do simple file caching of only one of the subfolders = /media//thumbs/embedded, with the path inside the domain.tld, and serve them as media.domain.tld. So what I have done is add this to my config, and it works fine when I comment out the second location directive: files are stored in the cache and served as expected. Now the troubleshooting: as noted above, this only works when I comment out the second location, which is NOT to be cached at all. I have of course tried to switch which location comes first, even though I recall that the first matching rule is served first. Can anyone tell me why this isn't working as I'd like it to?
(for those who are curious, the location /thumbs/embedded holds about 4,200,000 files, while the rest is logically stored in the same folder. The other folders are divided into sub1/sub2/sub3/sub4/sub5; this one isn't) location /thumbs/embedded { add_header X-Served-By "IDENT1"; add_header Cache-Control public; add_header Pragma 'public'; add_header X-Cache-Status $upstream_cache_status; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header HOST $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; error_page 404 = /image404.php; proxy_pass http://127.0.0.1:9001; } ##Match what's not in above location directive location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { #access_log on; #log_not_found on; aio on; sendfile on; expires max; add_header Cache-Control public; add_header Pragma 'public'; add_header X-Served-By "IDENT2"; #add_header X-Frame-Options SAMEORIGIN; error_page 404 = /image404.php; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272506,272506#msg-272506 From nginx-forum at forum.nginx.org Sun Feb 19 20:32:18 2017 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sun, 19 Feb 2017 15:32:18 -0500 Subject: Client certificate fails with "unsupported certificate purpose" from iPad, works in desktop browsers In-Reply-To: <3f3b97eb181714bc93e642b272fc366d.NginxMailingListEnglish@forum.nginx.org> References: <3f3b97eb181714bc93e642b272fc366d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7b93131501a4fadb608653643e9ec621.NginxMailingListEnglish@forum.nginx.org> Without any configuration it's impossible to do much other than refer you to the nginx.org documentation http://nginx.org/en/docs/http/ngx_http_ssl_module.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272444,272507#msg-272507 From nginx-forum at forum.nginx.org Sun Feb 19 20:41:52 2017 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sun, 19 Feb 2017 15:41:52 -0500 Subject: Trouble with redirects from
backend In-Reply-To: References: Message-ID: You need proxy_redirect.... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272358,272508#msg-272508 From nginx-forum at forum.nginx.org Sun Feb 19 22:40:03 2017 From: nginx-forum at forum.nginx.org (nrahl) Date: Sun, 19 Feb 2017 17:40:03 -0500 Subject: Client certificate fails with "unsupported certificate purpose" from iPad, works in desktop browsers In-Reply-To: <7b93131501a4fadb608653643e9ec621.NginxMailingListEnglish@forum.nginx.org> References: <3f3b97eb181714bc93e642b272fc366d.NginxMailingListEnglish@forum.nginx.org> <7b93131501a4fadb608653643e9ec621.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8d6b2ba096011496e7e3ec7ff47c959e.NginxMailingListEnglish@forum.nginx.org> JoakimR Wrote: ------------------------------------------------------- > Whitout any configuretion it's imposible to do much rather than refer > you to nginx.org documentation > http://nginx.org/en/docs/http/ngx_http_ssl_module.html The configuration in the vhost file is: ssl on; ssl_certificate /etc/ssl/private/server.crt; ssl_certificate_key /etc/ssl/private/server.key; ssl_client_certificate /path/to/application/var/ssl/ca/ca.comb; ssl_verify_client optional; ssl_verify_depth 2; This is the output of debug level logging for a failed request (sensitive info replaced with generic words): 2017/02/19 14:17:37 [debug] 20917#20917: post event 000055D4E36D71A0 2017/02/19 14:17:37 [debug] 20917#20917: delete posted event 000055D4E36D71A0 2017/02/19 14:17:37 [debug] 20917#20917: accept on 0.0.0.0:443, ready: 1 2017/02/19 14:17:37 [debug] 20917#20917: posix_memalign: 000055D4E368EA20:512 @16 2017/02/19 14:17:37 [debug] 20917#20917: *94 accept: xx.xx.xx.xx:62856 fd:10 2017/02/19 14:17:37 [debug] 20917#20917: *94 event timer add: 10: 60000:1487531917179 2017/02/19 14:17:37 [debug] 20917#20917: *94 reusable connection: 1 2017/02/19 14:17:37 [debug] 20917#20917: *94 epoll add event: fd:10 op:1 ev:80002001 2017/02/19 14:17:37 [debug] 
20917#20917: accept() not ready (11: Resource temporarily unavailable) 2017/02/19 14:17:37 [debug] 20917#20917: *94 post event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 delete posted event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 http check ssl handshake 2017/02/19 14:17:37 [debug] 20917#20917: *94 http recv(): 1 2017/02/19 14:17:37 [debug] 20917#20917: *94 https ssl handshake: 0x16 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL ALPN supported by client: spdy/3.1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL ALPN supported by client: spdy/3 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL ALPN supported by client: http/1.1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL ALPN selected: http/1.1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL server name: "our.server.com" 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_do_handshake: -1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_get_error: 2 2017/02/19 14:17:37 [debug] 20917#20917: *94 reusable connection: 0 2017/02/19 14:17:37 [debug] 20917#20917: *94 post event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 delete posted event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL handshake handler: 0 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_do_handshake: -1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_get_error: 2 2017/02/19 14:17:37 [debug] 20917#20917: *94 post event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 delete posted event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL handshake handler: 0 2017/02/19 14:17:37 [debug] 20917#20917: *94 verify:0, error:26, depth:1, subject:"/CN=OUR-COMPANY Client CA/ST=State/C=US/O=OUR-COMPANY Client CA", issuer:"/C=US/ST=State/L=City/O=OUR-COMPANY, Inc/CN=OUR-COMPANY, Inc" 2017/02/19 14:17:37 [debug] 20917#20917: *94 verify:1, error:26, depth:2, subject:"/C=US/ST=State/L=City/O=OUR-COMPANY, Inc/CN=OUR-COMPANY, Inc", 
issuer:"/C=US/ST=State/L=City/O=OUR-COMPANY, Inc/CN=OUR-COMPANY, Inc" 2017/02/19 14:17:37 [debug] 20917#20917: *94 verify:1, error:26, depth:1, subject:"/CN=OUR-COMPANY Client CA/ST=State/C=US/O=OUR-COMPANY Client CA", issuer:"/C=US/ST=State/L=City/O=OUR-COMPANY, Inc/CN=OUR-COMPANY, Inc" 2017/02/19 14:17:37 [debug] 20917#20917: *94 verify:1, error:26, depth:0, subject:"/C=US/ST=State/L=City/O=OUR-COMPANY Client Certificate/CN=OUR-COMPANY-MUS-58A9EEA5", issuer:"/CN=OUR-COMPANY Client CA/ST=State/C=US/O=OUR-COMPANY Client CA" 2017/02/19 14:17:37 [debug] 20917#20917: *94 ssl new session: F89EA5F8:32:1533 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_do_handshake: 1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD" 2017/02/19 14:17:37 [debug] 20917#20917: *94 reusable connection: 1 2017/02/19 14:17:37 [debug] 20917#20917: *94 http wait request handler 2017/02/19 14:17:37 [debug] 20917#20917: *94 malloc: 000055D4E3702330:1024 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_read: -1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_get_error: 2 2017/02/19 14:17:37 [debug] 20917#20917: *94 free: 000055D4E3702330 2017/02/19 14:17:37 [debug] 20917#20917: *94 post event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 delete posted event 000055D4E36D7380 2017/02/19 14:17:37 [debug] 20917#20917: *94 http wait request handler 2017/02/19 14:17:37 [debug] 20917#20917: *94 malloc: 000055D4E3702330:1024 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_read: 313 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_read: -1 2017/02/19 14:17:37 [debug] 20917#20917: *94 SSL_get_error: 2 2017/02/19 14:17:37 [debug] 20917#20917: *94 reusable connection: 0 2017/02/19 14:17:37 [debug] 20917#20917: *94 posix_memalign: 000055D4E369B8A0:4096 @16 2017/02/19 14:17:37 [debug] 20917#20917: *94 http process request line 2017/02/19 14:17:37 [debug] 20917#20917: *94 http request line: "GET / 
HTTP/1.1" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http uri: "/" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http args: "" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http exten: "" 2017/02/19 14:17:37 [debug] 20917#20917: *94 posix_memalign: 000055D4E3703F20:4096 @16 2017/02/19 14:17:37 [debug] 20917#20917: *94 http process request header line 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header: "Host: our.server.com" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header: "Accept-Language: en-us" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header: "Connection: keep-alive" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header: "Accept-Encoding: gzip, deflate" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header: "User-Agent: Mozilla/5.0 (iPad; CPU OS 9_3_5 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Mobile/13G36" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http header done 2017/02/19 14:17:37 [info] 20917#20917: *94 client SSL certificate verify error: (26:unsupported certificate purpose) while reading client request headers, client: xx.xx.xx.xx, server: our.server.com, request: "GET / HTTP/1.1", host: "our.server.com" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http finalize request: 495, "/?" a:1, c:1 2017/02/19 14:17:37 [debug] 20917#20917: *94 event timer del: 10: 1487531917179 2017/02/19 14:17:37 [debug] 20917#20917: *94 http special response: 495, "/?" 2017/02/19 14:17:37 [debug] 20917#20917: *94 http set discard body 2017/02/19 14:17:37 [debug] 20917#20917: *94 xslt filter header 2017/02/19 14:17:37 [debug] 20917#20917: *94 HTTP/1.1 400 Bad Request Server: nginx Date: Sun, 19 Feb 2017 19:17:37 GMT Content-Type: text/html Content-Length: 224 Connection: close Beyond that, what information would be helpful? Is there any way to get more info about the SSL problem beyond log level debug? 
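The verify error 26 in the log above, "unsupported certificate purpose", is OpenSSL saying the presented client certificate is not considered valid for TLS client authentication; one plausible explanation for the iPad/desktop difference is that iOS enforces the key-usage/EKU extensions more strictly than desktop browsers do. A way to inspect what purposes a certificate is valid for is openssl's -purpose switch; a self-contained sketch, where a throwaway self-signed certificate stands in for the real client certificate:

```shell
# Generate a throwaway self-signed certificate so the sketch runs
# anywhere; substitute the real client certificate in practice.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
        -keyout demo.key -out demo.crt 2>/dev/null

# List the purposes OpenSSL considers the certificate valid for.
# For client authentication, the "SSL client" line should say "Yes";
# an Extended Key Usage extension that omits "TLS Web Client
# Authentication" is a common cause of verify error 26.
openssl x509 -noout -purpose -in demo.crt
openssl x509 -noout -text -in demo.crt | grep -A1 "Extended Key Usage" \
    || echo "(no Extended Key Usage extension present)"
```

Running the same two x509 commands against the actual certificate the iPad presents (and against the issuing CA certificate) should show whether the purpose restriction is the culprit.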
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272444,272509#msg-272509 From nginx-forum at forum.nginx.org Sun Feb 19 22:58:16 2017 From: nginx-forum at forum.nginx.org (weheartwebsites) Date: Sun, 19 Feb 2017 17:58:16 -0500 Subject: try_files does not have any effect on existing files In-Reply-To: <20170219194136.GX2958@daoine.org> References: <20170219194136.GX2958@daoine.org> Message-ID: <927f96500cba47a5b7b62d9116c06752.NginxMailingListEnglish@forum.nginx.org> Ouch, sorry - all good. I had a special location = /robots.txt block which was causing the try_files directive not to be called. Removing that (or testing with a different filename) worked :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272503,272510#msg-272510 From peter_booth at me.com Mon Feb 20 05:36:03 2017 From: peter_booth at me.com (Peter Booth) Date: Mon, 20 Feb 2017 00:36:03 -0500 Subject: nginx cache mounted on tmpf getting fulled In-Reply-To: References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org> <0ba04a5c-ee07-125f-948c-16569116ac1e@nginx.com> <2F398FCE-924D-4D8D-AC78-C00383A7F1B2@me.com> Message-ID: <23746E08-7F6A-42B0-B0C7-8A3A4D079B43@me.com> Just to be pedantic: it's counterintuitive but, in general, tmpfs is not faster than local storage for the use case of caching static content for web servers. Sounds weird? Here's why: tmpfs is a file system view of all of the system's virtual memory - that is, both physical memory and swap space. If you use local storage for your cache store, then every time a file is requested for the first time it gets read from local storage and written to the page cache. Subsequent requests are served from the page cache. The OS manages memory allocation so as to maximize the size of the page cache (subject to the config settings swappiness, min_free_kbytes, and the watermark settings for kswapd). The fact that you are using a cache suggests that you have expensive back-end queries that your cache sits in front of.
So the cost of reading a cached file from disk << cost of recreating dynamic content. With real-world web systems the probability distribution of requests for different resources is never uniform; there are clusters of popular resources. Any popular requests that get served from your disk cache will in fact be served from the page cache (memory), so there is no reason to interfere with the OS. In general it's likely that you have less physical memory than disk space, so using tmpfs for your nginx cache could mean that you're serving cached files from swap space. Allowing the OS to do its job (with the page cache) means that you will already get tmpfs-like latencies for the popular resources - which is what you want to maximize performance across your entire site. This is another example of why, with issues of web performance, it's usually better to test theories than to rely on logical reasoning. Peter Booth > On 16 Jan 2017, at 10:34 PM, steve wrote: > > Hi, > > > On 01/17/2017 02:37 PM, Peter Booth wrote: >> I'm curious, why are you using tmpfs for your cache store? With fast local storage being so cheap, why don't you devote a few TB to your cache? >> >> When I look at the techempower benchmarks I see that openresty (an nginx build that comes with lots of lua value add) can serve 440,000 JSON responses per sec with 3ms latency. That's on five year old E7-4850 Westmere hardware at 2.0GHz, with 10G NICs. The min latency to get a packet from nginx through the kernel stack and onto the wire is about 4uS for a NIC of that vintage, dropping to 2uS with openonload (Solarflare's kernel bypass). >> >> As Ippolitov suggests, your cache already has room for 1.6M items - that's a huge amount. What kind of hit rate are you seeing for your cache?
>> >> One way to manage cache size is to only cache popular items- if you set proxy_cache_min_uses =4 then only objects that are requested four times will be cached, which will increase your hit rates and reduce the space needed for the cache. >> >> Peter >> >> Sent from my iPhone >> >>> On Jan 16, 2017, at 4:41 AM, Igor A. Ippolitov wrote: >>> >>> Hello, >>> >>> Your cache have 200m space for keys. This is around 1.6M items, isn't it? >>> How much files do you have in your cache? May we have a look at >>> `df -i ` and `du -s /cache/123` output, please? >>> >>>> On 06.01.2017 08:40, omkar_jadhav_20 wrote: >>>> Hi, >>>> >>>> I am using nginx as webserver with nginx version: nginx/1.10.2. For faster >>>> access we have mounted cache of nginx of different application on RAM.But >>>> even after giving enough buffer of size , now and then cache is getting >>>> filled , below are few details of files for your reference : >>>> maximum size given in nginx conf file is 500G , while mouting we have given >>>> 600G of space i.e. 100G of buffer.But still it is getting filled 100%. 
>>>> >>>> fstab entries : >>>> tmpfs /cache/123 tmpfs defaults,size=600G >>>> 0 0 >>>> tmpfs /cache/456 tmpfs defaults,size=60G >>>> 0 0 >>>> tmpfs /cache/789 tmpfs defaults,size=110G >>>> 0 0 >>>> >>>> cache getting filled , df output: >>>> >>>> tmpfs tmpfs 60G 17G 44G 28% >>>> /cache/456 >>>> tmpfs tmpfs 110G 323M 110G 1% >>>> /cache/789 >>>> tmpfs tmpfs 600G 600G 0 100% >>>> /cache/123 >>>> >>>> nginx conf details : >>>> >>>> proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g >>>> inactive=3d; >>>> >>>> server{ >>>> listen 80; >>>> server_name dvr.catchup.com; >>>> location ~.*.m3u8 { >>>> access_log /var/log/nginx/access_123.log access; >>>> proxy_cache off; >>>> root /xyz/123; >>>> if (!-e $request_filename) { >>>> #origin url will be used if content is not available on DS >>>> proxy_pass http://10.10.10.1X; >>>> } >>>> } >>>> location / { >>>> access_log /var/log/nginx/access_123.log access; >>>> proxy_cache_valid 3d; >>>> proxy_cache a123; >>>> root /xyz/123; >>>> if (!-e $request_filename) { >>>> #origin url will be used if content is not available on server >>>> proxy_pass http://10.10.10.1X; >>>> } >>>> proxy_cache_key $proxy_host$uri; >>>> } >>>> } >>>> >>>> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271842#msg-271842 >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > So that's a total of 1TB of memory allocated to caches. Do you have that much spare on your server? Linux will allocate *up to* the specified amount *as long as it's spare*. 
It would be worth looking at your server to ensure that 1TB memory is spare before blaming nginx. > > You can further improve performance and safety by mounting them > > nodev,noexec,nosuid,noatime,async,size=xxxM,mode=0755,uid=xx,gid=xx > > To answer this poster... memory is even faster! > > Steve > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From 0815 at lenhardt.in Mon Feb 20 09:45:14 2017 From: 0815 at lenhardt.in (Marco Lenhardt) Date: Mon, 20 Feb 2017 10:45:14 +0100 Subject: massive deleted open files in proxy cache Message-ID: Hi! We are using Ubuntu 16.04 with nginx version 1.10.0-0ubuntu0.16.04.4. nginx.conf: user nginx; worker_processes auto; worker_rlimit_nofile 20480; # ulimit open files per worker process events { # Performance worker_connections 2048; # mind the open file limits multi_accept on; use epoll; } http { open_file_cache max=10000 inactive=1d; open_file_cache_valid 1d; open_file_cache_min_uses 1; open_file_cache_errors off; proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=html_cache:30m max_size=8192m inactive=4h use_temp_path=off; proxy_cache_path /var/cache/nginx/wordpress_cache levels=1:2 keys_zone=wordpress_cache:1m max_size=256m inactive=24h use_temp_path=off; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; } # df -h |grep cache tmpfs 300M 74M 227M 25% /var/cache/nginx/wordpress_cache tmpfs 9.0G 4.1G 5.0G 45% /var/cache/nginx/proxy_cache # df -i |grep cache tmpfs 20632978 5457 20627521 1% /var/cache/nginx/wordpress_cache tmpfs 20632978 74613 20558365 1% /var/cache/nginx/proxy_cache # grep cache /etc/fstab tmpfs /var/cache/nginx/proxy_cache/ tmpfs
rw,uid=109,gid=117,size=9G,mode=0755 0 0 tmpfs /var/cache/nginx/wordpress_cache/ tmpfs rw,uid=109,gid=117,size=300M,mode=0755 0 0 # free -m total used free shared buff/cache available Mem: 161195 112884 1321 4626 46988 42519 Swap: 3903 211 3692 Problem: ======== We see massive numbers of open file handles from the nginx user located inside proxy_cache_path with status deleted: # lsof -E -M -T > lsof.`date +"%Y%d%m-%H%M%S"`.out nginx 3613 nginx 48r REG 0,42 148664 29697 /var/cache/nginx/proxy_cache/temp/5/23/04ca8002edd2daa3c538ada5b202d6eb (deleted) nginx 3613 nginx 50r REG 0,42 161618 19416 /var/cache/nginx/proxy_cache/temp/1/40/d8f0a3563d18af4fbf43242e19283b15 (deleted) # grep nginx lsof.20172002-085328.out |wc -l 69003 # grep nginx lsof.20172002-085328.out |grep deleted |wc -l 36312 # grep nginx lsof.20172002-085328.out |grep deleted |grep "/var/cache/nginx/proxy_cache/temp" |wc -l 32004 Most of the 36k deleted files are located inside the temp cache folder. My question is: why do we have so many deleted files inside the cache, and why is nginx not freeing these files? Is there a problem with the proxy_cache_path option use_temp_path=off ? I am worried that the cache file system will fill up with deleted files and that we will reach the open-files limits. Or do we have an nginx misconfiguration somewhere? Additionally, we were often visited by the oom-killer (despite 20G of free memory). If I restart nginx before we reach 80k open nginx files, the oom-killer does not visit us! Does anybody have similar findings regarding deleted open files?
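For what it's worth, the "deleted but still open" state itself is ordinary Unix behaviour rather than anything nginx-specific: an unlinked file keeps consuming space until the last file descriptor on it is closed, so a process that holds descriptors on unlinked temp files pins that space. A self-contained demonstration of the mechanism on Linux, using the shell's own pid in place of an nginx worker:

```shell
# Open a file, unlink it, and show that the kernel still lists it
# as "(deleted)" while the descriptor stays open.
tmp=$(mktemp)
exec 3<> "$tmp"                   # hold fd 3 on the file
rm "$tmp"                         # unlink it; the space is NOT released yet
ls -l /proc/$$/fd | grep deleted  # the fd is still there, marked (deleted)
exec 3>&-                         # closing the fd finally frees the space
```

The same accounting applies to a tmpfs-backed cache: df counts those blocks as used until the descriptors are closed, which would also explain why a restart of nginx makes the space reappear.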
br, Marco From nginx-forum at forum.nginx.org Mon Feb 20 12:52:10 2017 From: nginx-forum at forum.nginx.org (JoakimR) Date: Mon, 20 Feb 2017 07:52:10 -0500 Subject: Cache only static files in sub/subfolder but not sub In-Reply-To: <927cf2d772e9024f3a4f0d78ab051154.NginxMailingListEnglish@forum.nginx.org> References: <927cf2d772e9024f3a4f0d78ab051154.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0b62618b7ae573b4145de642cb598a8c.NginxMailingListEnglish@forum.nginx.org> Found the answer to my question here http://nginx.org/en/docs/http/ngx_http_core_module.html#location "If a location is defined by a prefix string that ends with the slash character, and requests are processed by one of proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, or memcached_pass, then the special processing is performed. In response to a request with URI equal to this string, but without the trailing slash, a permanent redirect with the code 301 will be returned to the requested URI with the slash appended. If this is not desired, an exact match of the URI and location could be defined like this" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272506,272523#msg-272523 From francis at daoine.org Mon Feb 20 19:06:14 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 20 Feb 2017 19:06:14 +0000 Subject: Cache only static files in sub/subfolder but not sub In-Reply-To: <927cf2d772e9024f3a4f0d78ab051154.NginxMailingListEnglish@forum.nginx.org> References: <927cf2d772e9024f3a4f0d78ab051154.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170220190614.GY2958@daoine.org> On Sun, Feb 19, 2017 at 02:53:22PM -0500, JoakimR wrote: Hi there, > Now the trouble shooting: as noticed above, this only works when I out > comments the second location, which is NOT to be cached at all. I have of > course tried to switch between which location comes first. Even chose I > recall it as first rule matching is served first. 
> > Any one who can tell me why this isn't working as i like it to? > location /thumbs/embedded { > ##Match what's not in above location directive > location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { It looks to me like you want "match the prefix location, and then do not check the regex locations". There's a squiggle for that. http://nginx.org/r/location f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Feb 21 09:45:09 2017 From: nginx-forum at forum.nginx.org (kaustubh) Date: Tue, 21 Feb 2017 04:45:09 -0500 Subject: input required on proxy_next_upstream In-Reply-To: <20170218104506.GW2958@daoine.org> References: <20170218104506.GW2958@daoine.org> Message-ID: Thanks Francis! I was able to verify that the above works. But the problem is that when we have proxy buffering off and try to send a large file, say 1 GB, it fails with 502 without trying the next instance. proxy_request_buffering off; proxy_http_version 1.1; The docs say so: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering "When buffering is disabled, the request body is sent to the proxied server immediately as it is received. In this case, the request cannot be passed to the next server if nginx already started sending the request body." The problem is we want the best of both worlds: proxy buffering off so it works like streaming, plus some way to try the next instance on a 503 from the first instance. Any suggestions? Maybe there is a way for nginx to expect a 100-continue from the upstream before starting to send data to it, so that if the expect-100 fails, it can try the next instance without losing the data already sent?
Here is the error with nginx.conf and command where it fails nginx.conf http { client_max_body_size 5G; upstream local { server 127.0.0.2:8008; server 127.0.0.2:8009; } server { listen 80; location / { proxy_pass http://local; proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; proxy_http_version 1.1; proxy_request_buffering off; } } server { listen 8008; access_log /var/log/nginx/503.log combined; return 503; } server { listen 8009; access_log /var/log/nginx/200.log combined; return 200 "Got $request on port 8009\n"; } } $ cat 1gb.img | curl -H "Expect:" -v -T - http://127.0.0.1/one * About to connect() to 127.0.0.1 port 80 (#0) * Trying 127.0.0.1... * Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) > PUT /one HTTP/1.1 > User-Agent: curl/7.29.0 > Host: 127.0.0.1 > Accept: */* > Transfer-Encoding: chunked > < HTTP/1.1 502 Bad Gateway < Server: nginx/1.9.15 < Date: Tue, 21 Feb 2017 09:23:09 GMT < Content-Type: text/html < Content-Length: 173 < Connection: keep-alive * HTTP error before end of send, stop sending < 502 Bad Gateway

<html><body><center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.9.15</center></body></html>
* Closing connection 0 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272440,272543#msg-272543 From alan.orth at gmail.com Tue Feb 21 15:01:12 2017 From: alan.orth at gmail.com (Alan Orth) Date: Tue, 21 Feb 2017 15:01:12 +0000 Subject: Proxying and static files In-Reply-To: <20170216213821.GU2958@daoine.org> References: <6acabfb909c12b183c72a2207702bd60.NginxMailingListEnglish@forum.nginx.org> <20170216213821.GU2958@daoine.org> Message-ID: Hi, The try_files directive is great for this[0]. But like Francis pointed out, you need to have a pattern that can be matched for static files, and then nginx can look for the files on disk (relative to the root) before proxying the request back to the dynamic application. Regards, [0] http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files On Thu, Feb 16, 2017 at 11:38 PM Francis Daly wrote: > On Thu, Feb 16, 2017 at 08:26:35AM -0500, epoch1 wrote: > > Hi there, > > > I've tried something like the following but can't get it work for each > app: > > location ~* /(images|css|js|files)/ { > > root /home/username/app1/public/; > > } > > > > If I request app1/js/script.js for example it goes to > > /home/username/app1/public/app1/js/script.js rather than > > /home/username/app1/public/js/script.js > > If the web request /one/two/thr.ee does not correspond to a file > /docroot/one/two/thr.ee, then you probably want to use "alias" > (http://nginx.org/r/alias) instead of "root". 
> > It's not clear to me what your exact pattern for recognising "static" > vs "dynamic" files is; perhaps something like one of > > location /app1/images/ { > alias /home/username/app1/public/images/; > } > > (with similar things for the other directories); or > > location ~* ^/app1/(images|css|js|files)/(.*) { > alias /home/username/app1/public/$1/$2; > } > > or > > location ~* ^/app1/(.*.(js|css))$ { > alias /home/username/app1/public/$1; > } > > In each case, the "location"s (particularly the regex one) could be > nested within the matching "main" location that you already have. > > God luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alan Orth alan.orth at gmail.com https://englishbulgaria.net https://alaninkenya.org https://mjanja.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Feb 21 20:18:37 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 21 Feb 2017 20:18:37 +0000 Subject: input required on proxy_next_upstream In-Reply-To: References: <20170218104506.GW2958@daoine.org> Message-ID: <20170221201837.GZ2958@daoine.org> On Tue, Feb 21, 2017 at 04:45:09AM -0500, kaustubh wrote: Hi there, > Thanks Francis! I was able to test that above works. Good stuff -- at least we can see that things are fundamentally correct. > But problem is when we have proxy buffering off and when we try to send > large file say 1gb, it fails with 502 without trying next instance. Ah, yes, as you point out: that's the documented current expected behaviour. > Any suggestions? May be is there a way where nginx can expect-100 from > upstream before starting to send data to it, so if expect-100 fails, it can > try next instance without losing data already sent otherwise? I'm pretty sure that that is not doable just in nginx.conf with the current stock code. 
You'll want to have some new code to either do that (issue the request headers, wait long enough for it to have a chance to fail before streaming the request body (while presumably buffering the start of the request body somewhere)); or to always buffer the request body but simultaneously send it upstream, so that on failure the body can be re-sent from the start. Maybe that can be done with one of the embedded-language modules, but it sounds to me like it would best be done in the proxy module itself. The usual path there is to decide how important that behaviour is to you, and then either write the code to do it, or encourage someone else to write the code for you. If the design and code are clean, and the owner is happy to share, I suspect that it could be a welcome addition to the stock code. Good luck with it, f -- Francis Daly francis at daoine.org From litichevskij.vova at gmail.com Wed Feb 22 01:55:48 2017 From: litichevskij.vova at gmail.com (Litichevskij Vova) Date: Wed, 22 Feb 2017 03:55:48 +0200 Subject: how can I use external URI with the auth_request module Message-ID: <6E560B6A-249B-4317-BC4A-C4485F6EE46B@gmail.com> Hello! I'm trying to use nginx's ngx_http_auth_request_module in such way: server { location / { auth_request http://external.url; proxy_pass http://protected.resource; } } It doesn't work, the error is: 2017/02/21 02:45:36 [error] 17917#0: *17 open() "/usr/local/htmlhttp://external.url" failed (2: No such file or directory), ...
Or in this way with named location: server { location / { auth_request @auth; proxy_pass http://protected.resource; } location @auth { proxy_pass http://external.url; } } In this case the error is almost the same: 2017/02/22 03:13:25 [error] 25476#0: *34 open() "/usr/local/html at auth" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET / HTTP/1.1", subrequest: "@auth", host: "127.0.0.1" I know there is a way like this: server { location / { auth_request /_auth_check; proxy_pass http://protected.resource; } location /_auth_check { internal; proxy_pass http://external.url; } } But in this case the http://protected.resource can not use the /_auth_check path. Is there a way to use an external URI as a parameter for the auth_request directive without overlapping the http://protected.resource routing? If not, why? It looks a little bit strange to look for the auth_request's URI through static files (/usr/local/html). -------------- next part -------------- An HTML attachment was scrubbed... URL: From captwiggum at gmail.com Wed Feb 22 02:40:23 2017 From: captwiggum at gmail.com (Captain Wiggum) Date: Tue, 21 Feb 2017 19:40:23 -0700 Subject: Image Maps Message-ID: Hi All, I have searched the archives in hopes of answering this myself. But no luck. My html was recently migrated from apache to nginx. It worked fine on apache. The html uses image maps, such as: html v1 style:
or newer css style: Neither seem to work with my nginx-1.10.1 on Fedora (really Amazon Linux). (I believe this is an entirely different subject than the nginx maps module.) The image map looks something like this: rect /cgi-bin/picview.cgi?london01s.jpg 0,0 99,99 rect /cgi-bin/picview.cgi?london02s.jpg 100,0 199,99 rect /cgi-bin/picview.cgi?london03s.jpg 200,0 299,99 rect /cgi-bin/picview.cgi?london04s.jpg 300,0 399,99 rect /cgi-bin/picview.cgi?london05s.jpg 400,0 499,99 Any tips appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From filip.francis at exitas.be Wed Feb 22 05:29:56 2017 From: filip.francis at exitas.be (Filip Francis) Date: Wed, 22 Feb 2017 06:29:56 +0100 Subject: nginx as reverse proxy to several backends Message-ID: Hi all, I am trying to set up a reverse proxy with nginx so that, based on the server_name, it goes to the correct backend. I have been looking into examples but have had no luck getting it actually working. So this is what I want to do: when a user types xxxx.yyy.be as normal http, it redirects to https and then forwards it to backend number 1; but when a user types zzzz.yyy.be, also as normal http, it redirects to https and forwards it to the correct backend (so here it would be backend number 2). So in sites-enabled I put several files that are being loaded, but nothing is working, so I would like to see an example that works, as I cannot find a complete example to work with. So please advise.
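Before the full configuration, the behaviour being described can be reduced to a skeleton like the following (a hedged sketch: certificate, header, and cache directives are omitted, and the backend addresses are assumptions taken from elsewhere in this message):

```nginx
# One http -> https redirect plus one https proxy per site name.
server {
    listen 80;
    server_name xxxx.yyy.be;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name xxxx.yyy.be;
    location / { proxy_pass http://192.168.1.51; }      # backend number 1
}

server {
    listen 80;
    server_name zzzz.yyy.be;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl;
    server_name zzzz.yyy.be;
    location / { proxy_pass http://192.168.1.95:8065; } # backend number 2
}
```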
So here is my nginx.conf file:

user www;
worker_processes auto;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    more_set_headers "Server: Your_New_Server_Name";
    server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /opt/local/etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    #ssl on;
    ssl_protocols TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128;
    ssl_prefer_server_ciphers on;
    ssl_certificate /opt/local/etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /opt/local/etc/nginx/certs/key.pem;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_stapling on;
    ssl_stapling_verify on;

    ## Enable HSTS
    add_header Strict-Transport-Security max-age=63072000;
    # Do not allow this site to be displayed in iframes
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN" always;
    # Do not permit Content-Type sniffing.
    add_header X-Content-Type-Options nosniff;

    ##
    # Logging Settings
    ##
    rewrite_log on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    #gzip_vary on;
    #gzip_proxied any;
    #gzip_comp_level 6;
    #gzip_buffers 16 8k;
    #gzip_http_version 1.1;
    #gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /opt/local/etc/nginx/sites-enabled/*;
}

And then in sites-enabled there are the following files: owncloud and mattermost. Here is the content.

owncloud:

upstream owncloud {
    server 192.168.1.51:80;
}

server {
    listen 80;
    server_name xxxx.yyy.be;
    return 301 https://$server_name$request_uri;
    #rewrite ^/.*$ https://$host$request_uri? permanent;
    more_set_headers "Server: None of Your Business";
    server_tokens off;
}

server {
    listen 443 ssl http2;
    server_name xxxx.yyy.be;
    more_set_headers "Server: None of Your Business";
    server_tokens off;

    location / {
        client_max_body_size 0;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache owncloud_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_pass http://192.168.1.51;
    }

    # Lets Encrypt Override
    location '/.well-known/acme-challenge' {
        root /var/www/proxy;
        auth_basic off;
    }
}

and mattermost:

server {
    listen 80;
    server_name zzzz.yyy.be;
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443;
    server_name zzzz.yyy.be;

    location / {
        client_max_body_size 0;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache mattermost_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_pass http://192.168.1.95:8065;
    }
}

This is working (more or less), but if I start moving the SSL bit into the owncloud or mattermost files it simply stops working; each time I type http://zzzz.yyy.be I get "400 Bad Request: The plain HTTP request was sent to HTTPS port". Thanks, Filip Francis From francis at daoine.org Wed Feb 22 12:27:45 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Feb 2017 12:27:45 +0000 Subject: Image Maps In-Reply-To: References:
Message-ID: <20170222122745.GA2958@daoine.org> On Tue, Feb 21, 2017 at 07:40:23PM -0700, Captain Wiggum wrote: Hi there, > The html uses image maps, such as: > html v1 style:
> or newer css style: > > Neither seem to work with my nginx-1.10.1 on Fedora (really Amazon Linux). Can you see which part is failing? As I understand it, client-side image maps are unrelated to the web server. The client decides what url to request depending on where in the image is clicked. > The image map looks something like this: > > rect /cgi-bin/picview.cgi?london01s.jpg 0,0 99,99 > rect /cgi-bin/picview.cgi?london02s.jpg 100,0 199,99 > rect /cgi-bin/picview.cgi?london03s.jpg 200,0 299,99 > rect /cgi-bin/picview.cgi?london04s.jpg 300,0 399,99 > rect /cgi-bin/picview.cgi?london05s.jpg 400,0 499,99 > So - if you see in your access log that the browser is not requesting /cgi-bin/picview.cgi?london01s.jpg, you probably have a html or client problem to fix. If you see that the browser is requesting /cgi-bin/picview.cgi?london01s.jpg, then you will want to see how you have configured your nginx to deal with that url. Note that nginx does not "do" cgi. So whatever your plan is, it can't be to have nginx "do" cgi itself. Perhaps it should proxy_pass to a http server; perhaps it should fastcgi_pass to a fastcgi server that knows how to handle the request. Good luck with it, f -- Francis Daly francis at daoine.org From mail at kilian-ries.de Wed Feb 22 14:55:06 2017 From: mail at kilian-ries.de (Kilian Ries) Date: Wed, 22 Feb 2017 14:55:06 +0000 Subject: Nginx multiple upstream with different protocols Message-ID: Hi, i'm trying to setup two Nginx upstreams (one with HTTP and one with HTTPS) and the proxy_pass module should decide which of the upstreams is serving "valid" content. The config should look like this: upstream proxy_backend { server xxx.xx.188.53; server xxx.xx.188.53:443; } server { listen 443 ssl; ... location / { proxy_pass http://proxy_backend; #proxy_pass https://proxy_backend; } } The Problem is that i don't know if the upstream is serving the content via http or https. 
Is there any possibility to tell nginx to change the protocol from the proxy_pass directive? Because if i set proxy_pass to https, i get an error (502 / 400) if the upstream website is running on http and vice versa. So i'm searching for a way to let Nginx decide if he should proxy_pass via http or https. Can anybody help me with that configuration? Thanks Greets Kilian -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.cox at kroger.com Wed Feb 22 14:58:26 2017 From: eric.cox at kroger.com (Cox, Eric S) Date: Wed, 22 Feb 2017 14:58:26 +0000 Subject: Nginx multiple upstream with different protocols In-Reply-To: References: Message-ID: <74A4D440E25E6843BC8E324E67BB3E39456D115E@N060XBOXP38.kroger.com> If you are SSL on the frontend (server directive) why would you want to proxy between ssl/non-ssl on the upstreams? Can they not be the same? I don't get what you are trying to solve? From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Kilian Ries Sent: Wednesday, February 22, 2017 9:55 AM To: nginx at nginx.org Subject: Nginx multiple upstream with different protocols Hi, i'm trying to setup two Nginx upstreams (one with HTTP and one with HTTPS) and the proxy_pass module should decide which of the upstreams is serving "valid" content. The config should look like this: upstream proxy_backend { server xxx.xx.188.53; server xxx.xx.188.53:443; } server { listen 443 ssl; ... location / { proxy_pass http://proxy_backend; #proxy_pass https://proxy_backend; } } The Problem is that i don't know if the upstream is serving the content via http or https. Is there any possibility to tell nginx to change the protocol from the proxy_pass directive? Because if i set proxy_pass to https, i get an error (502 / 400) if the upstream website is running on http and vice versa. So i'm searching for a way to let Nginx decide if he should proxy_pass via http or https. Can anybody help me with that configuration? 
Thanks Greets Kilian ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Feb 22 16:01:02 2017 From: nginx-forum at forum.nginx.org (JoakimR) Date: Wed, 22 Feb 2017 11:01:02 -0500 Subject: Cache only static files in sub/subfolder but not sub In-Reply-To: <20170220190614.GY2958@daoine.org> References: <20170220190614.GY2958@daoine.org> Message-ID: Hi Francis You're right I have overseen the ^~ for the location. So for others, the solution to "force" the location directives is.. location ^~ /thumbs/embedded { add_header X-Served-By "IDENT1"; add_header Cache-Control public; add_header Pragma 'public'; add_header X-Cache-Status $upstream_cache_status; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header HOST $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; error_page 404 = /image404.php; proxy_pass http://127.0.0.1:9001; } ##Match what's not in above location directive location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { #access_log on; #log_not_found on; aio on; sendfile on; expires max; add_header Cache-Control public; add_header Pragma 'public'; add_header X-Served-By "IDENT2"; #add_header X-Frame-Options SAMEORIGIN; error_page 404 = /image404.php; } The quick view is that location /thumbs/embedded { change into location ^~ /thumbs/embedded { Pretty cool actually :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272506,272563#msg-272563 From mail at kilian-ries.de Wed Feb 22 16:08:21 
2017 From: mail at kilian-ries.de (Kilian Ries) Date: Wed, 22 Feb 2017 16:08:21 +0000 Subject: RE: Nginx multiple upstream with different protocols In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E39456D115E@N060XBOXP38.kroger.com> References: , <74A4D440E25E6843BC8E324E67BB3E39456D115E@N060XBOXP38.kroger.com> Message-ID: <68681223e80c4e58b4d16b1b4b8981ab@kilian-ries.de> No they cannot be the same (sadly) because I don't know how the upstream is serving the content. Think of a situation where I am not in control of the upstream backends and they may change from http to https over time. ________________________________ From: nginx on behalf of Cox, Eric S Sent: Wednesday, 22 February 2017 15:58:26 To: nginx at nginx.org Subject: RE: Nginx multiple upstream with different protocols If you are SSL on the frontend (server directive) why would you want to proxy between ssl/non-ssl on the upstreams? Can they not be the same? I don't get what you are trying to solve? From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Kilian Ries Sent: Wednesday, February 22, 2017 9:55 AM To: nginx at nginx.org Subject: Nginx multiple upstream with different protocols Hi, i'm trying to setup two Nginx upstreams (one with HTTP and one with HTTPS) and the proxy_pass module should decide which of the upstreams is serving "valid" content. The config should look like this: upstream proxy_backend { server xxx.xx.188.53; server xxx.xx.188.53:443; } server { listen 443 ssl; ... location / { proxy_pass http://proxy_backend; #proxy_pass https://proxy_backend; } } The Problem is that i don't know if the upstream is serving the content via http or https. Is there any possibility to tell nginx to change the protocol from the proxy_pass directive? Because if i set proxy_pass to https, i get an error (502 / 400) if the upstream website is running on http and vice versa. So i'm searching for a way to let Nginx decide if he should proxy_pass via http or https.
Can anybody help me with that configuration? Thanks Greets Kilian ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Feb 22 16:55:09 2017 From: nginx-forum at forum.nginx.org (kaustubh) Date: Wed, 22 Feb 2017 11:55:09 -0500 Subject: input required on proxy_next_upstream In-Reply-To: <20170221201837.GZ2958@daoine.org> References: <20170221201837.GZ2958@daoine.org> Message-ID: Thanks again for the detailed reply. Yeah, it would have been good to have this feature in the nginx upstream module. It's an important feature; I will try out your suggestions and will share the results. Thanks a lot for sharing your inputs! Cheers, Kaustubh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272440,272568#msg-272568 From reallfqq-nginx at yahoo.fr Wed Feb 22 17:52:00 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 22 Feb 2017 18:52:00 +0100 Subject: Nginx multiple upstream with different protocols In-Reply-To: <68681223e80c4e58b4d16b1b4b8981ab@kilian-ries.de> References: <74A4D440E25E6843BC8E324E67BB3E39456D115E@N060XBOXP38.kroger.com> <68681223e80c4e58b4d16b1b4b8981ab@kilian-ries.de> Message-ID: I suggest you proxy traffic to an upstream group, and then use failure/timeout parameters there with proper tuning to retry requests on the second upstream in case the first in the list fails. It will have an overhead if the 1st entry of the upstream group is invalid on initial connection, but hopefully the 'down' status will help limit that overhead on average. --- *B.
R.* On Wed, Feb 22, 2017 at 5:08 PM, Kilian Ries wrote: > No they cannot be the same (sadly) because i dont't know how the upstream > is serving the content. Think of a situation where i am not in control of > the upstream backends and they may change from http to https over time. > ------------------------------ > *Von:* nginx im Auftrag von Cox, Eric S < > eric.cox at kroger.com> > *Gesendet:* Mittwoch, 22. Februar 2017 15:58:26 > *An:* nginx at nginx.org > *Betreff:* RE: Nginx multiple upstream with different protocols > > > If you are SSL on the frontend (server directive) why would you want to > proxy between ssl/non-ssl on the upstreams? Can they not be the same? I > don?t get what you are trying to solve? > > > > *From:* nginx [mailto:nginx-bounces at nginx.org] *On Behalf Of *Kilian Ries > *Sent:* Wednesday, February 22, 2017 9:55 AM > *To:* nginx at nginx.org > *Subject:* Nginx multiple upstream with different protocols > > > > Hi, > > > > i'm trying to setup two Nginx upstreams (one with HTTP and one with HTTPS) > and the proxy_pass module should decide which of the upstreams is serving > "valid" content. > > > > The config should look like this: > > > > upstream proxy_backend { > > server xxx.xx.188.53; > > server xxx.xx.188.53:443; > > } > > > > server { > > listen 443 ssl; > > ... > > location / { > > proxy_pass http://proxy_backend > > ; > > #proxy_pass https://proxy_backend > > ; > > } > > } > > > > > > The Problem is that i don't know if the upstream is serving the content > via http or https. Is there any possibility to tell nginx to change the > protocol from the proxy_pass directive? Because if i set proxy_pass to > https, i get an error (502 / 400) if the upstream website is running on > http and vice versa. > > > > So i'm searching for a way to let Nginx decide if he should proxy_pass via > http or https. Can anybody help me with that configuration? 
> > > Thanks > > Greets > Kilian > > ------------------------------ > > This e-mail message, including any attachments, is for the sole use of the > intended recipient(s) and may contain information that is confidential and > protected by law from unauthorized disclosure. Any unauthorized review, > use, disclosure or distribution is prohibited. If you are not the intended > recipient, please contact the sender by reply e-mail and destroy all copies > of the original message. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Feb 22 18:36:49 2017 From: nginx-forum at forum.nginx.org (sum-it) Date: Wed, 22 Feb 2017 13:36:49 -0500 Subject: another "bind() to 0.0.0.0:80 failed (98: Address already in use)" issue Message-ID: <3c9d83557beb16559f631aecae8510e0.NginxMailingListEnglish@forum.nginx.org> Hello All, I have another "bind() to 0.0.0.0:80 failed (98: Address already in use)" issue. I am working on a minimal system including nginx only. System startup time and readiness time are important points. While testing I figured out that sometimes the system boots up within 500ms and sometimes it takes around 3 seconds. On further probing I found out that nginx takes a varying time to start up, which costs me an extra 2.5 seconds. So I tested and figured out that the error in those cases is "bind() to 0.0.0.0:80 failed (98: Address already in use)". A few of my observations here are: 1. No other process is using that port; there is no other web server or application running on the system. 2. The case is not only limited to nginx restart, where nginx might not be shut down correctly and might itself be using that port. Nginx even fails during system start, in cases where it has caused longer boot time. 3.
I use customized kernels, but that kernel shouldn't be the culprit either, because sometimes it works on that kernel as well. Another point here is that failure on the customized kernel is more frequent than on the stock kernel. The failure ratio on the stock kernel is around 30% and on the customized one around 70%, but the system both works and fails on each. 4. Start/Stop scripts always exit with success status "0". 5. I tested nginx in a restart loop, with a 1 second sleep before and after start and stop. Failure is random. 6. Worse, nginx is actually running even though the error said bind failed. I can connect to it, access the default web page, and it is listed in netstat as listening as well. Output of netstat -ntl is at: http://pastebin.com/26b6KNAZ Error Log is at: http://pastebin.com/w0y8aa9p This is one of the customized systems, a derivative of debian, I am working on. System wise, everything is consistent. I use the same kernel and the same system image with the same parameters, and it sometimes works and otherwise fails. nginx -t gives nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful So configuration shouldn't be a problem. The configuration file is the default and is available at: http://pastebin.com/iRFfW3UE Process listing after nginx startup: http://pastebin.com/0vB19rLq Process listing after nginx stop: http://pastebin.com/iQafxjiF Any pointer to debug the issue would be very helpful.
Regards, sum-it Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272570,272570#msg-272570 From francis at daoine.org Wed Feb 22 20:14:40 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Feb 2017 20:14:40 +0000 Subject: how can I use external URI with the auth_request module In-Reply-To: <6E560B6A-249B-4317-BC4A-C4485F6EE46B@gmail.com> References: <6E560B6A-249B-4317-BC4A-C4485F6EE46B@gmail.com> Message-ID: <20170222201440.GB2958@daoine.org> On Wed, Feb 22, 2017 at 03:55:48AM +0200, Litichevskij Vova wrote: Hi there, > Or in this way with named location: > > server { > > location / { > auth_request @auth; > proxy_pass http://protected.resource; > } > > location @auth { > proxy_pass http://external.url; > } > } > In this case the error is almost the same: > > 2017/02/22 03:13:25 [error] 25476#0: *34 open() "/usr/local/html at auth" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET / HTTP/1.1", subrequest: "@auth", host: "127.0.0.1" I would (naively?) have expected the named location to Just Work. But clearly it doesn't. > I know there is a way like this: > > server { > > location / { > auth_request /_auth_check; > proxy_pass http://protected.resource; > } > > location /_auth_check { > internal; > proxy_pass http://external.url; > } > } > But in this case the http://protected.resource can not use the /_auth_check path. You can instead use "location = /_auth_check" if you are happy to reserve exactly one url for internal use. (You'd probably want to add a uri part to the hostname in the proxy_pass directive.) Or you could play games, and use a location which looks like it is a named location, but actually is not, and is just a location that is unlikely to be accessed directly, such as "location = @auth". > Is there a way to use an external URI as a parameter for the auth_request directive without overlapping the http://protected.resource routing? auth_request takes an argument which is a local uri.
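A sketch of the "reserve exactly one url" variant described above (the /check uri on the auth server is a hypothetical example of the "uri part" mentioned; external.url and protected.resource come from the original question):

```nginx
server {
    location / {
        auth_request /_auth_check;
        proxy_pass http://protected.resource;
    }

    # "location =" matches only this exact uri, so nothing else
    # on http://protected.resource is shadowed by it.
    location = /_auth_check {
        internal;
        proxy_pass http://external.url/check;
    }
}
```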
> It looks a little bit strange to look for the auth_request's URI through static files (/usr/local/html). It does whatever you configured nginx to do with that uri. (Apart from the "@named" piece, which I'm not sure about.) Cheers, f -- Francis Daly francis at daoine.org From vukomir at ianculov.ro Wed Feb 22 20:24:27 2017 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Wed, 22 Feb 2017 21:24:27 +0100 (CET) Subject: nginx X-backend upstream hostname In-Reply-To: <1925303615.60.1487793823365.JavaMail.vukomir@DESKTOP-OI0VS1P> Message-ID: <1768822972.100.1487795067154.JavaMail.vukomir@DESKTOP-OI0VS1P> Hi, I have the following configuration: nginx 1.10.3 installed on ubuntu 16.04, one upstream upstream backend { server app01.local.net:81; server app02.local.net:81; server app03.local.net:81; } and one vhost that does proxy_pass http://backend; I also have an old nginx setup that was done some time ago; on every request it adds a header X-Backend:app01, depending on which backend the request is sent to. I tried to reproduce the setup with no success; I checked all files but did not find any configuration that is setting X-Backend on the nginx side. Can someone help me with this issue? Thanks. Vuko -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Feb 22 20:49:13 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Feb 2017 20:49:13 +0000 Subject: nginx as reverse proxy to several backends In-Reply-To: References: Message-ID: <20170222204913.GC2958@daoine.org> On Wed, Feb 22, 2017 at 06:29:56AM +0100, Filip Francis wrote: Hi there, I haven't tested any of this, but... > I am trying to set-up a reverse proxy with nginx so that based on > the server_name it goes to the correct backend. That should be straightforward; and it looks to me like you almost have it working.
> when user type xxxx.yyy.be as normal http it redirects to https and > then forwards it to the backend nummer 1 It may be worth being explicit there: http://xxxx.yyy.be is redirected to https://xxxx.yyy.be https://xxxx.yyy.be is proxy_pass'ed to backend1 > but when user type zzzz.yyy.be also as normal http it redrects it to > https and forwards it to the correct backend (so here it would be > backend nummer 2) And the same for zzzz.yyy.be, but eventually to backend2. > so in sites-enabled i put several files that is being loaded but > nothing is working Does "nothing is working" include "curl -v http://xxxx.yyy.be" getting something other than a 301 redirect to https://xxxx.yyy.be ? If so - what does it get instead? > include /opt/local/etc/nginx/sites-enabled/*; > here is the content: > > owncloud: > server { > listen 443 ssl http2; ssl is on, but there is no "default_server" set explicitly here. > and mattermost: > server { > listen 443; ssl is not on, and there is no "default_server" set explicitly here. Alphabetically, I think that this server{} will be the default for any connections to port 443, and I'm not sure what will happen when "ssl" is not set here but is set elsewhere on a port 443 listener. > This is working (more or less) but if i start moving the ssl bit > into the owncloud or mattermost its simply is not working any more I don't understand what you mean by this, I'm afraid. The config you show does work, but a config you do not show does not work? Or something else? > getting each time that i type http://zzzz.yyy.be i get 400 bad > request The plain HTTP request was sent to HTTPS port If the problem is "missing ssl on the mattermost listen directive", then I would expect a https request to be going to a http port. http request to a https port looks like it would need your upstream (192.168.1.95:8065) to be listening for https. 
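To illustrate the two points above with a hedged sketch (names and the default_server choice are illustrative, not a tested fix): every listener on port 443 should carry "ssl", and exactly one of them should be the explicit default, so connections for unknown names are handled predictably:

```nginx
server {
    listen 443 ssl default_server;   # explicit default for port 443
    server_name xxxx.yyy.be;
    ...
}

server {
    listen 443 ssl;                  # "ssl" added to this listener
    server_name zzzz.yyy.be;
    ...
}
```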
Just for clarity, could you show the responses you get for a "curl -v" request to the first http:// address, and then to the (presumably) returned 301 Location? That may make it more obvious what is happening, compared to what should be happening; and may make the solution clear to somebody. Cheers, f -- Francis Daly francis at daoine.org From jeff.dyke at gmail.com Wed Feb 22 21:47:06 2017 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Wed, 22 Feb 2017 16:47:06 -0500 Subject: another "bind() to 0.0.0.0:80 failed (98: Address already in use)" issue In-Reply-To: <3c9d83557beb16559f631aecae8510e0.NginxMailingListEnglish@forum.nginx.org> References: <3c9d83557beb16559f631aecae8510e0.NginxMailingListEnglish@forum.nginx.org> Message-ID: depending on the version you may want to look for /etc/nginx/conf.d/default.conf, when i have been building servers (i use salt for configuration management) i have in my state file that includes file.absent: - /etc/nginx/conf.d/default.conf which will ensure the file does not exist immediately after install, and when i startup my virtual hosts all is well. Based on your statements that may/not be your issue, but something that bit me and regardless...something is listening on port 80, when you get that what does `sudo netstat -nap | grep LISTEN` show HTH Jeff On Wed, Feb 22, 2017 at 1:36 PM, sum-it wrote: > Hello All, > > I have another "bind() to 0.0.0.0:80 failed (98: Address already in use)" > issue. > > I am working on a minimal system including nginx only. System startup time, > and readiness time are important points. Whilte testing I figured out > sometime system boots up within 500ms and sometimes it takes around 3 > second. On further probing I find out nginx is taking different time to > start up which costs me extra 2.5 Seconds. So I tested and figured out that > error in those cases is "bind() to 0.0.0.0:80 failed (98: Address already > in > use). > > Few of my observation here are, > 1. 
No other process is using that port, there is no other web server or > application running on the system. > 2. The case is not only limited to nginx restart, where nginx might not be > shutdown correctly and itself might be using that port. Nginx even fails > during system start, in cases where it has caused longer boot time. > 3. I use customized kernels, but that kernel shouldn't be culprit either > because sometimes it works on that kernel as well. Another point here is > failure in customized kernel is more often as compared to stock kernel. The > ratio of failure in stock kernel is around 30% and in customized is 70% but > system works on both and fails on both. > 4. Start/Stop scripts always exit with success status "0". > 5. I tested nginx in a restart loop, with a 1 second sleep before and after > start and stop. Failure is random. > 6. Worse, nginx is actually running even though error said bind failed. I > can connect to it, access default web page, and it is listed in netstat as > listening as well. > > Output of netstat -ntl is at: http://pastebin.com/26b6KNAZ > > Error Log is at: http://pastebin.com/w0y8aa9p > > This is one of the customized system, a derivative of debian, I am working > on. System wise, everything is consistent. I use same kernel, same system > image with same parameters and it works sometime and fails otherwise. > > nginx -t gives > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > So configuration shouldn't be a problem. > > configuration file is default and available at: > http://pastebin.com/iRFfW3UE > > Process listing after nginx startup: http://pastebin.com/0vB19rLq > Process listing after nginx stop: http://pastebin.com/iQafxjiF > > Any pointer to debug the issue would be very helpful. > > Regards, > sum-it > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,272570,272570#msg-272570 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Feb 22 21:58:50 2017 From: nginx-forum at forum.nginx.org (xxdesmus) Date: Wed, 22 Feb 2017 16:58:50 -0500 Subject: realip and remote_port In-Reply-To: References: Message-ID: I can confirm I see the same thing. If you use set_real_ip_from, real_ip_header, etc then it wipes out the $server_port value. Is there any work around to this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,261668,272576#msg-272576 From mail at kilian-ries.de Thu Feb 23 10:38:42 2017 From: mail at kilian-ries.de (Kilian Ries) Date: Thu, 23 Feb 2017 10:38:42 +0000 Subject: AW: Nginx multiple upstream with different protocols In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E39456D115E@N060XBOXP38.kroger.com> <68681223e80c4e58b4d16b1b4b8981ab@kilian-ries.de>, Message-ID: I think i already tried what you suggested, but that doesn't work because i have to set a specific protocol in the proxy_pass command (http or https). If i have a mixed upstream group like upstream proxy_backend { server xxx.xx.188.53; server xxx.xx.188.53:443; } i always get protocol errors like 502 or 400 because i cannot switch between http and https in the proxy_pass command ________________________________ Von: nginx im Auftrag von B.R. via nginx Gesendet: Mittwoch, 22. Februar 2017 18:52:00 An: nginx ML Cc: B.R. Betreff: Re: Nginx multiple upstream with different protocols I suggest you proxy traffic to an upstream group, and then use failure/timeout parameters there with proper tuning to retry requests on the second upstream in case the first in the list fails. 
"It will have an overhead if the 1st entry of the upstream group is invalid on initial connection, but hopefully the 'down' status will help limit that overhead on average." --- B. R. On Wed, Feb 22, 2017 at 5:08 PM, Kilian Ries > wrote: No they cannot be the same (sadly) because i don't know how the upstream is serving the content. Think of a situation where i am not in control of the upstream backends and they may change from http to https over time. ________________________________ Von: nginx > im Auftrag von Cox, Eric S > Gesendet: Mittwoch, 22. Februar 2017 15:58:26 An: nginx at nginx.org Betreff: RE: Nginx multiple upstream with different protocols If you are SSL on the frontend (server directive) why would you want to proxy between ssl/non-ssl on the upstreams? Can they not be the same? I don't get what you are trying to solve? From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Kilian Ries Sent: Wednesday, February 22, 2017 9:55 AM To: nginx at nginx.org Subject: Nginx multiple upstream with different protocols Hi, i'm trying to set up two Nginx upstreams (one with HTTP and one with HTTPS) and the proxy_pass module should decide which of the upstreams is serving "valid" content. The config should look like this: upstream proxy_backend { server xxx.xx.188.53; server xxx.xx.188.53:443; } server { listen 443 ssl; ... location / { proxy_pass http://proxy_backend; #proxy_pass https://proxy_backend; } } The problem is that i don't know if the upstream is serving the content via http or https. Is there any possibility to tell nginx to change the protocol from the proxy_pass directive? Because if i set proxy_pass to https, i get an error (502 / 400) if the upstream website is running on http and vice versa. So i'm searching for a way to let Nginx decide if it should proxy_pass via http or https. Can anybody help me with that configuration?
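[Archive note] One way to handle an upstream whose protocol is unknown, without deciding per request, is to try HTTPS first and fall back to plain HTTP on a connection-level failure. This is only a rough, untested sketch; the upstream names and the 192.0.2.10 address are placeholders, not taken from this thread:

```nginx
upstream backend_https { server 192.0.2.10:443; }
upstream backend_http  { server 192.0.2.10:80; }

server {
    listen 443 ssl;

    location / {
        proxy_pass https://backend_https;
        # If the backend only speaks plain HTTP, the TLS handshake fails
        # and nginx generates a 502 itself; route that to the fallback.
        error_page 502 504 = @plain;
    }

    location @plain {
        proxy_pass http://backend_http;
    }
}
```

The cost is one failed TLS attempt whenever the backend is HTTP-only, so this suits backends that change protocol rarely rather than per request.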
Thanks Greets Kilian ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Thu Feb 23 12:12:17 2017 From: lagged at gmail.com (Andrei) Date: Thu, 23 Feb 2017 06:12:17 -0600 Subject: Nginx multiple upstream with different protocols In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E39456D115E@N060XBOXP38.kroger.com> <68681223e80c4e58b4d16b1b4b8981ab@kilian-ries.de> Message-ID: I suggest splitting your upstreams by protocol, then proxying requests depending on HTTPS headers to the appropriate group. There's an example of how to detect HTTPS at http://serverfault.com/questions/527780/nginx-detect-https-connection-using-a-header On Thu, Feb 23, 2017 at 4:38 AM, Kilian Ries wrote: > I think i already tried what you suggested, but that doesn't work because > i have to set a specific protocol in the proxy_pass command (http or > https). If i have a mixed upstream group like > > upstream proxy_backend { > server xxx.xx.188.53; > server xxx.xx.188.53:443; > > } > > i always get protocol errors like 502 or 400 because i cannot switch > between http and https in the proxy_pass command > ------------------------------ > *Von:* nginx im Auftrag von B.R. via nginx < > nginx at nginx.org> > *Gesendet:* Mittwoch, 22. Februar 2017 18:52:00 > *An:* nginx ML > *Cc:* B.R.
> *Betreff:* Re: Nginx multiple upstream with different protocols > > I suggest you proxy traffic to an upstream group, and then use > failure/timeout parameters there with proper tuning to retry requests on > the second upstream in case the first in the list fails. > "It will have an overhead if the 1st entry of the upstream group is > invalid on initial connection, but hopefully the 'down' status will help > limit that overhead on average." > --- > *B. R.* > > On Wed, Feb 22, 2017 at 5:08 PM, Kilian Ries wrote: > >> No they cannot be the same (sadly) because i don't know how the upstream >> is serving the content. Think of a situation where i am not in control of >> the upstream backends and they may change from http to https over time. >> ------------------------------ >> *Von:* nginx im Auftrag von Cox, Eric S < >> eric.cox at kroger.com> >> *Gesendet:* Mittwoch, 22. Februar 2017 15:58:26 >> *An:* nginx at nginx.org >> *Betreff:* RE: Nginx multiple upstream with different protocols >> >> >> If you are SSL on the frontend (server directive) why would you want to >> proxy between ssl/non-ssl on the upstreams? Can they not be the same? I >> don't get what you are trying to solve? >> >> >> >> *From:* nginx [mailto:nginx-bounces at nginx.org] *On Behalf Of *Kilian Ries >> *Sent:* Wednesday, February 22, 2017 9:55 AM >> *To:* nginx at nginx.org >> *Subject:* Nginx multiple upstream with different protocols >> >> >> >> Hi, >> >> >> >> i'm trying to set up two Nginx upstreams (one with HTTP and one with >> HTTPS) and the proxy_pass module should decide which of the upstreams is >> serving "valid" content. >> >> >> >> The config should look like this: >> >> >> >> upstream proxy_backend { >> >> server xxx.xx.188.53; >> >> server xxx.xx.188.53:443; >> >> } >> >> >> >> server { >> >> listen 443 ssl; >> >> ...
>> >> location / { >> >> proxy_pass http://proxy_backend; >> >> #proxy_pass https://proxy_backend; >> >> } >> >> } >> >> >> >> The problem is that i don't know if the upstream is serving the content >> via http or https. Is there any possibility to tell nginx to change the >> protocol from the proxy_pass directive? Because if i set proxy_pass to >> https, i get an error (502 / 400) if the upstream website is running on >> http and vice versa. >> >> So i'm searching for a way to let Nginx decide if it should proxy_pass >> via http or https. Can anybody help me with that configuration? >> >> >> >> Thanks >> >> Greets >> >> Kilian >> >> ------------------------------ >> >> This e-mail message, including any attachments, is for the sole use of >> the intended recipient(s) and may contain information that is confidential >> and protected by law from unauthorized disclosure. Any unauthorized review, >> use, disclosure or distribution is prohibited. If you are not the intended >> recipient, please contact the sender by reply e-mail and destroy all copies >> of the original message. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Feb 23 12:38:06 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 23 Feb 2017 07:38:06 -0500 Subject: AW: Nginx multiple upstream with different protocols In-Reply-To: References: Message-ID: <9490ecd7de2e85176e81623db3e40f82.NginxMailingListEnglish@forum.nginx.org> For a server {} that you want to make universally compatible with both http port 80 and https port 443 ssl requests. This was my solution for my own sites.
#inside http block upstream proxy_web_rack { #port 80 unsecured requests server 172.16.0.1:80; } upstream proxy_web_rack_ssl { #port 443 secured requests server 172.16.0.1:443; } #end http block #Server block server { listen 80; listen 443 ssl; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; if ($scheme = "http") { proxy_pass $scheme://proxy_web_rack; #if scheme was http send to port 80 } if ($scheme = "https") { proxy_pass $scheme://proxy_web_rack_ssl; #if scheme was https send to port 443 } } #end location } #end server block That way, if the received request from the client is an https-secured one, proxy_pass will make sure that it goes over port 443 and remains secured. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272561,272589#msg-272589 From nginx-forum at forum.nginx.org Thu Feb 23 13:55:41 2017 From: nginx-forum at forum.nginx.org (yarix) Date: Thu, 23 Feb 2017 08:55:41 -0500 Subject: limit_req based on custom code In-Reply-To: <53fa6ab48c83816088ebf3e7b47631e6.NginxMailingListEnglish@forum.nginx.org> References: <53fa6ab48c83816088ebf3e7b47631e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: anyone, please? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272281,272590#msg-272590 From nginx-forum at forum.nginx.org Thu Feb 23 15:58:14 2017 From: nginx-forum at forum.nginx.org (sum-it) Date: Thu, 23 Feb 2017 10:58:14 -0500 Subject: another "bind() to 0.0.0.0:80 failed (98: Address already in use)" issue In-Reply-To: References: Message-ID: <78d1ab60c6a96d1950e87b7c000c369d.NginxMailingListEnglish@forum.nginx.org> Hello Jeff, Thank you for your help. I tested your suggestion. This isn't the case. I also tested in a normal Debian system and this wasn't the case there either. I believe the following section is the culprit. start-stop-daemon tests whether the daemon is already running; however, it doesn't relinquish the socket before actually starting the daemon.
This is good in the case of a normal service startup, where returning "1" is the desired behaviour. In my case this costs me an extra 2.5 seconds, which is huge for my otherwise normal 0.5 second system boot. start-stop-daemon --start --quiet --pidfile $PID --exec $DAEMON --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PID --exec $DAEMON -- \ $DAEMON_OPTS 2>/dev/null \ || return 2 For now I commented out the first test, since I am sure that no service is running in my case: this is system boot, and nginx will be the only service running on this system at port 80. For now everything works fine and the results seem good. I will post again if this broke anything. Best, Sum-it Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272570,272591#msg-272591 From litichevskij.vova at gmail.com Thu Feb 23 17:03:08 2017 From: litichevskij.vova at gmail.com (Litichevskij Vova) Date: Thu, 23 Feb 2017 19:03:08 +0200 Subject: how can I use external URI with the auth_request module In-Reply-To: <20170222201440.GB2958@daoine.org> References: <6E560B6A-249B-4317-BC4A-C4485F6EE46B@gmail.com> <20170222201440.GB2958@daoine.org> Message-ID: <8267ABAA-B05F-4DCF-B1A6-636034308AAC@gmail.com> Thank you, Francis, for your answer. The question is more metaphysical, actually: why does the module that declares it "implements client authorization based on the result of a subrequest" not allow a direct external URI as the address of this subrequest, and why does the subrequest's address occupy the URI space of the protected resource? Anyway, thank you!
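[Archive note] The pattern discussed in the quoted message below, reserving exactly one URI via an exact-match internal location that proxies the auth subrequest to the external URL, looks roughly like this. It is a sketch; the upstream names are the thread's placeholders, and the X-Original-URI header is illustrative:

```nginx
location / {
    auth_request /_auth_check;
    proxy_pass http://protected.resource;
}

# "=" reserves exactly this one URI; "internal" blocks direct client access.
location = /_auth_check {
    internal;
    proxy_pass http://external.url/auth;
    # The auth subrequest should not forward the request body.
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    # Let the auth backend know which URI is being authorized.
    proxy_set_header X-Original-URI $request_uri;
}
```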
> On Feb 22, 2017, at 10:14 PM, Francis Daly wrote: > > On Wed, Feb 22, 2017 at 03:55:48AM +0200, Litichevskij Vova wrote: > > Hi there, > >> Or in this way with named location: >> >> server { >> >> location / { >> auth_request @auth; >> proxy_pass http://protected.resource; >> } >> >> location @auth { >> proxy_pass http://external.url; >> } >> } >> In this case the error is almost the same: >> >> 2017/02/22 03:13:25 [error] 25476#0: *34 open() "/usr/local/html at auth" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET / HTTP/1.1", subrequest: "@auth", host: "127.0.0.1" > > I would (naively?) have expected the named location to Just Work. But > clearly it doesn't. > >> I know there is a way like this: >> >> server { >> >> location / { >> auth_request /_auth_check; >> proxy_pass http://protected.resource; >> } >> >> location /_auth_check { >> internal; >> proxy_pass http://external.url; >> } >> } >> But in this case the http://protected.resource can not use the /_auth_check path. > > You can instead use "location = /_auth_check" if you are happy to reserve > exactly one url for internal use. (You'ld probably want to add a uri > part to the hostname in the proxy_pass directive.) > > Or you could play games, and use a location which looks like it is a > named location, but actually is not, and is just a location that is > unlikely to be accessed directly, such as "location = @auth". > >> Is there a way to use an external URI as a parameter for the auth_request directive without overlapping the http://protected.resource routing? > > auth_request takes an argument which is a local uri. > >> It looks a little bit strange to look for the auth_request's URI through static files (/usr/local/html). > > It does whatever you configured nginx to do with that uri. (Apart from > the "@named" piece, which I'm not sure about.) 
> > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Feb 24 06:24:11 2017 From: nginx-forum at forum.nginx.org (ashoba) Date: Fri, 24 Feb 2017 01:24:11 -0500 Subject: Websocket, set Sec-Websocket-Protocol Message-ID: Hello, guys! How can I set Sec-WebSocket-Protocol in the config? I've tried proxy_set_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp"; and add_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp". In the response, I'm not getting the 'Sec-WebSocket-Protocol' header. What can be wrong? P.S. Nginx 1.6.2 Best regards, Arthur. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272596,272596#msg-272596 From nginx-forum at forum.nginx.org Fri Feb 24 10:07:22 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 24 Feb 2017 05:07:22 -0500 Subject: Nginx proxy_pass HTTPS/SSL/HTTP2 keepalive Message-ID: <05a223bc7b7c47f8306beabc0ec01e50.NginxMailingListEnglish@forum.nginx.org> So the Nginx documentation says this: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive For HTTP, the proxy_http_version directive should be set to "1.1" and the "Connection" header field should be cleared: upstream http_backend { server 127.0.0.1:8080; keepalive 16; } server { ... location /http/ { proxy_pass http://http_backend; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } But does it also apply for HTTPS/HTTP2 because proxy_http_version gets set to 1.1 ?
Example : upstream https_backend { server 127.0.0.1:443; keepalive 16; } server { listen 443 ssl http2; location /https/ { proxy_pass https://https_backend; proxy_http_version 1.1; proxy_set_header Connection ""; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272601,272601#msg-272601 From nginx-forum at forum.nginx.org Fri Feb 24 10:50:07 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 24 Feb 2017 05:50:07 -0500 Subject: Nginx proxy_pass HTTPS/SSL/HTTP2 keepalive In-Reply-To: <05a223bc7b7c47f8306beabc0ec01e50.NginxMailingListEnglish@forum.nginx.org> References: <05a223bc7b7c47f8306beabc0ec01e50.NginxMailingListEnglish@forum.nginx.org> Message-ID: <55a1608fccfeb809b8d77d260cdb817b.NginxMailingListEnglish@forum.nginx.org> I think from my understanding the proxy_http_version 1.1; is ignored over https since everything works and that directive does what it states proxy_HTTP_version for unsecured requests only it will be version 1.1 so i don't think it has any negative impact on HTTP2/SSL. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272601,272602#msg-272602 From nginx-forum at forum.nginx.org Fri Feb 24 11:17:51 2017 From: nginx-forum at forum.nginx.org (p0lak) Date: Fri, 24 Feb 2017 06:17:51 -0500 Subject: One NGINX server to 2 backend servers Message-ID: Hello everybody, I have installed a dedicated server for NGINX based on Ubuntu Core I want to define this behavior NGINX Server (Public IP) >>> listening on port 80 >>> redirect to a LAN Server on port 80 (http://mylocalserver/virtualhost1) NGINX Server (Public IP) >>> listening on port 81 >>> redirect to a LAN Server on port 80 (http://mylocalserver/virtualhost2) My local virtualhost on my backend server is reachable (ie: http://mylocalserver/virtualhost1) but my second virtualhost is not reachable (ie: http://mylocalserver/virtualhost2) it is like the network port is closed but my firewall is accepting the flow. 
here is my configuration if you have any idea why my second virtualhost is not reachable ##NGINX.CONF## user www-data; worker_processes 2; events { worker_connections 19000; } worker_rlimit_nofile 40000; http { client_body_timeout 5s; client_header_timeout 5s; keepalive_timeout 75s; send_timeout 15s; gzip on; gzip_disable "msie6"; gzip_http_version 1.1; gzip_comp_level 5; gzip_min_length 256; gzip_proxied any; gzip_vary on; gzip_types application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component; client_max_body_size 100k; client_body_buffer_size 128k; client_body_in_single_buffer on; client_body_temp_path /var/nginx/client_body_temp; client_header_buffer_size 1k; large_client_header_buffers 4 4k; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #PROXY.CONF# proxy_redirect on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; #VIRTUALHOST 1 server { listen 80; server_name virtualhost1; } #VIRTUALHOST 2 server { listen 81; server_name virtualhost2; } Could you please help me regarding my issue, Thanks so much, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272606,272606#msg-272606 From nginx-forum at forum.nginx.org Fri Feb 24 12:33:19 2017 From: nginx-forum at forum.nginx.org (0liver) Date: Fri, 24 Feb 2017 07:33:19 -0500 Subject: How to cache image urls with query strings? 
Message-ID: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> We've recently started delivering image urls with query strings for cropping, like http://images-camping.info/CampsiteImages/116914_Large.jpg?width=453&height=302&mode=crop We've also successfully been using the NGINX cache for our images *before* adding the query strings. Unfortunately, with the query strings added, caching does not work anymore and all requests to the above URL are passed to the upstream server. You can see this by inspecting the HTTP response headers for the above URL: X-Cache-Status is always MISS. Can anybody point me to the necessary pieces of information to get caching for resources with query strings to work? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272608,272608#msg-272608 From nelsonmarcos at gmail.com Fri Feb 24 13:50:10 2017 From: nelsonmarcos at gmail.com (Nelson Marcos) Date: Fri, 24 Feb 2017 10:50:10 -0300 Subject: How to cache image urls with query strings? In-Reply-To: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> References: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Oliver! How is your proxy_cache_key configured? Best regards, NM 2017-02-24 9:33 GMT-03:00 0liver : > We've recently started delivering image urls with query strings for > cropping, like > > http://images-camping.info/CampsiteImages/116914_Large.jpg?width=453&height=302&mode=crop > > We've also successfully been using the NGINX cache for our images *before* > adding the query strings. > > Unfortunately, with the query strings added, caching does not work anymore > and all requests to the above URL are passed to the upstream server. You can > see > this by inspecting the HTTP response headers for the above URL: X-Cache-Status > is always MISS. > > Can anybody point me to the necessary pieces of information to get > caching for resources with query strings to work?
> > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272608,272608#msg-272608 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Feb 24 14:23:55 2017 From: nginx-forum at forum.nginx.org (0liver) Date: Fri, 24 Feb 2017 09:23:55 -0500 Subject: How to cache image urls with query strings? In-Reply-To: References: Message-ID: <3a392f98da14df9352469211c65bfb47.NginxMailingListEnglish@forum.nginx.org> We don't explicitly set the proxy_cache_key - we use the default value provided by NGINX. From the docs at http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key I understood that the query string is part of the cache key - I'm reaching out for help because it doesn't work as I'd expect. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272608,272611#msg-272611 From dewanggaba at xtremenitro.org Fri Feb 24 14:42:24 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 24 Feb 2017 21:42:24 +0700 Subject: How to cache image urls with query strings? In-Reply-To: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> References: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0a502e5f-2042-15ce-87cd-82718003f1c7@xtremenitro.org> Hello! On 02/24/2017 07:33 PM, 0liver wrote: > We've recently started delivering image urls with query strings for > cropping, like > > http://images-camping.info/CampsiteImages/116914_Large.jpg?width=453&height=302&mode=crop > Try adding: proxy_ignore_headers Cache-Control Expires; Ref: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers > We've also successfully been using the NGINX cache for our images *before* > adding the query strings.
> > Unfortunately, with the query strings added, caching does not work anymore > and all requests to the above URL are passed to the upstream server. You can see > this by inspecting the HTTP response headers for the above URL: X-Cache-Status > is always MISS. > > Can anybody point me to the necessary pieces of information to get > caching for resources with query strings to work? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272608,272608#msg-272608 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From telvinanto at gmail.com Fri Feb 24 18:03:50 2017 From: telvinanto at gmail.com (Anto) Date: Fri, 24 Feb 2017 23:33:50 +0530 Subject: No subject Message-ID: Hi Team, Would like to know how i can configure an Nginx LB with SSL termination? In addition to the above, would like to configure the LB with multiple httpd's behind a single IP. Can you guide me how i can do the same with proxy pass? Note: I have a single OHS server with 2 different httpd.conf files listening on two different ports. Need to configure the LB with SSL termination to redirect to the same servers. Need a step by step guideline help - thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Fri Feb 24 22:59:55 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 24 Feb 2017 23:59:55 +0100 Subject: [no subject] In-Reply-To: References: Message-ID: Hi Anto. Am 24-02-2017 19:03, schrieb Anto: > Hi Team, > > Would like to know how i can configure an Nginx LB with SSL termination? > In addition to the above, would like to configure the LB with multiple > httpd's behind a single IP. > Can you guide me how i can do the same with proxy pass? > > Note: I have a single OHS server with 2 different httpd.conf files > listening on two different ports. > Need to configure the LB with SSL termination to redirect to the same servers.
> > Need a step by step guideline help - thanks How about to start with this blog post. https://www.nginx.com/resources/admin-guide/nginx-https-upstreams/ Best regards Aleks From mdounin at mdounin.ru Sat Feb 25 23:16:43 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 26 Feb 2017 02:16:43 +0300 Subject: realip and remote_port In-Reply-To: References: Message-ID: <20170225231643.GA34777@mdounin.ru> Hello! On Wed, Feb 22, 2017 at 04:58:50PM -0500, xxdesmus wrote: > I can confirm I see the same thing. > > If you use set_real_ip_from, real_ip_header, etc then it wipes out the > $server_port value. The $server_port variable is not expected to be changed by the realip module. If you see it changed - you may want to provide more details. The original question you are answering to, as asked two years ago, was about $remote_port. The $remote_port is somewhat expected to be cleared as long as the address of the client as seen in the header configured only contains the address itself, and not the port. And this is usually the case. Starting with nginx 1.11.0, the realip module is able to recognize and use addresses with ports from the header, see CHANGES: *) Feature: the ngx_http_realip_module is now able to set the client port in addition to the address. But it is not something usually configured and/or available from normal proxies in the X-Forwarded-For header. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sat Feb 25 23:32:13 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 26 Feb 2017 02:32:13 +0300 Subject: Nginx proxy_pass HTTPS/SSL/HTTP2 keepalive In-Reply-To: <05a223bc7b7c47f8306beabc0ec01e50.NginxMailingListEnglish@forum.nginx.org> References: <05a223bc7b7c47f8306beabc0ec01e50.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170225233212.GB34777@mdounin.ru> Hello! 
On Fri, Feb 24, 2017 at 05:07:22AM -0500, c0nw0nk wrote: > So the Nginx documentation says this > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > > For HTTP, the proxy_http_version directive should be set to "1.1" and the > "Connection" header field should be cleared: > > upstream http_backend { > server 127.0.0.1:8080; > > keepalive 16; > } > > server { > ... > > location /http/ { > proxy_pass http://http_backend; > proxy_http_version 1.1; > proxy_set_header Connection ""; > ... > } > } > > > But does it also apply for HTTPS/HTTP2 because proxy_http_version gets set > to 1.1 ? HTTPS isn't really a separate protocol, but rather HTTP inside an SSL/TLS connection. In this context, anything about HTTP applies to HTTPS as well. HTTP/2 is a separate protocol (again, normally used inside an SSL/TLS connection). And this protocol is not supported by the proxy module. All connections with backends using proxy_pass use HTTP/1.0 or HTTP/1.1 depending on proxy_http_version. > Example : > > upstream https_backend { > server 127.0.0.1:443; > > keepalive 16; > } > > server { > listen 443 ssl http2; > > location /https/ { > proxy_pass https://https_backend; > proxy_http_version 1.1; > proxy_set_header Connection ""; In this example, nginx will accept connections on port 443 using SSL, with either HTTP/0.9, HTTP/1.0, HTTP/1.1, or HTTP/2 inside an SSL connection. Requests under the "/https/" prefix will be forwarded to 127.0.0.1:443 using SSL and HTTP/1.1. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sun Feb 26 05:00:02 2017 From: nginx-forum at forum.nginx.org (xxdesmus) Date: Sun, 26 Feb 2017 00:00:02 -0500 Subject: realip and remote_port In-Reply-To: <20170225231643.GA34777@mdounin.ru> References: <20170225231643.GA34777@mdounin.ru> Message-ID: <8962dd35fd5b1506badae0946daff1c4.NginxMailingListEnglish@forum.nginx.org> Sorry, I meant $remote_port -- my apologies.
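[Archive note] To illustrate the realip distinction discussed in this thread, here is a minimal sketch (the trusted proxy address 192.0.2.1 is a placeholder). Once real_ip_header takes effect, $remote_addr holds the client address taken from the header, while $realip_remote_addr (available since nginx 1.9.7) keeps the address of the directly connected peer:

```nginx
# Only trust X-Forwarded-For on connections arriving from the
# front proxy; $remote_addr is then replaced with the client address.
set_real_ip_from  192.0.2.1;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

log_format realip '$remote_addr (peer: $realip_remote_addr) '
                  '"$request" $status';
```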
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,261668,272619#msg-272619 From aldernetwork at gmail.com Sun Feb 26 05:24:51 2017 From: aldernetwork at gmail.com (Alder Netw) Date: Sat, 25 Feb 2017 21:24:51 -0800 Subject: Nginx with an ICAP-like front-end Message-ID: Hi, I want to add an ICAP-like front-end validation server V with nginx. The user scenario is like this: The client will usually access the real app server R via nginx, but with a validation server V, the client request will first pass to V, V will do certain validation, and upon success the request will be forwarded to R and R will return directly to clients; upon failure, the request will be denied. Is there any easy nginx config which can achieve this? Thanks, - Alder -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratyush at hostindya.com Sun Feb 26 05:56:39 2017 From: pratyush at hostindya.com (Pratyush Kumar) Date: Sun, 26 Feb 2017 11:26:39 +0530 Subject: One NGINX server to 2 backend servers In-Reply-To: Message-ID: <81022ab8-41c6-4d4d-8cad-20e9136d0745@email.android.com> An HTML attachment was scrubbed... URL: From hemelaar at desikkel.nl Sun Feb 26 09:43:53 2017 From: hemelaar at desikkel.nl (Jean-Paul Hemelaar) Date: Sun, 26 Feb 2017 10:43:53 +0100 Subject: Using proxy_cache_background_update Message-ID: Hi all, I tested the new proxy_cache_background_update function to serve stale content while fetching an update in the background.
I ran into the following issue: - PHP application running on www.example.com - Root document lives on /index.php As soon as the cache has expired: - A client requests http://www.example.com/ - Nginx returns the stale response - In the background Nginx will fetch http://www.mybackend.com/index.html (index.html instead of index.php or just /) - The backend server returns a 404 (which is normal) - The root document remains in a stale state as Nginx is unable to fetch it properly As a workaround I included "rewrite ^/index.html$ / break;" to rewrite the /index.html call to a simple / for the backend server. This works, but is not ideal. Is there a better way to tell Nginx to just fetch "/"? Thanks, Jean-Paul Hemelaar -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Feb 26 10:12:53 2017 From: nginx-forum at forum.nginx.org (jeanpaul) Date: Sun, 26 Feb 2017 05:12:53 -0500 Subject: How to cache image urls with query strings? In-Reply-To: <3a392f98da14df9352469211c65bfb47.NginxMailingListEnglish@forum.nginx.org> References: <3a392f98da14df9352469211c65bfb47.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, The proxy_cache_key uses request parameters by default. As stated in http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key it uses $scheme$proxy_host$request_uri by default. The $request_uri does contain the request parameters: http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_uri $request_uri full original request URI (with arguments) So a way to deal with this is using a self-made cache key and stripping the arguments with a regex: set $cacheuri $request_uri; if ($cacheuri ~ /example/images/([^\?]*)) { set $cacheuri /example/images/$1; } proxy_cache_key $cacheuri; Note: the regex is untested, but just to give you an idea.
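[Archive note] The regex above can be avoided entirely, because $uri, unlike $request_uri, already excludes the query string. And for the original poster's case, where each width/height/mode crop is a distinct image, the relevant arguments should stay in the key. Both variants below are untested sketches, not from the thread:

```nginx
# One cache entry per path, ignoring all query arguments:
proxy_cache_key $scheme$proxy_host$uri;

# Or keep only the arguments that select a variant (one entry per crop):
proxy_cache_key $scheme$proxy_host$uri-$arg_width-$arg_height-$arg_mode;
```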
JP Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272608,272623#msg-272623 From mdounin at mdounin.ru Sun Feb 26 13:14:33 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 26 Feb 2017 16:14:33 +0300 Subject: Using proxy_cache_background_update In-Reply-To: References: Message-ID: <20170226131433.GF34777@mdounin.ru> Hello! On Sun, Feb 26, 2017 at 10:43:53AM +0100, Jean-Paul Hemelaar wrote: > Hi all, > > I tested the new proxy_cache_background_update function to serve stale > content while fetching an update in the background. > > I ran into the following issue: > - PHP application running on www.example.com > - Root document lives on /index.php > > As soon as the cache has expired: > - A client requests http://www.example.com/ > - Nginx returns the stale response > - In the background Nginx will fetch > http://www.mybackend.com/index.html (index.html > instead of index.php or just /) > - The backend server returns a 404 (which is normal) > - The root document remains in stale state as Nginx is unable to fetch it > properly > > As a workaround I included "rewrite ^/index.html$ / break;" to rewrite the > /index.html call to a simple / for the backend server. > This works, but is not ideal. > > Is there a better way to tell Nginx to just fetch "/"? Please show your configuration. In the background nginx is expected to fetch the same resource that was originally requested. It uses a separate subrequest though, so it might end up requesting a different resource when using some non-trivial configurations where a resource requested depends on additional factors which are different in the subrequest. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Sun Feb 26 14:00:07 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Sun, 26 Feb 2017 09:00:07 -0500 Subject: Nginx limit_conn and limit_req for static .js (javascript) .css (stylesheets) images Message-ID: So in the documentation, and from what I see online, everyone is limiting requests to prevent flooding on dynamic pages and video streams etc. But when you visit a HTML page, the HTML page loads a lot of different elements like .css .js .png .ico .jpg files. To prevent those elements also being flooded by bots or malicious traffic, I was going to do the following. #In http block limit_conn_zone $binary_remote_addr zone=addr1:100m; limit_req_zone $binary_remote_addr zone=two2:100m rate=100r/s; #style sheets javascript etc #end http block #in server location block location ~* \.(ico|png|jpg|jpeg|gif|swf|css|js)$ { limit_conn addr1 10; #Limit open connections from same ip limit_req zone=two2 burst=5; #Limit max number of requests from same ip expires max; } #end server location block Because on my sites I know that, all together, a single HTML page request will never trigger more than 100 of those static elements, I set the limit_req rate as "rate=100r/s;" for 100 requests a second. Does anyone have any recommended limits for these element types, in case my value is perhaps too high or too low? I set it according to roughly how many media files I know can get requested each time a HTML page gets rendered.
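If the intent is "one page load may legitimately fetch up to ~100 assets at once", an alternative sketch is to keep a lower sustained rate and move the page-load allowance into the burst (the numbers here are illustrative, not recommendations):

```nginx
# in the http block
limit_req_zone $binary_remote_addr zone=static_assets:10m rate=30r/s;

# in the server block
location ~* \.(ico|png|jpg|jpeg|gif|swf|css|js)$ {
    # nodelay serves the burst immediately instead of queueing it, so a
    # normal page load is not slowed down; sustained traffic above 30 r/s
    # per client IP is rejected once the 100-request burst is exhausted
    limit_req zone=static_assets burst=100 nodelay;
    expires max;
}
```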
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272628,272628#msg-272628 From nginx-forum at forum.nginx.org Sun Feb 26 18:08:02 2017 From: nginx-forum at forum.nginx.org (jeanpaul) Date: Sun, 26 Feb 2017 13:08:02 -0500 Subject: Using proxy_cache_background_update In-Reply-To: <20170226131433.GF34777@mdounin.ru> References: <20170226131433.GF34777@mdounin.ru> Message-ID: <81c6bcb2c8cf5b2ff667d0573629dc39.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, I stripped down my configuration and removed 'unneeded' parts to reproduce. I'm able to reproduce it with the following settings: location / { # Added to mitigate the issue. Removed for testing #rewrite ^/index.html$ / break; proxy_pass http://backends; proxy_next_upstream error timeout invalid_header; proxy_buffering on; proxy_connect_timeout 1; proxy_read_timeout 30; proxy_cache_background_update on; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_cache STATIC; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding ""; set $no_cache ""; proxy_ignore_headers Cache-Control Expires Vary; # Removing the if construction and leaving the "expires" in place solves the issue! 
if ($no_cache = "") { expires 1s; } proxy_cache_valid 200 3s; } To test the call I used curl and wget: curl --verbose --header "host: www.example.com" -o /dev/null http://1.2.3.4/ wget --header "host: www.example.com" --output-document=/dev/null http://1.2.3.4/ The Apache logs show: example.com 1.2.3.4 127.0.0.1 - - [26/Feb/2017:18:49:17 +0100] "GET /index.html HTTP/1.1" 404 43193 "-" "curl/7.26.0" example.com 1.2.3.4 127.0.0.1 - - [26/Feb/2017:18:53:22 +0100] "GET /index.html HTTP/1.1" 404 43194 "-" "Wget/1.13.4 (linux-gnu)" I captured traffic using tcpdump and Wireshark shows the following: Original request with curl GET / HTTP/1.1 User-Agent: curl/7.26.0 Accept: */* host: www.example.com Resulting request from Nginx: GET /index.html HTTP/1.1 Host: www.example.com X-Real-IP: 1.2.3.4 X-Forwarded-For: 1.2.3.4 User-Agent: curl/7.26.0 Accept: */* HTTP/1.1 404 Not Found Date: Sun, 26 Feb 2017 18:04:24 GMT Server: Apache Thanks in advance, JP Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272622,272629#msg-272629 From aldernetwork at gmail.com Sun Feb 26 22:11:08 2017 From: aldernetwork at gmail.com (Alder Netw) Date: Sun, 26 Feb 2017 14:11:08 -0800 Subject: Nginx with a ICAP-like front-end In-Reply-To: References: Message-ID: Or is there any existing module that can be adapted to achieve this? Appreciate if someone can shed some light. Thx, - Alder On Sat, Feb 25, 2017 at 9:24 PM, Alder Netw wrote: > Hi I want to add an ICAP-like front-end validation server V with nginx. > The user scenario is like this: > > The client will usually access the real app server R via nginx, but with > a validation server V, the client request will first pass to V, V will dp > certain > validation and upon sucess the request will be forwarded to R and R will > return directly to clients; Upon failure, the request will be denied. > > Is there any easy nginx config which can achieve this? 
Thanks, > > - Alder > -------------- next part -------------- An HTML attachment was scrubbed... URL: From telvinanto at gmail.com Sun Feb 26 22:32:53 2017 From: telvinanto at gmail.com (Anto) Date: Mon, 27 Feb 2017 04:02:53 +0530 Subject: [no subject] In-Reply-To: References: Message-ID: Hi Aleksandar, Thank you , my requirement is i need LB to redirect to same OHS server where i have multiple httpd server's running. Regards "" Anto Telvin Mathew "" On Sat, Feb 25, 2017 at 4:29 AM, Aleksandar Lazic wrote: > Hi Anto. > > > Am 24-02-2017 19:03, schrieb Anto: > > Hi Team , >> >> Would like to know how i can configure Ngnix LB with SSL termination ? >> In addition to the above would like to configure LB with multiple httpd's >> with single IP. >> Can you guide me how i can do the same with proxy pass ? >> >> Note : I have a single OHS server with 2 different httpd.conf files >> listening to two different ports. >> Need to configure LB with SSL termination to redirect to same servers . >> >> Need a step by step guideline help - thanks >> > > How about to start with this blog post. > > https://www.nginx.com/resources/admin-guide/nginx-https-upstreams/ > > Best regards > Aleks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 27 03:02:07 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Feb 2017 06:02:07 +0300 Subject: Using proxy_cache_background_update In-Reply-To: <81c6bcb2c8cf5b2ff667d0573629dc39.NginxMailingListEnglish@forum.nginx.org> References: <20170226131433.GF34777@mdounin.ru> <81c6bcb2c8cf5b2ff667d0573629dc39.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170227030207.GH34777@mdounin.ru> Hello! On Sun, Feb 26, 2017 at 01:08:02PM -0500, jeanpaul wrote: > I stripped down my configuration and removed 'unneeded' parts to reproduce. > > I'm able to reproduce it with the following settings: > > location / { > # Added to mitigate the issue. 
Removed for testing > #rewrite ^/index.html$ / break; > > proxy_pass http://backends; [...] > # Removing the if construction and leaving the "expires" in place > solves the issue! > if ($no_cache = "") { > expires 1s; > } Ok, thanks for tracing this, looks clear enough now. Please try the following patch: diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -2571,6 +2571,7 @@ ngx_http_subrequest(ngx_http_request_t * sr->method_name = r->method_name; sr->loc_conf = r->loc_conf; sr->valid_location = r->valid_location; + sr->content_handler = r->content_handler; sr->phase_handler = r->phase_handler; sr->write_event_handler = ngx_http_core_run_phases; -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Feb 27 06:28:21 2017 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Mon, 27 Feb 2017 01:28:21 -0500 Subject: nginx stopping abruptly at fix time (2:00 am) repeatedly on Cent OS 7.2 Message-ID: <81de53faefb0de80a9d6c3ac0294b6c8.NginxMailingListEnglish@forum.nginx.org> Hi , Please note that we are using nginx v 1.10.2 and on one of our webserver (centos 7.2) we are observing below error and sudden stopping of nginx service repeatedly at fix time i.e. at 2:00 am. 
Below are error lines for your reference : 2017/02/26 02:00:01 [alert] 57550#57550: *131331605 open socket #97 left in connection 453 2017/02/26 02:00:01 [alert] 57550#57550: *131334225 open socket #126 left in connection 510 2017/02/26 02:00:01 [alert] 57550#57550: *131334479 open socket #160 left in connection 532 2017/02/26 02:00:01 [alert] 57550#57550: *131334797 open socket #121 left in connection 542 2017/02/26 02:00:01 [alert] 57550#57550: *131334478 open socket #159 left in connection 552 2017/02/26 02:00:01 [alert] 57550#57550: *131334802 open socket #194 left in connection 633 2017/02/26 02:00:01 [alert] 57570#57570: aborting 2017/02/26 02:00:01 [alert] 57553#57553: aborting 2017/02/26 02:00:01 [alert] 57539#57539: aborting 2017/02/26 02:00:01 [alert] 57550#57550: aborting Also find below nginx conf files for your reference : worker_processes auto; events { worker_connections 4096; use epoll; multi_accept on; } worker_rlimit_nofile 100001; http { include mime.types; default_type video/mp4; proxy_buffering on; proxy_buffer_size 4096k; proxy_buffers 5 4096k; sendfile on; keepalive_timeout 30; tcp_nodelay on; tcp_nopush on; reset_timedout_connection on; gzip off; server_tokens off; log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] ' '$upstream_cache_status ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" $request_time' Also note that we have similar servers with exact same nginx config running but those servers are not giving any such errors. Also we are not running any script or cron at this point of time. Kindly help us to resolve this issue. Also let me know in case any other details are required from my end. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272633,272633#msg-272633 From rajeev.sebastian at gmail.com Mon Feb 27 07:48:06 2017 From: rajeev.sebastian at gmail.com (Rajeev J Sebastian) Date: Mon, 27 Feb 2017 13:18:06 +0530 Subject: Nginx with a ICAP-like front-end In-Reply-To: References: Message-ID: Not sure if this is foolproof ... but maybe you can use the error_page fallback by responding with a special status_code. http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page location / { proxy_pass http://validator; error_page 510 = @success; } location @success { proxy_pass http://realbackend; } On Mon, Feb 27, 2017 at 3:41 AM, Alder Netw wrote: > Or is there any existing module that can be adapted to achieve this? > Appreciate if someone can shed some light. Thx, > - Alder > > On Sat, Feb 25, 2017 at 9:24 PM, Alder Netw > wrote: > >> Hi I want to add an ICAP-like front-end validation server V with nginx. >> The user scenario is like this: >> >> The client will usually access the real app server R via nginx, but with >> a validation server V, the client request will first pass to V, V will dp >> certain >> validation and upon sucess the request will be forwarded to R and R will >> return directly to clients; Upon failure, the request will be denied. >> >> Is there any easy nginx config which can achieve this? Thanks, >> >> - Alder >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Mon Feb 27 10:53:30 2017 From: nginx-forum at forum.nginx.org (jeanpaul) Date: Mon, 27 Feb 2017 05:53:30 -0500 Subject: Using proxy_cache_background_update In-Reply-To: <20170227030207.GH34777@mdounin.ru> References: <20170227030207.GH34777@mdounin.ru> Message-ID: <6421e0f8cbcd382239ed26d80808a081.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, I verified the patch and it seems to work! Thanks for your prompt solution on this. JP Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272622,272638#msg-272638 From reallfqq-nginx at yahoo.fr Mon Feb 27 11:39:44 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 27 Feb 2017 12:39:44 +0100 Subject: nginx stopping abruptly at fix time (2:00 am) repeatedly on Cent OS 7.2 In-Reply-To: <81de53faefb0de80a9d6c3ac0294b6c8.NginxMailingListEnglish@forum.nginx.org> References: <81de53faefb0de80a9d6c3ac0294b6c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: You did not provide any server block configuration. The configuration snippet you provided is incomplete. Old tickets suggest a link to SPDY, then HTTP/2 options: https://trac.nginx.org/nginx/ticket/626. You might want to reproduce the problem on a configuration as minimalist as possible, using the latest version of your branch (v1.10.3 atm). --- *B. R.* On Mon, Feb 27, 2017 at 7:28 AM, omkar_jadhav_20 < nginx-forum at forum.nginx.org> wrote: > Hi , > > Please note that we are using nginx v 1.10.2 and on one of our webserver > (centos 7.2) we are observing below error and sudden stopping of nginx > service repeatedly at fix time i.e. at 2:00 am. 
Below are error lines for > your reference : > > 2017/02/26 02:00:01 [alert] 57550#57550: *131331605 open socket #97 left in > connection 453 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334225 open socket #126 left > in > connection 510 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334479 open socket #160 left > in > connection 532 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334797 open socket #121 left > in > connection 542 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334478 open socket #159 left > in > connection 552 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334802 open socket #194 left > in > connection 633 > 2017/02/26 02:00:01 [alert] 57570#57570: aborting > 2017/02/26 02:00:01 [alert] 57553#57553: aborting > 2017/02/26 02:00:01 [alert] 57539#57539: aborting > 2017/02/26 02:00:01 [alert] 57550#57550: aborting > > Also find below nginx conf files for your reference : > > worker_processes auto; > events { > worker_connections 4096; > use epoll; > multi_accept on; > } > worker_rlimit_nofile 100001; > > http { > include mime.types; > default_type video/mp4; > proxy_buffering on; > proxy_buffer_size 4096k; > proxy_buffers 5 4096k; > sendfile on; > keepalive_timeout 30; > tcp_nodelay on; > tcp_nopush on; > reset_timedout_connection on; > gzip off; > server_tokens off; > log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] ' > '$upstream_cache_status ' '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent" $request_time' > > Also note that we have similar servers with exact same nginx config running > but those servers are not giving any such errors. Also we are not running > any script or cron at this point of time. > Kindly help us to resolve this issue. Also let me know in case any other > details are required from my end. > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,272633,272633#msg-272633 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Mon Feb 27 11:51:50 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 27 Feb 2017 17:21:50 +0530 Subject: nginx stopping abruptly at fix time (2:00 am) repeatedly on Cent OS 7.2 In-Reply-To: <81de53faefb0de80a9d6c3ac0294b6c8.NginxMailingListEnglish@forum.nginx.org> References: <81de53faefb0de80a9d6c3ac0294b6c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: What does the error log say when it is stopping? On Mon, Feb 27, 2017 at 11:58 AM, omkar_jadhav_20 < nginx-forum at forum.nginx.org> wrote: > Hi , > > Please note that we are using nginx v 1.10.2 and on one of our webserver > (centos 7.2) we are observing below error and sudden stopping of nginx > service repeatedly at fix time i.e. at 2:00 am. 
Below are error lines for > your reference : > > 2017/02/26 02:00:01 [alert] 57550#57550: *131331605 open socket #97 left in > connection 453 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334225 open socket #126 left > in > connection 510 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334479 open socket #160 left > in > connection 532 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334797 open socket #121 left > in > connection 542 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334478 open socket #159 left > in > connection 552 > 2017/02/26 02:00:01 [alert] 57550#57550: *131334802 open socket #194 left > in > connection 633 > 2017/02/26 02:00:01 [alert] 57570#57570: aborting > 2017/02/26 02:00:01 [alert] 57553#57553: aborting > 2017/02/26 02:00:01 [alert] 57539#57539: aborting > 2017/02/26 02:00:01 [alert] 57550#57550: aborting > > Also find below nginx conf files for your reference : > > worker_processes auto; > events { > worker_connections 4096; > use epoll; > multi_accept on; > } > worker_rlimit_nofile 100001; > > http { > include mime.types; > default_type video/mp4; > proxy_buffering on; > proxy_buffer_size 4096k; > proxy_buffers 5 4096k; > sendfile on; > keepalive_timeout 30; > tcp_nodelay on; > tcp_nopush on; > reset_timedout_connection on; > gzip off; > server_tokens off; > log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] ' > '$upstream_cache_status ' '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent" $request_time' > > Also note that we have similar servers with exact same nginx config running > but those servers are not giving any such errors. Also we are not running > any script or cron at this point of time. > Kindly help us to resolve this issue. Also let me know in case any other > details are required from my end. > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,272633,272633#msg-272633 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Feb 27 14:26:29 2017 From: nginx-forum at forum.nginx.org (p0lak) Date: Mon, 27 Feb 2017 09:26:29 -0500 Subject: One NGINX server to 2 backend servers In-Reply-To: <81022ab8-41c6-4d4d-8cad-20e9136d0745@email.android.com> References: <81022ab8-41c6-4d4d-8cad-20e9136d0745@email.android.com> Message-ID: <1bc81cb08408502c035dcffd3468e0cb.NginxMailingListEnglish@forum.nginx.org> There is nothing in your reply Pratyush :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272606,272643#msg-272643 From pratyush at hostindya.com Mon Feb 27 14:49:43 2017 From: pratyush at hostindya.com (Pratyush Kumar) Date: Mon, 27 Feb 2017 20:19:43 +0530 Subject: One NGINX server to 2 backend servers Message-ID: An HTML attachment was scrubbed... URL: From rajeev.sebastian at gmail.com Mon Feb 27 16:58:46 2017 From: rajeev.sebastian at gmail.com (Rajeev J Sebastian) Date: Mon, 27 Feb 2017 22:28:46 +0530 Subject: One NGINX server to 2 backend servers In-Reply-To: References: Message-ID: I may be wrong, but where is your proxy_pass statement? 
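For reference, the configuration quoted below defines its two server blocks with only `listen` and `server_name`; each also needs a `location` with a `proxy_pass` for requests to reach a backend at all. A minimal sketch, assuming the backend URLs named in the original post:

```nginx
server {
    listen 80;
    location / {
        # the trailing slash maps a request for /foo
        # to /virtualhost1/foo on the backend
        proxy_pass http://mylocalserver/virtualhost1/;
    }
}

server {
    listen 81;
    location / {
        proxy_pass http://mylocalserver/virtualhost2/;
    }
}
```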
On Fri, Feb 24, 2017 at 4:47 PM, p0lak wrote: > Hello everybody, > > I have installed a dedicated server for NGINX based on Ubuntu Core > > I want to define this behavior > > NGINX Server (Public IP) >>> listening on port 80 >>> redirect to a LAN > Server on port 80 (http://mylocalserver/virtualhost1) > NGINX Server (Public IP) >>> listening on port 81 >>> redirect to a LAN > Server on port 80 (http://mylocalserver/virtualhost2) > > My local virtualhost on my backend server is reachable (ie: > http://mylocalserver/virtualhost1) > but my second virtualhost is not reachable (ie: > http://mylocalserver/virtualhost2) > > it is like the network port is closed but my firewall is accepting the > flow. > > here is my configuration if you have any idea why my second virtualhost is > not reachable > > ##NGINX.CONF## > > user www-data; > worker_processes 2; > events { > worker_connections 19000; > } > worker_rlimit_nofile 40000; > http { > > client_body_timeout 5s; > client_header_timeout 5s; > keepalive_timeout 75s; > send_timeout 15s; > gzip on; > gzip_disable "msie6"; > gzip_http_version 1.1; > gzip_comp_level 5; > gzip_min_length 256; > gzip_proxied any; > gzip_vary on; > gzip_types > application/atom+xml > application/javascript > application/json > application/rss+xml > application/vnd.ms-fontobject > application/x-font-ttf > application/x-web-app-manifest+json > application/xhtml+xml > application/xml > font/opentype > image/svg+xml > image/x-icon > text/css > text/plain > text/x-component; > > client_max_body_size 100k; > client_body_buffer_size 128k; > client_body_in_single_buffer on; > client_body_temp_path /var/nginx/client_body_temp; > client_header_buffer_size 1k; > large_client_header_buffers 4 4k; > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > } > > #PROXY.CONF# > proxy_redirect on; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > 
proxy_hide_header X-Powered-By; > proxy_intercept_errors on; > proxy_buffering on; > > > proxy_cache_key "$scheme://$host$request_uri"; > proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m > inactive=7d > max_size=700m; > > > #VIRTUALHOST 1 > server { > listen 80; > server_name virtualhost1; > } > > #VIRTUALHOST 2 > server { > listen 81; > server_name virtualhost2; > } > > > Could you please help me regarding my issue, > > Thanks so much, > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272606,272606#msg-272606 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldernetwork at gmail.com Mon Feb 27 17:00:13 2017 From: aldernetwork at gmail.com (Alder Netw) Date: Mon, 27 Feb 2017 09:00:13 -0800 Subject: Nginx with a ICAP-like front-end In-Reply-To: References: Message-ID: Thanks Rajeev for the recipe, I was looking into using subrequest but found subrequest only works for filter module. This looks much simpler! The only concern is error_page is only for GET/HEAD not for POST? On Sun, Feb 26, 2017 at 11:48 PM, Rajeev J Sebastian < rajeev.sebastian at gmail.com> wrote: > Not sure if this is foolproof ... but maybe you can use the error_page > fallback by responding with a special status_code. > > http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page > > location / { > > proxy_pass http://validator; > error_page 510 = @success; > } > > location @success { > proxy_pass http://realbackend; > } > > > On Mon, Feb 27, 2017 at 3:41 AM, Alder Netw > wrote: > >> Or is there any existing module that can be adapted to achieve this? >> Appreciate if someone can shed some light. Thx, >> - Alder >> >> On Sat, Feb 25, 2017 at 9:24 PM, Alder Netw >> wrote: >> >>> Hi I want to add an ICAP-like front-end validation server V with nginx. 
>>> The user scenario is like this: >>> >>> The client will usually access the real app server R via nginx, but with >>> a validation server V, the client request will first pass to V, V will >>> dp certain >>> validation and upon sucess the request will be forwarded to R and R will >>> return directly to clients; Upon failure, the request will be denied. >>> >>> Is there any easy nginx config which can achieve this? Thanks, >>> >>> - Alder >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajeev.sebastian at gmail.com Mon Feb 27 17:11:30 2017 From: rajeev.sebastian at gmail.com (Rajeev J Sebastian) Date: Mon, 27 Feb 2017 22:41:30 +0530 Subject: Nginx with a ICAP-like front-end In-Reply-To: References: Message-ID: >From the docs it seems that this will work for all requests EXCEPT that, the @success fallback request will always be GET. On Mon, Feb 27, 2017 at 10:30 PM, Alder Netw wrote: > Thanks Rajeev for the recipe, I was looking into using subrequest but > found subrequest > only works for filter module. This looks much simpler! The only concern is > error_page > is only for GET/HEAD not for POST? > > > > On Sun, Feb 26, 2017 at 11:48 PM, Rajeev J Sebastian < > rajeev.sebastian at gmail.com> wrote: > >> Not sure if this is foolproof ... but maybe you can use the error_page >> fallback by responding with a special status_code. 
>> >> http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page >> >> location / { >> >> proxy_pass http://validator; >> error_page 510 = @success; >> } >> >> location @success { >> proxy_pass http://realbackend; >> } >> >> >> On Mon, Feb 27, 2017 at 3:41 AM, Alder Netw >> wrote: >> >>> Or is there any existing module that can be adapted to achieve this? >>> Appreciate if someone can shed some light. Thx, >>> - Alder >>> >>> On Sat, Feb 25, 2017 at 9:24 PM, Alder Netw >>> wrote: >>> >>>> Hi I want to add an ICAP-like front-end validation server V with nginx. >>>> The user scenario is like this: >>>> >>>> The client will usually access the real app server R via nginx, but >>>> with >>>> a validation server V, the client request will first pass to V, V will >>>> dp certain >>>> validation and upon sucess the request will be forwarded to R and R >>>> will >>>> return directly to clients; Upon failure, the request will be denied. >>>> >>>> Is there any easy nginx config which can achieve this? Thanks, >>>> >>>> - Alder >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajeev.sebastian at gmail.com Mon Feb 27 17:14:58 2017 From: rajeev.sebastian at gmail.com (Rajeev J Sebastian) Date: Mon, 27 Feb 2017 22:44:58 +0530 Subject: Nginx with a ICAP-like front-end In-Reply-To: References: Message-ID: Adler, maybe you should try X-Accel-Redirect to avoid this conversion of POST to GET? 
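For completeness: the stock ngx_http_auth_request_module (built with --with-http_auth_request_module) implements this validate-then-forward pattern without the error_page trick. The validator is consulted via an internal subrequest; on a 2xx the original request, method included, is proxied to the real backend, while 401/403 deny it. A sketch, reusing the upstream names from earlier in the thread:

```nginx
location / {
    auth_request /validate;        # 2xx = allow, 401/403 = deny
    proxy_pass http://realbackend;
}

location = /validate {
    internal;
    proxy_pass http://validator;
    proxy_pass_request_body off;   # the validator sees headers only
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```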
On Mon, Feb 27, 2017 at 10:41 PM, Rajeev J Sebastian < rajeev.sebastian at gmail.com> wrote: > From the docs it seems that this will work for all requests EXCEPT that, > the @success fallback request will always be GET. > > On Mon, Feb 27, 2017 at 10:30 PM, Alder Netw > wrote: > >> Thanks Rajeev for the recipe, I was looking into using subrequest but >> found subrequest >> only works for filter module. This looks much simpler! The only concern >> is error_page >> is only for GET/HEAD not for POST? >> >> >> >> On Sun, Feb 26, 2017 at 11:48 PM, Rajeev J Sebastian < >> rajeev.sebastian at gmail.com> wrote: >> >>> Not sure if this is foolproof ... but maybe you can use the error_page >>> fallback by responding with a special status_code. >>> >>> http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page >>> >>> location / { >>> >>> proxy_pass http://validator; >>> error_page 510 = @success; >>> } >>> >>> location @success { >>> proxy_pass http://realbackend; >>> } >>> >>> >>> On Mon, Feb 27, 2017 at 3:41 AM, Alder Netw >>> wrote: >>> >>>> Or is there any existing module that can be adapted to achieve this? >>>> Appreciate if someone can shed some light. Thx, >>>> - Alder >>>> >>>> On Sat, Feb 25, 2017 at 9:24 PM, Alder Netw >>>> wrote: >>>> >>>>> Hi I want to add an ICAP-like front-end validation server V with >>>>> nginx. >>>>> The user scenario is like this: >>>>> >>>>> The client will usually access the real app server R via nginx, but >>>>> with >>>>> a validation server V, the client request will first pass to V, V will >>>>> dp certain >>>>> validation and upon sucess the request will be forwarded to R and R >>>>> will >>>>> return directly to clients; Upon failure, the request will be denied. >>>>> >>>>> Is there any easy nginx config which can achieve this? 
Thanks, >>>>> >>>>> - Alder >>>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Feb 27 22:51:31 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2017 01:51:31 +0300 Subject: Using proxy_cache_background_update In-Reply-To: <6421e0f8cbcd382239ed26d80808a081.NginxMailingListEnglish@forum.nginx.org> References: <20170227030207.GH34777@mdounin.ru> <6421e0f8cbcd382239ed26d80808a081.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170227225131.GJ34777@mdounin.ru> Hello! On Mon, Feb 27, 2017 at 05:53:30AM -0500, jeanpaul wrote: > I verified the patch and it seems to work! > Thanks for your prompt solution on this. Committed, thanks for testing. http://hg.nginx.org/nginx/rev/8b7fd958c59f -- Maxim Dounin http://nginx.org/ From minoru.nishikubo at lyz.jp Tue Feb 28 00:58:05 2017 From: minoru.nishikubo at lyz.jp (Nishikubo Minoru) Date: Tue, 28 Feb 2017 09:58:05 +0900 Subject: set_real_ip_from,real_ip_header directive in ngx_http_realip_module Message-ID: Hello, I tried to limit an IPv4 Address with ngx_http_limit_req module and ngx_realip_module via Akamai would send True-Client-IP headers. According to the document ngx_http_readip_module( http://nginx.org/en/docs/http/ngx_http_realip_module.html), we can write set_real_ip_from and real-_ip_header directive in http, server, location context. 
But in the above case (the ngx_http_limit_req key is defined in http context), the ngx_http_realip_module directives must take effect before the key (i.e. the IPv4 address replaced by ngx_http_realip_module) is evaluated by the limit_req_zone directive in http context. I think it would be better if the documentation explained that the ngx_http_realip_module directives are configured before the ngx_http_limit_req configuration. Our environment is Amazon Linux on AWS EC2, and the nginx package version is 1.10.1. If you already plan to improve the documentation, please let me know and I will check it out. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Feb 28 09:41:11 2017 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Tue, 28 Feb 2017 04:41:11 -0500 Subject: nginx stopping abruptly at fix time (2:00 am) repeatedly on Cent OS 7.2 In-Reply-To: References: Message-ID: Hi Anoop and B.R., after thorough troubleshooting we have found out that it was due to the OS. Thanks for your concern; please disregard this issue. It has been resolved. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272633,272656#msg-272656 From nginx-forum at forum.nginx.org Tue Feb 28 11:59:00 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Tue, 28 Feb 2017 06:59:00 -0500 Subject: Nginx using variables / split_clients is very slow Message-ID: <2ac6b3cdf05d843a4e5a9e18f88b3853.NginxMailingListEnglish@forum.nginx.org> Hi, I want to use nginx as a reverse proxy for an A/B testing scenario. nginx should route to two versions of one backend service. The two versions provide this service via different URL paths. Example: * x% of all requests to https://edgeservice/myservice should be routed to https://1.2.3.4/myservice, * the other 100-x% should be routed to https://4.3.2.1/myservice/withAnotherPath. For that I wanted to use the split_clients directive, which perfectly matches our requirements.
We have a general nginx configuration that reaches a high throughput (at least 2.000 requests / sec) - unless we specify the routing target via nginx variables. So, when specifying the routing target "hard coded" in the proxy_pass directive (variant 0) or when using the upstream directive (variant 1), nginx routes very fast (at least 2.000 req/sec). Once we use the split_clients directive to specify the routing target (variant 2) or we set a variable statically (variant 3), nginx is very slow and reaches only 20-50 requests / sec. All other config parameters are the same for all variants. We did some research (nginx config reference, Google, this forum...) to find a solution for this problem. Since we have not found any approach, I wanted to ask the mailing list whether you have any ideas. Is there a solution to increase performance when using split_clients so that we can reach at least 1.000 requests / sec? Or did we already reach the maximum performance for this scenario? It would be great if we could use split_clients, since it makes us very flexible in defining routing rules and we can route to backend services with different URL paths. Kind Regards Lars nginx 1.10.3 running on Ubuntu trusty nginx.conf: ... http { ...
# variant 1 upstream backend1 { ip_hash; server 1.2.3.4; } # variant 2 split_clients $remote_addr $backend2 { 50% https://1.2.3.4/myservice/; 50% https://4.3.2.1/myservice/withAnotherPath; } server { listen 443 ssl backlog=163840; # variant 3 set $backend3 https://1.2.3.4/myservice; location /myservice { # V0) this is fast proxy_pass https://1.2.3.4/myservice; # V1) this is fast proxy_pass https://backend1; # V2) this is slow proxy_pass $backend2; # V3) this is slow proxy_pass $backend3; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272657,272657#msg-272657 From mdounin at mdounin.ru Tue Feb 28 12:46:53 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2017 15:46:53 +0300 Subject: Nginx using variables / split_clients is very slow In-Reply-To: <2ac6b3cdf05d843a4e5a9e18f88b3853.NginxMailingListEnglish@forum.nginx.org> References: <2ac6b3cdf05d843a4e5a9e18f88b3853.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170228124653.GK34777@mdounin.ru> Hello! On Tue, Feb 28, 2017 at 06:59:00AM -0500, larsg wrote: > I want to use nginx as reverse proxy for an A/B testing scenario. > The nginx should route to two versions of one backend service. The two > versions provides this service via different URL paths. > Example: > * x% of all requests https://edgeservice/myservice should be routed to > https://1.2.3.4/myservice, > * the other 100-x% should be routed to > https://4.3.2.1/myservice/withAnothePath. > For that I wanted to use the split_client directive that perfectly matches > our requirements. > > We have a general nginx configuration that reaches a high throughput (at > least 2.000 requests / sec) - unless we specify the routing target via nginx > variables. > > So, when specifying the routing target "hard coded" in the proxy_pass > directive (variant 0) or when using the upstream directive (variant 1), the > nginx routes very fast (at least 2.000 req/sec). 
> Once we use split_clients directive to specify the routing target (variant > 2) or we set a variable statically (variant 3), the nginx is very slow and > reaches only 20-50 requests / sec. All other config parameters are the same > for all variants. [...] > # variant 1 > upstream backend1 { > ip_hash; > server 1.2.3.4; > } > > # variant 2 > split_clients $remote_addr $backend2 { > 50% https://1.2.3.4/myservice/; > 50% https://4.3.2.1/myservice/withAnotherPath; > } > > server { > listen 443 ssl backlog=163840; > > # variant 3 > set $backend3 https://1.2.3.4/myservice; > > location /myservice { > # V0) this is fast > proxy_pass https://1.2.3.4/myservice; > > # V1) this is fast > proxy_pass https://backend1; > > # V2) this is slow > proxy_pass $backend2; > > # V3) this is slow > proxy_pass $backend3; The problem is that your configuration with variables and an IP address implies run-time parsing of the address to create a run-time implicit upstream server group nginx will work with. This group is only used for a single request. But you use SSL to connect to upstream servers, and here comes the problem: SSL sessions are cached within the server group data. As such, your V2 and V3 configurations do not use SSL session caching, and hence are slow due to a full SSL handshake on each request. The solution is to use predefined upstream blocks within your split_clients paths, e.g.: split_clients $remote_addr $backend { 50% https://backend1; 50% https://backend2; } upstream backend1 { server 10.0.0.1; } upstream backend2 { server 10.0.0.2; } proxy_pass $backend; This way nginx will be able to choose an upstream server at run time according to split_clients, and will still be able to cache SSL sessions (or even connections, if configured; see http://nginx.org/r/keepalive). Note well that specifying a URI in proxy_pass with variables may not do what you expect.
When using variables in proxy_pass, if a URI part is specified, it means the full URI to be used in the request to the backend, not a replacement for the matched location prefix. See http://nginx.org/r/proxy_pass for details. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Feb 28 13:40:15 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2017 16:40:15 +0300 Subject: set_real_ip_from, real_ip_header directive in ngx_http_realip_module In-Reply-To: References: Message-ID: <20170228134015.GL34777@mdounin.ru> Hello! On Tue, Feb 28, 2017 at 09:58:05AM +0900, Nishikubo Minoru wrote: > Hello, > I tried to limit an IPv4 Address with ngx_http_limit_req module and > ngx_realip_module via Akamai would send True-Client-IP headers. > > According to the document ngx_http_realip_module( > http://nginx.org/en/docs/http/ngx_http_realip_module.html), > we can write set_real_ip_from and real_ip_header directive in http, > server, location context. > > But, in the above case(ngx_http_limit_req module is defined the key in http > context), directives on ngx_http_realip_module must be defined before the > keys(a.k.a the IPv4 address replaced by ngx_http_realip_module) and followed > limit_req_zone directive in http context. Not really. There is no such requirement; that is, there is no need to place limit_req_zone and set_real_ip_from at the same level or in any particular order. For example, the following configuration will work perfectly: limit_req_zone $remote_addr zone=limit:1m rate=1r/m; limit_req zone=limit; server { listen 80; location / { set_real_ip_from 127.0.0.1; real_ip_header X-Real-IP; } } A problem may happen though if you configure the realip module in a location context, but use the address in different contexts.
For example, the following will limit requests based on the connection's address, not the one set with realip: limit_req_zone $remote_addr zone=limit:1m rate=1r/m; limit_req zone=limit; server { listen 80; location / { try_files $uri @fallback; } location @fallback { set_real_ip_from 127.0.0.1; real_ip_header X-Real-IP; proxy_pass ... } } In the above configuration, limit_req will work at the "location /" context, and the realip module in "location @fallback" won't be effective. To add to the confusion, the $remote_addr variable will be cached once used by limit_req, and attempts to use it even in the location @fallback will return the original value, not the one changed by the realip module. Summing up the above, it is certainly possible to use the realip module with limit_req regardless of levels. They may interact unexpectedly in complex configurations though, and hence it is a good idea to avoid using set_real_ip_from / real_ip_header in location context unless you understand what you are doing. -- Maxim Dounin http://nginx.org/ From dewanggaba at xtremenitro.org Tue Feb 28 15:57:01 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Tue, 28 Feb 2017 22:57:01 +0700 Subject: IPv6 upstream problem Message-ID: <871d877c-abea-5f01-4285-dd6ecde0bb4b@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! Currently I have a problem with an IPv6 upstream. For example, I have an origin with the subdomain dual-stack-ipv4-ipv6.xtremenitro.org.
dual-stack-ipv4-ipv6.xtremenitro.org IN A 192.168.1.1 dual-stack-ipv4-ipv6.xtremenitro.org IN AAAA 2001:xx:xx::1; My configuration is like this: $ nginx -V nginx version: nginx/1.11.10 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) built with LibreSSL 2.4.5 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx - --modules-path=/usr/lib64/nginx/modules - --conf-path=/etc/nginx/nginx.conf - --error-log-path=/var/log/nginx/error.log - --http-log-path=/var/log/nginx/access.log - --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock - --http-client-body-temp-path=/var/cache/nginx/client_temp - --http-proxy-temp-path=/var/cache/nginx/proxy_temp - --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp - --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp - --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx - --group=nginx --with-http_ssl_module --with-openssl=libressl-2.4.5 - --with-http_realip_module --with-http_addition_module - --with-http_sub_module --with-http_gunzip_module - --with-http_gzip_static_module --with-http_random_index_module - --with-http_stub_status_module --with-http_auth_request_module - --with-http_image_filter_module=dynamic - --with-http_geoip_module=dynamic --with-http_perl_module=dynamic - --with-http_xslt_module=dynamic --add-dynamic-module=ngx_cache_purge - --add-dynamic-module=nginx-module-vts - --add-dynamic-module=headers-more-nginx-module - --add-dynamic-module=ngx_small_light --add-dynamic-module=ngx_brotli - --add-dynamic-module=nginx_upstream_check_module --with-threads - --with-stream=dynamic --with-stream_ssl_module - --with-http_slice_module --with-mail=dynamic --with-mail_ssl_module - --with-file-aio --with-ipv6 --with-http_v2_module --with-cc-opt='-g - -Ofast -march=native -ffast-math -fstack-protector-strong -Wformat - -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' .. resolver 103.52.3.72 ipv6=off; upstream cf { server dual-stack-ipv4-ipv6.xtremenitro.org; } ...
snip ... location ~ \.(jpe?g|gif|png|JPE?G|GIF|PNG)$ { proxy_pass http://cf; proxy_cache_background_update on; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_valid 200 302 301 60m; proxy_cache images; proxy_cache_valid any 3s; proxy_cache_lock on; proxy_cache_lock_timeout 60s; proxy_cache_min_uses 1; proxy_ignore_headers Cache-Control Expires; proxy_hide_header X-Cache; proxy_hide_header Via; proxy_hide_header ETag; } I see in the error log that all errors came from the IPv6 upstream. 2017/02/28 22:13:15 [error] 24079#24079: *429979 upstream timed out (110: Connection timed out) while connecting to upstream, client: 114.120.233.8, server: dual-stack-ipv4-ipv6.xtremenitro.org, request: "GET /2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg HTTP/2.0", subrequest: "/2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg", upstream: "http://[2600:9000:2031:4000:6:24ba:3100:93a1]:80/2015-09/thumbnail_360/ wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg?of=webp&q=50", host: "dual-stack-ipv4-ipv6.xtremenitro.org", referrer: "[REMOVED]" 2017/02/28 22:13:20 [error] 24080#24080: *432226 upstream timed out (110: Connection timed out) while connecting to upstream, client: 124.153.33.23, server: dual-stack-ipv4-ipv6.xtremenitro.org, request: "GET /2016-02/thumbnail_360/wd/df4f88d6a5d62427c11e746e187ba527.jpg HTTP/1.1", subrequest: "/2016-02/thumbnail_360/wd/df4f88d6a5d62427c11e746e187ba527.jpg", upstream: "http://[2600:9000:2031:7e00:6:24ba:3100:93a1]:80/2016-02/thumbnail_360/ wd/df4f88d6a5d62427c11e746e187ba527.jpg?of=webp&q=50", host: "dual-stack-ipv4-ipv6.xtremenitro.org", referrer: "[REMOVED]" Any hints, clues, or help are much appreciated.
Thanks in advance -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYtZ3JGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcFiXD/46SeZToPFxfwaG2SwFtbMCsa3e2aelQOdjl36o893zgN7EkgkU NIiLBTuydSke0I2tF6uof2eCpJdKaxP1R+iWPa3FE1rfn8s3gE32CnJZetBzaPn2 /6j1S5s5ZfT8n+91URAvAzEvBzhWfqErJqWH+Q8JYvrW57eEn/6DoIqcyyqw287m ZbSovx+bkTj3q+hClxURyU+oHq8g1TaiGimp8eBWmdyciTn+vk8L5qUZ8rgFniBS 75zVoZvim3yO7qpnCi98gFv1N+ghlEnqRtO/xNoC+I7cCbp93OoWfQi8z6T9Ljyu pkg7ptNZ8slIHhcsjxf6V3wW6Uuih0q/BFdc8WVmNzkL/tfW6cwBDzz2kymcaOBl hB+KRMsS5yTj4uVpnabzqDMRANUw/mvaM+t+4XWcvXWVhQY1pHT+pynD1kVzXnug EGszUcA71ZNMPqH9fGLrN7igaBRRt1GMn7/sqNQKmY54GwjSJAziE0edpapBrP7I aWMQaLdc7DBudlR4rMNaXt9bGh/2oQm1T4/xImK8sp9SHBFKyZBkMZq+UnGdPlGZ UyU/XOJrDca//ipsI2g3G6LUBpUKJtoE6bMsTRhakMaU8K3T0s1sgB1oBYsNQGyb YpLDfnZMxXk/Jn2ttXG22E1b8MtQsDdn946hsRrGddIWg+4bucgzEbAYLA== =Cs9Q -----END PGP SIGNATURE----- From mdounin at mdounin.ru Tue Feb 28 17:15:25 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Feb 2017 20:15:25 +0300 Subject: IPv6 upstream problem In-Reply-To: <871d877c-abea-5f01-4285-dd6ecde0bb4b@xtremenitro.org> References: <871d877c-abea-5f01-4285-dd6ecde0bb4b@xtremenitro.org> Message-ID: <20170228171524.GN34777@mdounin.ru> Hello! On Tue, Feb 28, 2017 at 10:57:01PM +0700, Dewangga Bachrul Alam wrote: > Currently I have problem with upstream with IPv6. For example I have > an origin with subdomain dual-stack-ipv4-ipv6.xtremenitro.org. > > dual-stack-ipv4-ipv6.xtremenitro.org IN A 192.168.1.1 > dual-stack-ipv4-ipv6.xtremenitro.org IN AAAA 2001:xx:xx::1; [...] > resolver 103.52.3.72 ipv6=off; > > upstream cf { > server dual-stack-ipv4-ipv6.xtremenitro.org; [...] > I see on error log, all error was came from IPv6 upstream. 
> 2017/02/28 22:13:15 [error] 24079#24079: *429979 upstream timed out > (110: Connection timed out) while connecting to upstream, client: > 114.120.233.8, server: dual-stack-ipv4-ipv6.xtremenitro.org, request: > "GET /2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg > HTTP/2.0", subrequest: > "/2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg", > upstream: > "http://[2600:9000:2031:4000:6:24ba:3100:93a1]:80/2015-09/thumbnail_360/ > wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg?of=webp&q=50", > host: "dual-stack-ipv4-ipv6.xtremenitro.org", referrer: "[REMOVED]" It looks like you assume that "resolver ... ipv6=off" is expected to prevent nginx from using IPv6 addresses for all names written in the configuration. This is not how it works though. The "resolver" directive is only used for dynamic resolution of names not known during configuration parsing. Most notably, it is used to resolve names in proxy_pass with variables. The name in the "server dual-stack-ipv4-ipv6.xtremenitro.org;" is known during configuration parsing, and nginx will simply use the getaddrinfo() function to resolve it. You have to tune your OS name resolution settings if you want it to return IPv4 addresses only. -- Maxim Dounin http://nginx.org/ From dewanggaba at xtremenitro.org Tue Feb 28 17:24:34 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Wed, 1 Mar 2017 00:24:34 +0700 Subject: IPv6 upstream problem In-Reply-To: <20170228171524.GN34777@mdounin.ru> References: <871d877c-abea-5f01-4285-dd6ecde0bb4b@xtremenitro.org> <20170228171524.GN34777@mdounin.ru> Message-ID: <44b29027-0666-bc2d-7442-5184caca747f@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! On 03/01/2017 12:15 AM, Maxim Dounin wrote: > Hello! > > On Tue, Feb 28, 2017 at 10:57:01PM +0700, Dewangga Bachrul Alam > wrote: > >> Currently I have problem with upstream with IPv6. For example I >> have an origin with subdomain >> dual-stack-ipv4-ipv6.xtremenitro.org.
>> >> dual-stack-ipv4-ipv6.xtremenitro.org IN A 192.168.1.1 >> dual-stack-ipv4-ipv6.xtremenitro.org IN AAAA 2001:xx:xx::1; > > [...] > >> resolver 103.52.3.72 ipv6=off; >> >> upstream cf { server dual-stack-ipv4-ipv6.xtremenitro.org; > > [...] > >> I see on error log, all error was came from IPv6 upstream. >> >> 2017/02/28 22:13:15 [error] 24079#24079: *429979 upstream timed >> out (110: Connection timed out) while connecting to upstream, >> client: 114.120.233.8, server: >> dual-stack-ipv4-ipv6.xtremenitro.org, request: "GET >> /2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg >> HTTP/2.0", subrequest: >> "/2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg", >> >> upstream: >> "http://[2600:9000:2031:4000:6:24ba:3100:93a1]:80/2015-09/thumbnail_3 60/ >> >> wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg?of=webp&q=50", >> host: "dual-stack-ipv4-ipv6.xtremenitro.org", referrer: >> "[REMOVED]" > [..] > It looks like you assume that "resolver ... ipv6=off" is expected > to prevent nginx from using IPv6 addresses of all names written in > the configuration. This is not how it works though. Yes > > The "resolver" directive is only used for dynamic resolution of > names not known during configuration parsing. Most notably, it is > used to resolve names in proxy_pass with variables. > [..] > The name in the "server dual-stack-ipv4-ipv6.xtremenitro.org;" is > known while configuration parsing, and nginx will simply use the > getaddrinfo() function to resolve it. You have to tune your OS > name resolution settings if you want it to return IPv4 addresses > only. > Ah! I thought nginx could handle whether an upstream defined by an FQDN is forced to use only IPv4 and/or IPv6. Thanks for the hints, Maxim.
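Two possible ways to act on Maxim's hint, sketched against the example records above (both are untested assumptions, not a confirmed fix): pin the A record in the upstream block, or move the name into a proxy_pass variable so that the run-time resolver, with ipv6=off, is actually consulted:

```nginx
# Option 1: bypass getaddrinfo() entirely by pinning the A record.
upstream cf {
    server 192.168.1.1;   # the example A record from this thread
}

# Option 2: resolve at run time, where "ipv6=off" applies.
resolver 103.52.3.72 ipv6=off;

server {
    location ~ \.(jpe?g|gif|png)$ {
        set $origin http://dual-stack-ipv4-ipv6.xtremenitro.org;
        proxy_pass $origin;   # names in variables go through "resolver"
    }
}
```

Option 2 gives up the upstream block, and with it keepalive and the other balancing features, so Option 1 is the smaller change when the origin address is stable.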
-----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYtbJNGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcG2AD/wKBQ27TVpnShX3/FFzNT0BH1EzvbthVc05KDsc8wRI8LnA+zLG nrtvnoCGW3mL/9z85uU78soYpPzewlIo12k75NQnaxkk99lb6wq2UrQI+X6ZEZuE dT6y/ILKnY/VpawKj/6V14WKmv/1MYTY08/yv9RpcK4VvabGBwF6E1b6hiiM+tUn 9BXBYomBJTI6B6HCbMPQBI/5haPrWHg952w0BqbcrinrXJ3670pZfDxt1Q3zCyTn KvTw/sVnMGHpf5yE8zh+CkgQWiyTBegC7BdLH6uPEjJQ4x/6Zt1P3K8LzlBjwwd/ Jb0FdlK+CgYilZ1n8JSi56gwxaDgUl+Cxf0PCliMCPr1Gn7JOxumvJes6VDBSTZu J2wNVLJC+JOnWYVKgtCMc4DHB8s8M7JXqHhsL0tET7Q+G/cl1Fg76aSVhutu5mKu v9tBFyyKYu6gLtODw3ust7K3Jt0NS/sldXN1ZVXfZEdmeBCT4TRB22Q0CO0a8/Uv 5IdjXE7mx7PDEzzryZuIztEzKhUl7KD3HijRZbKZzS3UUOAIHalhp2MnGrHqjLdA OKXv8coW9b0hY1POe34eVKgfK0cz2y3QlHZzIhARZExIzRsmPCBB6Pi2U+TzWYcV qQNEFzIjZGbOKYtzw6ummf5uFBPTUIyQqPgB8wUI7fANXLcG9pHs+MoUiw== =Bw12 -----END PGP SIGNATURE----- From nginx-forum at forum.nginx.org Tue Feb 28 18:59:59 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Tue, 28 Feb 2017 13:59:59 -0500 Subject: Nginx using variables / split_clients is very slow In-Reply-To: <20170228124653.GK34777@mdounin.ru> References: <20170228124653.GK34777@mdounin.ru> Message-ID: Dear Maxim, thank you very much. This solved my problem! Great! Kind Regards Lars Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272657,272663#msg-272663
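Maxim's fix from earlier in the thread, assembled into a single fragment for readers landing here from a search (the keepalive size, proxy_http_version, and Connection header settings are additions suggested by the linked keepalive documentation, not part of the original answer):

```nginx
split_clients $remote_addr $backend {
    50%  https://backend1;
    50%  https://backend2;
}

upstream backend1 {
    server 1.2.3.4;
    keepalive 16;             # optional connection cache
}

upstream backend2 {
    server 4.3.2.1;
    keepalive 16;
}

server {
    listen 443 ssl;

    location /myservice {
        # $backend names a predefined upstream, so SSL sessions
        # (and, with keepalive, connections) are reused.
        proxy_pass $backend;
        proxy_http_version 1.1;        # required for upstream keepalive
        proxy_set_header Connection "";
    }
}
```

Since the variable carries no URI part, the client's URI is passed through unchanged; the original goal of mapping one branch onto /myservice/withAnotherPath still needs separate handling (for example a rewrite) for the backend2 branch.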