From nginx-forum at forum.nginx.org Sat Jul 1 02:14:55 2017 From: nginx-forum at forum.nginx.org (ptcell) Date: Fri, 30 Jun 2017 22:14:55 -0400 Subject: ngx_http_sub_module causes requests to hang on a simple match. Message-ID: <75023c41f6cd332bcf9dc70c4a23cd05.NginxMailingListEnglish@forum.nginx.org> I've built with the sub filter enabled and I'm finding it hangs requests if there is a match. It is a very simple substitution/replace. I've resorted to following the request in GDB and the sub module completes and calls the next body filter (which in my case appears to be the charset module). I have no other odd modules enabled other than using threads with a thread pool size of two (shouldn't matter, right?). Pausing all the threads in GDB shows no obvious place it is hanging. If I change the match string to something that doesn't match anything, the request works fine. Here is my config: location / { root html; index index.html index.htm; sub_filter '' 'xxx'; sub_filter_once on; } nginx -V nginx version: nginx/1.7.11 built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) configure arguments: --with-http_sub_module --with-debug --with-threads --with-cc-opt='-O0 -g' Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275245,275245#msg-275245 From soracchi at multidialogo.it Sat Jul 1 12:04:08 2017 From: soracchi at multidialogo.it (Andrea Soracchi) Date: Sat, 1 Jul 2017 14:04:08 +0200 (CEST) Subject: Strange issue after nginx update In-Reply-To: <20170630163426.GB3000@daoine.org> References: <7255090.1031.1498662518798.JavaMail.sorry@sorry-Dell-System-XPS-L322X> <3523972.333.1498730493247.JavaMail.sorry@sorry-Dell-System-XPS-L322X> <1514956751.155023111.1498779972105.JavaMail.zimbra@netbuilder.it> <30825680.131.1498809638062.JavaMail.sorry@sorry-Dell-System-XPS-L322X> <15750487.938.1498831830132.JavaMail.sorry@sorry-Dell-System-XPS-L322X> <20170630163426.GB3000@daoine.org> Message-ID: <471079259.161284295.1498910648639.JavaMail.zimbra@netbuilder.it> Hi Francis, the problem is with the last ubuntu 16.04 LTS with the php 5.6 ( repository ondrej/php5-5.6). I try to downgrade to ubuntu 14.04 LTS... Thanks, Andrea ANDREA SORACCHI +39 329 0512704 System Engineer +39 0521 24 77 91 soracchi at netbuilder.it Da: "Francis Daly" A: "nginx" Inviato: Venerd?, 30 giugno 2017 18:34:26 Oggetto: Re: Strange issue after nginx update On Fri, Jun 30, 2017 at 04:10:32PM +0200, Andrea Soracchi wrote: Hi there, you suggest that with the old unknown version of nginx and the old unknown version of php, things worked. Now with the new updated version of nginx and the new updated version of php, things fail. And with the same updated version of nginx and a different updated version of php, things work again. The only pair there where you have one piece the same and one piece different, is new nginx and two different versions of php - one works, one fails. So you probably want to compare the php versions and their configuration to see what is different. (If you have one version of php and two versions of nginx showing one set working and one set failing, then you should compare the nginx versions for differences. But that is not what you have reported here, if I am reading it right.) Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Questo messaggio e' stato analizzato ed e' risultato non infetto. This message was scanned and is believed to be clean. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Sun Jul 2 12:26:53 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 02 Jul 2017 15:26:53 +0300 Subject: ngx_http_sub_module causes requests to hang on a simple match. In-Reply-To: <75023c41f6cd332bcf9dc70c4a23cd05.NginxMailingListEnglish@forum.nginx.org> References: <75023c41f6cd332bcf9dc70c4a23cd05.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1762729.p3EqeqSY3a@vbart-workstation> On Friday 30 June 2017 22:14:55 ptcell wrote: > I've built with the sub filter enabled and I'm finding it hangs requests if > there is a match. It is a very simple substitution/replace. I've > resorted to following the request in GDB and the sub module completes and > calls the next body filter (which in my case appears to be the charset > module). I have no other odd modules enabled other than using threads with > a thread pool size of two (shouldn't matter, right?). Pausing all the > threads in GDB shows no obvious place it is hanging. > > If I change the match string to something that doesn't match anything, the > request works fine. > > Here is my config: > > location / { > root html; > index index.html index.htm; > sub_filter '' 'xxx'; > sub_filter_once on; > } > > nginx -V > nginx version: nginx/1.7.11 > built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) > configure arguments: --with-http_sub_module --with-debug --with-threads > --with-cc-opt='-O0 -g' > > Thanks! > This is very old version of nginx. First of all, you should update up to the supported version. There are a bunch of bugs have been fixed. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Mon Jul 3 06:46:39 2017 From: nginx-forum at forum.nginx.org (Vishnu Priya Matha) Date: Mon, 03 Jul 2017 02:46:39 -0400 Subject: Issues with limit_req_zone. In-Reply-To: <20170509224250.GB10157@daoine.org> References: <20170509224250.GB10157@daoine.org> Message-ID: Then how does the burst_size play a role here ? How is the burst_size be calculated ? Since requests_per_sec is 100/s => 1 request per 0.01 sec - then does that mean 50 is also 50 per 0.01 sec or is it 1 per 0.02 sec ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274089,275256#msg-275256 From nginx-forum at forum.nginx.org Mon Jul 3 08:57:31 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Mon, 03 Jul 2017 04:57:31 -0400 Subject: set_real_ip_from, real_ip_header directive in ngx_http_realip_module In-Reply-To: <20170629153302.GL55433@mdounin.ru> References: <20170629153302.GL55433@mdounin.ru> Message-ID: <59f46e8d9a8e4fc1abc1686d1f5a087e.NginxMailingListEnglish@forum.nginx.org> hi Maxim, Thinks for you reply. i got a problem on http_realip_module, as what you said, duplicate addresses occurred in that header. if i want to get the real ip for access limiting, and append the last hop proxy address in X-Forwarded-Fro header at the same time, what should i do? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272653,275258#msg-275258 From shahzaib.cb at gmail.com Mon Jul 3 10:45:44 2017 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Mon, 3 Jul 2017 15:45:44 +0500 Subject: Client timed out errors !! Message-ID: Hi, We're seeing following info logs during serving mp4 videos via nginx : 2017/07/03 15:42:10 [info] 14725#100906: *964419 client timed out (60: Operation timed out) while sending mp4 to client, Is there anything we can do to fix it ? Thanks in advance !! 
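These "client timed out ... while sending mp4 to client" entries are logged at info level when the client stops reading the response for longer than send_timeout (60 seconds by default) -- typically a video player that has been paused or is buffering -- so they are usually harmless. A minimal sketch of a location that relaxes the timeout; the path and the use of the mp4 pseudo-streaming module are assumptions, adjust them to the real layout:

    location /videos/ {
        mp4;                  # only if nginx was built with --with-http_mp4_module
        send_timeout 300s;    # allow the client to stay idle longer between reads
    }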
Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From georgi at serversolution.info Mon Jul 3 13:03:31 2017 From: georgi at serversolution.info (Georgi Georgiev) Date: Mon, 3 Jul 2017 16:03:31 +0300 Subject: Custom error_log format Message-ID: <473E91C4-2D31-4797-990D-23083A0CB118@serversolution.info> Hello, I would like to have / to add the $request_id variable in the error_log, but I read that the only possible way is to add it to the source code. Has anyone here an experience with that, which file and what should I add? Or some other workaround? From mdounin at mdounin.ru Mon Jul 3 13:09:19 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 3 Jul 2017 16:09:19 +0300 Subject: set_real_ip_from, real_ip_header directive in ngx_http_realip_module In-Reply-To: <59f46e8d9a8e4fc1abc1686d1f5a087e.NginxMailingListEnglish@forum.nginx.org> References: <20170629153302.GL55433@mdounin.ru> <59f46e8d9a8e4fc1abc1686d1f5a087e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170703130919.GT55433@mdounin.ru> Hello! On Mon, Jul 03, 2017 at 04:57:31AM -0400, foxgab wrote: > Thinks for you reply. > i got a problem on http_realip_module, as what you said, duplicate addresses > occurred in that header. > if i want to get the real ip for access limiting, and append the last hop > proxy address in X-Forwarded-Fro header at the same time, what should i do? There are two basic options: 1. Avoid using the realip module. You can still do access checks using the "if" directive - and, for example, appropriate geo blocks. 2. Avoid using $proxy_add_x_forwarded_for. You may add the original address yourself, using the $realip_remote_addr variable and appropriate map{} blocks. Alternatively, you may want to rethink your setup to simply avoid one or the another. -- Maxim Dounin http://nginx.org/ From peter_booth at me.com Mon Jul 3 15:01:15 2017 From: peter_booth at me.com (Peter Booth) Date: Mon, 03 Jul 2017 11:01:15 -0400 Subject: ngx_http_sub_module causes requests to hang on a simple match. In-Reply-To: <1762729.p3EqeqSY3a@vbart-workstation> References: <75023c41f6cd332bcf9dc70c4a23cd05.NginxMailingListEnglish@forum.nginx.org> <1762729.p3EqeqSY3a@vbart-workstation> Message-ID: <1048653C-5D61-4EAD-8AC0-F9FD597AC514@me.com> What happens if you simplify the match string to only contain characters? Something like >> sub_filter 'xxx' 'yyy'; Can it ever do a substitute? Sent from my iPad > On Jul 2, 2017, at 8:26 AM, Valentin V. Bartenev wrote: > >> On Friday 30 June 2017 22:14:55 ptcell wrote: >> I've built with the sub filter enabled and I'm finding it hangs requests if >> there is a match. It is a very simple substitution/replace. I've >> resorted to following the request in GDB and the sub module completes and >> calls the next body filter (which in my case appears to be the charset >> module). I have no other odd modules enabled other than using threads with >> a thread pool size of two (shouldn't matter, right?). Pausing all the >> threads in GDB shows no obvious place it is hanging. >> >> If I change the match string to something that doesn't match anything, the >> request works fine. >> >> Here is my config: >> >> location / { >> root html; >> index index.html index.htm; >> sub_filter '' 'xxx'; >> sub_filter_once on; >> } >> >> nginx -V >> nginx version: nginx/1.7.11 >> built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) >> configure arguments: --with-http_sub_module --with-debug --with-threads >> --with-cc-opt='-O0 -g' >> >> Thanks! 
>> > > This is very old version of nginx. First of all, you should update up to > the supported version. There are a bunch of bugs have been fixed. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nelsonmarcos at gmail.com Mon Jul 3 20:09:28 2017 From: nelsonmarcos at gmail.com (Nelson Marcos) Date: Mon, 3 Jul 2017 17:09:28 -0300 Subject: Does Nginx supports If-Range ? Message-ID: Hello everyone! I don't know if it is an expected behaviour or a bug: Scenario 1(OK): If I perform a request *with the header Range*, Nginx serves the *partial content(HTTP 206)*. Scenario 2 (NOT OK): If I perform a request *with the header Range AND the header "If-Range" *with the Etag, Nginx serves the *entire file*(200). Why not serve the partial content if its cached version matches the If-Range header? In both scenarios the file is already cached. Here is my conf: https://pastebin.com/gQQ0GSg6 Here are my requests and my files: https://pastebin.com/rxLwYaSK The error happened on my server(Nginx 1.10.2) but I was also able to reproduce it on my Macbook (nginx 1.12.0). Thanks for any help :) Kinds, NM -------------- next part -------------- An HTML attachment was scrubbed... URL: From zchao1995 at gmail.com Tue Jul 4 01:25:34 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Mon, 3 Jul 2017 18:25:34 -0700 Subject: Does Nginx supports If-Range ? In-Reply-To: References: Message-ID: Hi! I don?t know if it is an expected behaviour or a bug: Scenario 1(OK): If I perform a request with the header Range, Nginx serves the partial content(HTTP 206). Scenario 2 (NOT OK): If I perform a request with the header Range AND the header ?If-Range? with the Etag, Nginx serves theentire file(200). Why not serve the partial content if its cached version matches the If-Range header? Make sure that the ETag value in the header If-Range is same as the entity ETag, don?t forget the double quotation marks. Maybe you need to support the simple demo. On 4 July 2017 at 04:10:01, Nelson Marcos (nelsonmarcos at gmail.com) wrote: Hello everyone! I don't know if it is an expected behaviour or a bug: Scenario 1(OK): If I perform a request *with the header Range*, Nginx serves the *partial content(HTTP 206)*. Scenario 2 (NOT OK): If I perform a request *with the header Range AND the header "If-Range"* with the Etag, Nginx serves the *entire file*(200). Why not serve the partial content if its cached version matches the If-Range header? In both scenarios the file is already cached. Here is my conf: https://pastebin.com/gQQ0GSg6 Here are my requests and my files: https://pastebin.com/rxLwYaSK The error happened on my server(Nginx 1.10.2) but I was also able to reproduce it on my Macbook (nginx 1.12.0). Thanks for any help :) Kinds, NM _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jul 4 07:33:36 2017 From: nginx-forum at forum.nginx.org (charlesparasa) Date: Tue, 04 Jul 2017 03:33:36 -0400 Subject: prxy_buffering Message-ID: <87cb3e2d7901cd9d269a45a8ac0173a8.NginxMailingListEnglish@forum.nginx.org> I am trying to check the functionality of proxy_buffering . can you please provide me with some sample test scenario. 
Basically I want to visualise the behaviours of nginx when proxy_buffering is on and off . And also please share what are the parameter gets effected when proxy_buffering is on or off . Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275275,275275#msg-275275 From nginx-forum at forum.nginx.org Tue Jul 4 08:01:44 2017 From: nginx-forum at forum.nginx.org (guruprasads) Date: Tue, 04 Jul 2017 04:01:44 -0400 Subject: Nginx Tuning Message-ID: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to tune nginx server. I want to restrict number of client connection per server and restrict bandwidth. I tried worker_connections 2; for max connections in nginx.conf file. but its connecting only after worker_connection value set to 7. my conf file look like below. user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 7; } thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275276,275276#msg-275276 From vfclists at gmail.com Tue Jul 4 11:20:52 2017 From: vfclists at gmail.com (vfclists .) Date: Tue, 4 Jul 2017 12:20:52 +0100 Subject: Is there a module that can prettify the page output before it is send to the requesting client? Message-ID: Is there a module that can prettify the HTML before it gets sent to the client? Some think like that must be run before the zip stage. -- Frank Church ======================= http://devblog.brahmancreations.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 4 12:21:14 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Jul 2017 15:21:14 +0300 Subject: Does Nginx supports If-Range ? In-Reply-To: References: Message-ID: <20170704122114.GW55433@mdounin.ru> Hello! On Mon, Jul 03, 2017 at 05:09:28PM -0300, Nelson Marcos wrote: > I don't know if it is an expected behaviour or a bug: > > > Scenario 1(OK): If I perform a request *with the header Range*, Nginx > serves the *partial content(HTTP 206)*. > > Scenario 2 (NOT OK): If I perform a request *with the header Range AND the > header "If-Range" *with the Etag, Nginx serves the *entire file*(200). Why > not serve the partial content if its cached version matches the If-Range > header? > > In both scenarios the file is already cached. > > Here is my conf: https://pastebin.com/gQQ0GSg6 > > Here are my requests and my files: https://pastebin.com/rxLwYaSK The "ETag" header in the response is invalid, as well as "If-Range" in the request. Quoting from the second link: : Etag: 345a2dd5c8f22e9ffaf250151ea820df : If-Range: 345a2dd5c8f22e9ffaf250151ea820df In both cases entity tag should be in double quotes, see https://tools.ietf.org/html/rfc2616#section-14.19. Fixing your backend to return correct ETag will make things work. Alternatively, you can use Last-Modified date in the If-Range request header instead. -- Maxim Dounin http://nginx.org/ From r1ch+nginx at teamliquid.net Tue Jul 4 12:23:57 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 4 Jul 2017 14:23:57 +0200 Subject: Nginx Tuning In-Reply-To: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> References: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> Message-ID: You shouldn't be changing worker_connections, this is the total number of connections (of any type) permitted per worker. 
Take a look at the documentation at http://nginx.org/en/docs/ Of interest to you are http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html and http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate On Tue, Jul 4, 2017 at 10:01 AM, guruprasads wrote: > Hi, > > I am trying to tune nginx server. > I want to restrict number of client connection per server and restrict > bandwidth. > I tried > worker_connections 2; > for max connections in nginx.conf file. > but its connecting only after worker_connection value set to 7. > > my conf file look like below. > > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > > # Load dynamic modules. See /usr/share/nginx/README.dynamic. > include /usr/share/nginx/modules/*.conf; > > events { > worker_connections 7; > } > > thanks. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,275276,275276#msg-275276 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 4 12:27:47 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Jul 2017 15:27:47 +0300 Subject: Nginx Tuning In-Reply-To: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> References: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170704122746.GX55433@mdounin.ru> Hello! On Tue, Jul 04, 2017 at 04:01:44AM -0400, guruprasads wrote: > Hi, > > I am trying to tune nginx server. > I want to restrict number of client connection per server and restrict > bandwidth. > I tried > worker_connections 2; > for max connections in nginx.conf file. > but its connecting only after worker_connection value set to 7. > > my conf file look like below. > > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > > # Load dynamic modules. See /usr/share/nginx/README.dynamic. > include /usr/share/nginx/modules/*.conf; > > events { > worker_connections 7; > } > > thanks. The worker_connections directive is to control number of internal connection structures in nginx. It should not be set to low values, as these structures are used in many places in nginx - including listening sockets, client connections, connections to upstream servers, and so on. And shortage of these structures will result in fatal errors. Instead, you may want to use large numbers if you want to handle reasonable load, as default 512 is very small in the modern world. If you want to limit connections, consider using the limit_conn module instead see here: http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html To restrict bandwidth on a per-connection basis, there is the limit_rate directive (http://nginx.org/r/limit_rate). -- Maxim Dounin http://nginx.org/ From nelsonmarcos at gmail.com Tue Jul 4 14:29:45 2017 From: nelsonmarcos at gmail.com (Nelson Marcos) Date: Tue, 4 Jul 2017 11:29:45 -0300 Subject: Does Nginx supports If-Range ? In-Reply-To: <20170704122114.GW55433@mdounin.ru> References: <20170704122114.GW55433@mdounin.ru> Message-ID: Thanks Zhang and Maxim! I'm checking how to fix that on my backend. Kind, NM 2017-07-04 9:21 GMT-03:00 Maxim Dounin : > Hello! 
> > On Mon, Jul 03, 2017 at 05:09:28PM -0300, Nelson Marcos wrote: > > > I don't know if it is an expected behaviour or a bug: > > > > > > Scenario 1(OK): If I perform a request *with the header Range*, Nginx > > serves the *partial content(HTTP 206)*. > > > > Scenario 2 (NOT OK): If I perform a request *with the header Range AND > the > > header "If-Range" *with the Etag, Nginx serves the *entire file*(200). > Why > > not serve the partial content if its cached version matches the If-Range > > header? > > > > In both scenarios the file is already cached. > > > > Here is my conf: https://pastebin.com/gQQ0GSg6 > > > > Here are my requests and my files: https://pastebin.com/rxLwYaSK > > The "ETag" header in the response is invalid, as well as > "If-Range" in the request. Quoting from the second link: > > : Etag: 345a2dd5c8f22e9ffaf250151ea820df > : If-Range: 345a2dd5c8f22e9ffaf250151ea820df > > In both cases entity tag should be in double quotes, see > https://tools.ietf.org/html/rfc2616#section-14.19. > > Fixing your backend to return correct ETag will make things work. > Alternatively, you can use Last-Modified date in the If-Range > request header instead. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From finid at vivaldi.net Tue Jul 4 14:37:37 2017 From: finid at vivaldi.net (Dan Edwards) Date: Tue, 04 Jul 2017 09:37:37 -0500 Subject: map configuration Message-ID: <64754e6db825ef109c65f6d48ddb9d38@vivaldi.net> Hello: Need help understanding this piece of Nginx configuration: === map $http_upgrade $connection_upgrade { default upgrade; '' close; } === What does the last line mean? TIA, -- -Dan Edwards- From michael.salmon at ericsson.com Tue Jul 4 14:50:37 2017 From: michael.salmon at ericsson.com (Michael Salmon) Date: Tue, 4 Jul 2017 14:50:37 +0000 Subject: map configuration In-Reply-To: <64754e6db825ef109c65f6d48ddb9d38@vivaldi.net> References: <64754e6db825ef109c65f6d48ddb9d38@vivaldi.net> Message-ID: <921F4EE1-E49C-4D8E-BB83-F0767DB98A1A@ericsson.com> The map is similar to $connection_upgrade = ($http_upgrade == '') ? 'close' : 'upgrade'; Each line in a map is a match and a value with default being used if nothing matches. /Michael Salmon SE KI73 03 366 (OLC: 9FFVCX53+C5Q8) +46 722 184 909 > On 4 Jul 2017, at 16:37:37, Dan Edwards wrote: > > Hello: > > Need help understanding this piece of Nginx configuration: > > > === > map $http_upgrade $connection_upgrade { > default upgrade; > '' close; > } > === > > > What does the last line mean? > > TIA, > > > > -- > -Dan Edwards- > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From peter_booth at me.com Tue Jul 4 16:55:55 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 04 Jul 2017 12:55:55 -0400 Subject: Nginx Tuning In-Reply-To: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> References: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> Message-ID: <026BE68B-9992-4DF2-9AE0-934165918154@me.com> What is your ultimate goal here? What are you wanting to prevent? Sent from my iPhone > On Jul 4, 2017, at 4:01 AM, guruprasads wrote: > > Hi, > > I am trying to tune nginx server. > I want to restrict number of client connection per server and restrict > bandwidth. 
> I tried > worker_connections 2; > for max connections in nginx.conf file. > but its connecting only after worker_connection value set to 7. > > my conf file look like below. > > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > > # Load dynamic modules. See /usr/share/nginx/README.dynamic. > include /usr/share/nginx/modules/*.conf; > > events { > worker_connections 7; > } > > thanks. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275276,275276#msg-275276 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From peter_booth at me.com Tue Jul 4 17:45:33 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 04 Jul 2017 13:45:33 -0400 Subject: Is there a module that can prettify the page output before it is send to the requesting client? In-Reply-To: References: Message-ID: Depends on your definition of pretty and what you want to achieve. Are you looking for pretty for a human reader or for a browser? Google's pagespeed module comes in both apache and nginx flavors and applies a bunch of page optimization transformations to the page and embedded resources. I've seen it reduce download times for an untuned site from 6 seconds to 1.2. But the HTML that's returned has been "prettified" for a browser not for a person to view. Peter Sent from my iPhone > On Jul 4, 2017, at 7:20 AM, vfclists . wrote: > > Is there a module that can prettify the HTML before it gets sent to the client? Some think like that must be run before the zip stage. > > -- > Frank Church > > ======================= > http://devblog.brahmancreations.com > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Jul 4 19:51:24 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 04 Jul 2017 12:51:24 -0700 Subject: Nginx Tuning In-Reply-To: <026BE68B-9992-4DF2-9AE0-934165918154@me.com> References: <3679e1e4c5dd08e1950ac67d7f01ca05.NginxMailingListEnglish@forum.nginx.org> <026BE68B-9992-4DF2-9AE0-934165918154@me.com> Message-ID: <20170704195124.5693526.49368.32245@lazygranch.com> I'd suggest the online guides on anti-DDoSing for NGINX. They cover limiting the number of connections, etc. Of course in reality these schemes would just limit some some kid in the basement from flooding your server rather than a real DDoS attack. But better than nothing, plus what is in those guides is effective for taming aggressive download managers that try to open multiple connections.? ? Original Message ? From: Peter Booth Sent: Tuesday, July 4, 2017 9:56 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Nginx Tuning What is your ultimate goal here? What are you wanting to prevent? Sent from my iPhone > On Jul 4, 2017, at 4:01 AM, guruprasads wrote: > > Hi, > > I am trying to tune nginx server. > I want to restrict number of client connection per server and restrict > bandwidth. > I tried > worker_connections 2; > for max connections in nginx.conf file. > but its connecting only after worker_connection value set to 7. > > my conf file look like below. > > user nginx; > worker_processes auto; > error_log /var/log/nginx/error.log; > pid /run/nginx.pid; > > # Load dynamic modules. See /usr/share/nginx/README.dynamic. 
> include /usr/share/nginx/modules/*.conf; > > events { > worker_connections 7; > } > > thanks. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275276,275276#msg-275276 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From vfclists at gmail.com Tue Jul 4 21:32:13 2017 From: vfclists at gmail.com (vfclists .) Date: Tue, 4 Jul 2017 22:32:13 +0100 Subject: Is there a module that can prettify the page output before it is send to the requesting client? In-Reply-To: References: Message-ID: On 4 July 2017 at 18:45, Peter Booth wrote: > Depends on your definition of pretty and what you want to achieve. Are you > looking for pretty for a human reader or for a browser? > > Google's pagespeed module comes in both apache and nginx flavors and > applies a bunch of page optimization transformations to the page and > embedded resources. I've seen it reduce download times for an untuned site > from 6 seconds to 1.2. But the HTML that's returned has been "prettified" > for a browser not for a person to view. > > Peter > > Sent from my iPhone > > On Jul 4, 2017, at 7:20 AM, vfclists . wrote: > > Is there a module that can prettify the HTML before it gets sent to the > client? Some think like that must be run before the zip stage. > > -- > Frank Church > > ======================= > http://devblog.brahmancreations.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > I need one for a human reader. -- Frank Church ======================= http://devblog.brahmancreations.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jul 5 05:41:23 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 05 Jul 2017 01:41:23 -0400 Subject: proxy_set_header directives don't inherit from father context Message-ID: <49978e7f9faecb3bd48f97f8d6384958.NginxMailingListEnglish@forum.nginx.org> i found if i didn't configure any proxy_set_header directives in a context, it will inherit those directives from father context automatic, but if i set one in current context, the inheritance won't work. my configration is like bellow: http { proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $host; server { listen 80; server_name example.com; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; location / { proxy_pass http://huiju_nginx_ppr1; } } i expected the X-Forwarded-For and X-Forwarded-Proto header will be set, but it didn't. what happend? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275301,275301#msg-275301 From nginx-forum at forum.nginx.org Wed Jul 5 05:48:28 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 05 Jul 2017 01:48:28 -0400 Subject: map configuration In-Reply-To: <921F4EE1-E49C-4D8E-BB83-F0767DB98A1A@ericsson.com> References: <921F4EE1-E49C-4D8E-BB83-F0767DB98A1A@ericsson.com> Message-ID: why no just pass the $http_upgrade and $http_connection headers to the backend? 
like conf bellow: proxy_set_header Upgrade $http_upgrade proxy_set_header Connection $http_Connection and if i set the Connection header to "upgrade" permanently, does it make any harm? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275283,275302#msg-275302 From me at nanaya.pro Wed Jul 5 05:57:51 2017 From: me at nanaya.pro (nanaya) Date: Wed, 05 Jul 2017 14:57:51 +0900 Subject: proxy_set_header directives don't inherit from father context In-Reply-To: <49978e7f9faecb3bd48f97f8d6384958.NginxMailingListEnglish@forum.nginx.org> References: <49978e7f9faecb3bd48f97f8d6384958.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1499234271.126662.1030709392.27C41195@webmail.messagingengine.com> Hi, On Wed, Jul 5, 2017, at 14:41, foxgab wrote: > i expected the X-Forwarded-For and X-Forwarded-Proto header will be set, > but > it didn't. > what happend? > Quite a few of additive configs (proxy_set_header, add_header) are only inherited if there's no same directive set at current block. >From documentation [1]: > These directives are inherited from the previous level if and only if there are no proxy_set_header directives defined on the current level. [1] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header From nginx-forum at forum.nginx.org Wed Jul 5 05:59:52 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 05 Jul 2017 01:59:52 -0400 Subject: map configuration In-Reply-To: <64754e6db825ef109c65f6d48ddb9d38@vivaldi.net> References: <64754e6db825ef109c65f6d48ddb9d38@vivaldi.net> Message-ID: that means, if the value of $http_upgrade is null or that variable doesn't exist, the value of $connection_upgrade will set to "close", or it will set to "upgrade" in the other cases. the $http_upgrade is the value of the "Upgrade" header in the client request. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275283,275304#msg-275304 From francis at daoine.org Wed Jul 5 07:11:53 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Jul 2017 08:11:53 +0100 Subject: Issues with limit_req_zone. In-Reply-To: References: <20170509224250.GB10157@daoine.org> Message-ID: <20170705071153.GC3000@daoine.org> On Mon, Jul 03, 2017 at 02:46:39AM -0400, Vishnu Priya Matha wrote: Hi there, > Then how does the burst_size play a role here ? How is the burst_size be > calculated ? "burst" means, roughly, "let this many happen quickly before fully enforcing the one-per-period rule". > Since requests_per_sec is 100/s => 1 request per 0.01 sec - then does that > mean 50 is also 50 per 0.01 sec or is it 1 per 0.02 sec ? 100/s means, roughly, "handle 1, then no more until 0.01s has passed". With that config, "burst=50" means (again, roughly), "handle (up to) 50, then no more until (up to) 0.5s has passed". You maintain the overall rate, but allow it be short-term exceeded followed by a quiet period. Test it by setting rate=1/s and burst=5. Send in 10 requests very quickly. See when they are handled. Wait a while. Send in 10 requests, the next one as soon as the previous one gets a response. See when they are handled. 
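A minimal sketch of the test Francis describes, assuming a zone keyed on the client address (the zone name and location are illustrative only):

    http {
        limit_req_zone $binary_remote_addr zone=test_zone:10m rate=1r/s;

        server {
            location /limited/ {
                # 1 request is served immediately, up to 5 more are queued
                # and released at 1r/s; anything beyond that is rejected
                # with the limit_req_status code (503 by default)
                limit_req zone=test_zone burst=5;
            }
        }
    }

Firing 10 requests at once against this should show 6 of them answered (one immediately, five spread over the following seconds) and 4 rejected, which makes the rate/burst interaction easy to observe.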
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jul 5 07:20:17 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Jul 2017 08:20:17 +0100 Subject: proxy_set_header directives don't inherit from father context In-Reply-To: <1499234271.126662.1030709392.27C41195@webmail.messagingengine.com> References: <49978e7f9faecb3bd48f97f8d6384958.NginxMailingListEnglish@forum.nginx.org> <1499234271.126662.1030709392.27C41195@webmail.messagingengine.com> Message-ID: <20170705072017.GD3000@daoine.org> On Wed, Jul 05, 2017 at 02:57:51PM +0900, nanaya wrote: > On Wed, Jul 5, 2017, at 14:41, foxgab wrote: Hi there, > > i expected the X-Forwarded-For and X-Forwarded-Proto header will be set, > > but it didn't. > > what happend? Your expectation was wrong. > Quite a few of additive configs (proxy_set_header, add_header) are only > inherited if there's no same directive set at current block. Correct; except I'd say that *all* directives are inherited by replacement, or not inherited. There are some pairs of directives where setting one will cause the other not to be inherited either, but they should be clearly related to each other. > > These directives are inherited from the previous level if and only if there are no proxy_set_header directives defined on the current level. The person writing the documentation gets to decide what should be there; I think it would be more useful to make it clear that *everything* works that way, apart from the few exceptions that are not inherited at all (which are, in the main, the *_pass directives). f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Jul 5 07:31:24 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Jul 2017 08:31:24 +0100 Subject: prxy_buffering In-Reply-To: <87cb3e2d7901cd9d269a45a8ac0173a8.NginxMailingListEnglish@forum.nginx.org> References: <87cb3e2d7901cd9d269a45a8ac0173a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170705073124.GE3000@daoine.org> On Tue, Jul 04, 2017 at 03:33:36AM -0400, charlesparasa wrote: Hi there, > I am trying to check the functionality of proxy_buffering . can you please > provide me with some sample test scenario. The documentation is at http://nginx.org/r/proxy_buffering Is there something specific that is not clear, after reading that? > Basically I want to visualise the behaviours of nginx when proxy_buffering > is on and off . There is the client, which talks to nginx; and nginx, which talks to the http or https upstream. The link from the client to nginx can be fast or slow (or variable). The link from nginx to upstream can be fast or slow. Draw a picture of what happens if one link is fast and the other link is slow, with and without nginx buffering the response. Often, the desired aim is that upstream closes the connection to nginx as quickly as possible, in order that it can handle a new request. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Jul 5 07:47:01 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 05 Jul 2017 03:47:01 -0400 Subject: what's the difference between proxy_buffer_size and proxy_buffers Message-ID: <6147fc2bfdac051945e0e0f1d139fb70.NginxMailingListEnglish@forum.nginx.org> to explain the proxy_buffer_size directive, the document says: "Sets the size of the buffer used for reading the first part of the response received from the proxied server" what does "first part" mean? that is the first packet or what? 
what's the relationship and difference between between proxy_buffer_size and proxy_buffers? and how these two directives effect the limitation of proxy_busy_buffers_size? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275308,275308#msg-275308 From smntov at gmail.com Wed Jul 5 19:55:24 2017 From: smntov at gmail.com (ST) Date: Wed, 05 Jul 2017 22:55:24 +0300 Subject: response headers show wrong time Message-ID: <1499284524.3985.75.camel@gmail.com> Hello, response headers show wrong time: Date: Wed, 05 Jul 2017 19:44:37 GMT while system time is set to 22:44:37 (output of the 'date' from command line). Any ideas where I can set headers' date to system time? Thank you in advance! ST From r at roze.lv Wed Jul 5 21:33:26 2017 From: r at roze.lv (Reinis Rozitis) Date: Thu, 6 Jul 2017 00:33:26 +0300 Subject: response headers show wrong time In-Reply-To: <1499284524.3985.75.camel@gmail.com> References: <1499284524.3985.75.camel@gmail.com> Message-ID: <000001d2f5d6$5634c0f0$029e42d0$@roze.lv> > Hello, > > response headers show wrong time: > > Date: Wed, 05 Jul 2017 19:44:37 GMT > > while system time is set to 22:44:37 (output of the 'date' from command line). > > Any ideas where I can set headers' date to system time? It's not a wrong time just your system has a GMT+3 timezone or EEST (Eastern European Summer Time) but nginx sends the date in GMT (or UTC) timezone which is defined in HTTP protocol RFC (http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.3.1) So there is nothing to set or change. rr From nginx-forum at forum.nginx.org Thu Jul 6 02:34:07 2017 From: nginx-forum at forum.nginx.org (foxgab) Date: Wed, 05 Jul 2017 22:34:07 -0400 Subject: "aio on" unsupported on centos6.8 Message-ID: <117d6cf73405377e9ec67e1027eb5cac.NginxMailingListEnglish@forum.nginx.org> it said ""aio on" is unsupported on this platform in /usr/local/nginx/conf/nginx.conf:25" when i apply my conf. see version info bellow: [root]# /usr/local/nginx/sbin/nginx -v nginx version: nginx/1.10.3 [root]# lsb_release -a LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch Distributor ID: CentOS Description: CentOS release 6.8 (Final) Release: 6.8 Codename: Final [root]# uname -a Linux nginx-ppr1-public-a1 2.6.32-696.3.2.el6.x86_64 #1 SMP Tue Jun 20 01:26:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275320,275320#msg-275320 From nginx-forum at forum.nginx.org Thu Jul 6 06:19:09 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 06 Jul 2017 02:19:09 -0400 Subject: Nginx Auth Module auth_basic and Flooding/DoS/DDoS Message-ID: Here is my config : http { limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; limit_conn_zone $binary_remote_addr zone=addr:10m; server { location /secured/ { auth_basic "secured area"; auth_basic_user_file conf/htpasswd; limit_req zone=one burst=5; limit_conn addr 1; } } My question is with the nginx auth module should i still need to protect that area from flooding / ddos or will the auth module denying access be enough. Would like to know how well the auth module processes and does compared to the limit_req or limit_conn module and if i should keep those in my configuration or remove them since the auth module could be already doing all the work. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275321,275321#msg-275321 From mdounin at mdounin.ru Thu Jul 6 11:55:41 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 6 Jul 2017 14:55:41 +0300 Subject: "aio on" unsupported on centos6.8 In-Reply-To: <117d6cf73405377e9ec67e1027eb5cac.NginxMailingListEnglish@forum.nginx.org> References: <117d6cf73405377e9ec67e1027eb5cac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170706115541.GG55433@mdounin.ru> Hello! On Wed, Jul 05, 2017 at 10:34:07PM -0400, foxgab wrote: > it said ""aio on" is unsupported on this platform in > /usr/local/nginx/conf/nginx.conf:25" when i apply my conf. > > see version info bellow: > > [root]# /usr/local/nginx/sbin/nginx -v > nginx version: nginx/1.10.3 > > [root]# lsb_release -a > LSB Version: > :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch > Distributor ID: CentOS > Description: CentOS release 6.8 (Final) > Release: 6.8 > Codename: Final > > [root]# uname -a > Linux nginx-ppr1-public-a1 2.6.32-696.3.2.el6.x86_64 #1 SMP Tue Jun 20 > 01:26:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Make sure nginx you are using was compiled with AIO support ("--with-file-aio", use "nginx -V" to see configure arguments). -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Jul 7 02:43:44 2017 From: nginx-forum at forum.nginx.org (rlx01) Date: Thu, 06 Jul 2017 22:43:44 -0400 Subject: nginx -s reload terminates connections Message-ID: <635f48c5d7f72deb6fe8d84a5e0ce252.NginxMailingListEnglish@forum.nginx.org> Hi, Using the official 1.12.0 package on debian 9 and the 1.12.0 ports package on FreeBSD 11-RC1, calling nginx -s reload drops connections. It's fairly easy to reproduce, I just install the fresh package from wherever, and serving the default index.html, I run wrk, and in the middle of the run I call 'nginx -s reload' without actually changing anything: $ wrk -d 30 -c 100 -t 20 http://nginx-test01.lan/ Running 30s test @ http://nginx-test01.lan/ 20 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 10.06ms 4.10ms 53.02ms 79.07% Req/Sec 503.04 161.36 757.00 53.20% 300801 requests in 30.05s, 243.82MB read Socket errors: connect 0, read 100, write 0, timeout 0 Requests/sec: 10008.67 Transfer/sec: 8.11MB You can see 100 socket read errors in this example. I tried this with the h2o (2.2.2) server just to see what happens and there is no error: $ wrk -d 30 -c 100 -t 20 http://h2o-test01.lan/ Running 30s test @ http://nginx-test01.lan/ 20 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 4.06ms 1.83ms 56.08ms 76.57% Req/Sec 1.24k 724.32 6.61k 97.67% 741569 requests in 30.02s, 599.01MB read Requests/sec: 24699.67 Transfer/sec: 19.95MB -- The same issue occurs when I test HTTP/2 with 'h2load'. The h2load test just stops as soon as I call 'nginx -s reload'. Is this expected behaviour? If so, is there some configuration directive that would fix it? In my real world usage, I have a nginx server running on the edge and it proxies (HTTP/1.1) to a different nginx server. If I call 'nginx -s reload' on the internal nginx the external nginx shows "upstream prematurely closed connection while reading response header from upstream" if it's under heavy load. Thanks! 
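For reference, an edge-to-internal proxy that reuses upstream connections is normally configured along these lines (the upstream name and address are placeholders):

    upstream internal_nginx {
        server 10.0.0.2:8080;
        keepalive 32;                       # pool of idle connections kept open to the backend
    }

    server {
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";  # clear the header so upstream keepalive works
            proxy_pass http://internal_nginx;
        }
    }

With a setup like this the internal server is still free to close idle kept-alive connections -- a reload is one occasion when it will -- so the edge side has to be prepared to retry or re-open them, as discussed further down the thread.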
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275328,275328#msg-275328 From nginx-forum at forum.nginx.org Fri Jul 7 05:30:59 2017 From: nginx-forum at forum.nginx.org (JackB) Date: Fri, 07 Jul 2017 01:30:59 -0400 Subject: nginx -s reload terminates connections In-Reply-To: <635f48c5d7f72deb6fe8d84a5e0ce252.NginxMailingListEnglish@forum.nginx.org> References: <635f48c5d7f72deb6fe8d84a5e0ce252.NginxMailingListEnglish@forum.nginx.org> Message-ID: With a plain standard configuration I can confirm behavior of wrk on Ubuntu 16.04 (Linux 4.4.0): Running without reload during run: $ wrk -d 30 -c 100 -t 20 http://localhsot/ Running 30s test @ http://localhost/ 20 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 2.45ms 0.89ms 29.75ms 86.77% Req/Sec 2.07k 279.09 5.32k 78.41% 1241732 requests in 30.10s, 1.45GB read Requests/sec: 41248.07 Transfer/sec: 49.25MB Calling reload one time during run: $ wrk -d 30 -c 100 -t 20 http://localhost/ Running 30s test @ http://localhost/ 20 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 5.34ms 8.76ms 91.81ms 85.10% Req/Sec 1.22k 0.86k 5.43k 58.83% 688865 requests in 30.10s, 822.49MB read Socket errors: connect 0, read 39, write 0, timeout 0 Requests/sec: 22887.01 Transfer/sec: 27.33MB Could not verify a connection termination with a reload during a long running request. Might be a "problem" with wrk? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275328,275329#msg-275329 From nginx-forum at forum.nginx.org Fri Jul 7 05:55:34 2017 From: nginx-forum at forum.nginx.org (rlx01) Date: Fri, 07 Jul 2017 01:55:34 -0400 Subject: nginx -s reload terminates connections In-Reply-To: References: <635f48c5d7f72deb6fe8d84a5e0ce252.NginxMailingListEnglish@forum.nginx.org> Message-ID: <75417f3e77a575eedaf74831266e6c02.NginxMailingListEnglish@forum.nginx.org> Hi, It's not a problem specific to wrk as we see this with heavy production traffic where nginx drops real requests. This is just the simplest possible test case. :) It also fails with siege, h2load, ab, etc. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275328,275330#msg-275330 From nginx-forum at forum.nginx.org Fri Jul 7 09:49:18 2017 From: nginx-forum at forum.nginx.org (JackB) Date: Fri, 07 Jul 2017 05:49:18 -0400 Subject: nginx -s reload terminates connections In-Reply-To: <75417f3e77a575eedaf74831266e6c02.NginxMailingListEnglish@forum.nginx.org> References: <635f48c5d7f72deb6fe8d84a5e0ce252.NginxMailingListEnglish@forum.nginx.org> <75417f3e77a575eedaf74831266e6c02.NginxMailingListEnglish@forum.nginx.org> Message-ID: <08fe441ce862adbd519bb3144d4a3c9d.NginxMailingListEnglish@forum.nginx.org> Tested ab and confirmed terminated requests with reloads during run (complete output pasted at the end). Failed requests: 180 (Connect: 0, Receive: 24, Length: 78, Exceptions: 78) ab reports reports terminated connections while receiving. This should not happen at all when executing a reload. But: If I run ab without keepalive connections, it is not possible for me to get failed requests by reloading nginx. Maybe the problem is related with persistent connections. Does your edge nginx use keepalive connections to access the internal nginx? 
$ ab -n 1000000 -c 20 -k -r http://localhost/ This is ApacheBench, Version 2.3 <$Revision: 1706008 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking 10.10.1.171 (be patient) Completed 100000 requests Completed 200000 requests Completed 300000 requests Completed 400000 requests Completed 500000 requests Completed 600000 requests Completed 700000 requests Completed 800000 requests Completed 900000 requests Completed 1000000 requests Finished 1000000 requests Server Software: nginx Server Hostname: 127.0.0.1 Server Port: 80 Document Path: / Document Length: 973 bytes Concurrency Level: 20 Time taken for tests: 17.916 seconds Complete requests: 1000000 Failed requests: 180 (Connect: 0, Receive: 24, Length: 78, Exceptions: 78) Keep-Alive requests: 994974 Total transferred: 1251877604 bytes HTML transferred: 972924106 bytes Requests per second: 55815.82 [#/sec] (mean) Time per request: 0.358 [ms] (mean) Time per request: 0.018 [ms] (mean, across all concurrent requests) Transfer rate: 68236.90 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.0 0 3 Processing: 0 0 0.4 0 20 Waiting: 0 0 0.4 0 20 Total: 0 0 0.4 0 20 Percentage of the requests served within a certain time (ms) 50% 0 66% 0 75% 0 80% 0 90% 1 95% 1 98% 1 99% 1 100% 20 (longest request) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275328,275339#msg-275339 From joan.tomas at marfeel.com Fri Jul 7 09:52:48 2017 From: joan.tomas at marfeel.com (=?UTF-8?Q?Joan_Tom=c3=a0s_i_Buliart?=) Date: Fri, 7 Jul 2017 11:52:48 +0200 Subject: NGINX stale-while-revalidate cluster Message-ID: <7d98c323-cbdd-b158-e4b2-260681324ca0@marfeel.com> Hi, We are implementing an stale-while-revalidate webserver cluster with NGINX. We are using the new proxy_cache_background_update to answer request as soon as possible while NGINX updates the content from the origin in the background. This solution works perfectly when the requests for the same object are served by the same NGINX server (when we have only one server or when we have a previous load balancer that classifies the requests). In our scenario we have a round robin load balancer (ELB) and we need to scale the webservers layer. So, as a consequence, only the Nginx that receive the request updates the cache content while the others keep the old version. This means that, we can send old versions of content due to the content not being updated on all the webservers. The problem accentuates when we put a CDN in front of the webservers. We are thinking on developing something that once an Nginx instance updates its cache would let know all other instances to get a copy of the newest content. We are thinking about processing NGINX logs and, when it detects a MISS, EXPIRED or UPDATING cache status, it makes a HEAD request to the other NGINXs on the cluster to force the invalidation of this content. Do any of you have dealt with this problem or a similar one? We have also tried the post_action but it is blocking the client request until it completes. It is not clear for us which would be the best approach. The options that we are considering are: - NGINX module - LUA script - External script that process syslog entries from NGINX What would be your recommendation? Many thanks in advance, -- Joan Tom?s-Buliart www.marfeel.com Joan Tom?s-Buliart +34 931 785 950 www.marfeel.com Discover our referral program!! 
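For context, the single-node behaviour being described -- serve the stale copy immediately, refresh it in the background -- is typically configured along these lines (cache path, zone name and upstream are placeholders):

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=content:10m inactive=24h;

    server {
        location / {
            proxy_cache                   content;
            proxy_cache_use_stale         updating error timeout;
            proxy_cache_background_update on;      # available since 1.11.10
            proxy_cache_lock              on;
            proxy_pass                    http://origin;
        }
    }

The clustering question is then how to coordinate several nodes that each run a configuration like this, since each node refreshes only its own copy of the cache.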
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: olhgllimkponlhjm.gif Type: image/gif Size: 4180 bytes Desc: not available URL: From lucas at slcoding.com Fri Jul 7 10:12:35 2017 From: lucas at slcoding.com (Lucas Rolff) Date: Fri, 07 Jul 2017 12:12:35 +0200 Subject: NGINX stale-while-revalidate cluster In-Reply-To: <7d98c323-cbdd-b158-e4b2-260681324ca0@marfeel.com> References: <7d98c323-cbdd-b158-e4b2-260681324ca0@marfeel.com> Message-ID: <3E009A4E-0D3A-49FF-B7CC-D1EA1A53AEDA@slcoding.com> Instead of doing round robin load balancing why not do a URI based load balancing? Then you ensure your cached file is only present on a single machine behind the load balancer. Sure there will be moments where this is not the case ? let's assume that a box goes down, and traffic will switch, but in that case I'd as a "post task" take the moment from when the machine went down, until it came online again, find all requests that expired in the meantime, and flush it to ensure the entry is updated on the machine that had been down in the meantime. It will still require some work, but at least over time your "overhead" should be less. From: nginx on behalf of Joan Tom?s i Buliart Reply-To: Date: Friday, 7 July 2017 at 11.52 To: Subject: NGINX stale-while-revalidate cluster Hi, We are implementing an stale-while-revalidate webserver cluster with NGINX. We are using the new proxy_cache_background_update to answer request as soon as possible while NGINX updates the content from the origin in the background. This solution works perfectly when the requests for the same object are served by the same NGINX server (when we have only one server or when we have a previous load balancer that classifies the requests). In our scenario we have a round robin load balancer (ELB) and we need to scale the webservers layer. So, as a consequence, only the Nginx that receive the request updates the cache content while the others keep the old version. This means that, we can send old versions of content due to the content not being updated on all the webservers. The problem accentuates when we put a CDN in front of the webservers. We are thinking on developing something that once an Nginx instance updates its cache would let know all other instances to get a copy of the newest content. We are thinking about processing NGINX logs and, when it detects a MISS, EXPIRED or UPDATING cache status, it makes a HEAD request to the other NGINXs on the cluster to force the invalidation of this content. Do any of you have dealt with this problem or a similar one? We have also tried the post_action but it is blocking the client request until it completes. It is not clear for us which would be the best approach. The options that we are considering are: - NGINX module - LUA script - External script that process syslog entries from NGINX What would be your recommendation? Many thanks in advance, -- Joan Tom?s-Buliart Joan Tom?s-Buliart +34 931 785 950 www.marfeel.com Discover our referral program!! _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: olhgllimkponlhjm.gif Type: image/gif Size: 4180 bytes Desc: not available URL: From joan.tomas at marfeel.com Fri Jul 7 10:30:28 2017 From: joan.tomas at marfeel.com (=?UTF-8?Q?Joan_Tom=c3=a0s_i_Buliart?=) Date: Fri, 7 Jul 2017 12:30:28 +0200 Subject: NGINX stale-while-revalidate cluster In-Reply-To: <3E009A4E-0D3A-49FF-B7CC-D1EA1A53AEDA@slcoding.com> References: <7d98c323-cbdd-b158-e4b2-260681324ca0@marfeel.com> <3E009A4E-0D3A-49FF-B7CC-D1EA1A53AEDA@slcoding.com> Message-ID: Hi Lucas On 07/07/17 12:12, Lucas Rolff wrote: > Instead of doing round robin load balancing why not do a URI based > load balancing? Then you ensure your cached file is only present on a > single machine behind the load balancer. Yes, we considered this option but it forces us to deploy and maintain another layer (LB+NG+AppServer). All cloud providers have round robin load balancers out-of-the-box but no one provides URI based load balancer. Moreover, in our scenario, our webservers layer is quite dynamic due to scaling up/down. Best, Joan From mdounin at mdounin.ru Fri Jul 7 12:28:48 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 7 Jul 2017 15:28:48 +0300 Subject: nginx -s reload terminates connections In-Reply-To: <08fe441ce862adbd519bb3144d4a3c9d.NginxMailingListEnglish@forum.nginx.org> References: <635f48c5d7f72deb6fe8d84a5e0ce252.NginxMailingListEnglish@forum.nginx.org> <75417f3e77a575eedaf74831266e6c02.NginxMailingListEnglish@forum.nginx.org> <08fe441ce862adbd519bb3144d4a3c9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170707122847.GL55433@mdounin.ru> Hello! On Fri, Jul 07, 2017 at 05:49:18AM -0400, JackB wrote: > Tested ab and confirmed terminated requests with reloads during run > (complete output pasted at the end). > > Failed requests: 180 > (Connect: 0, Receive: 24, Length: 78, Exceptions: 78) > > ab reports reports terminated connections while receiving. This should not > happen at all when executing a reload. > > But: > If I run ab without keepalive connections, it is not possible for me to get > failed requests by reloading nginx. Maybe the problem is related with > persistent connections. > > Does your edge nginx use keepalive connections to access the internal > nginx? TWIMC: https://trac.nginx.org/nginx/ticket/1022#comment:1 TL;DR: when using keepalive connections, clients are expected to be prepared that a connection can be closed. -- Maxim Dounin http://nginx.org/ From frank.dias at prodea.com Fri Jul 7 13:24:59 2017 From: frank.dias at prodea.com (Frank Dias) Date: Fri, 7 Jul 2017 13:24:59 +0000 Subject: NGINX stale-while-revalidate cluster Message-ID: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> Have you thought about using a shared file system for the cache. This way all the nginx 's are looking at the same cached content. On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart wrote: Hi Lucas On 07/07/17 12:12, Lucas Rolff wrote: > Instead of doing round robin load balancing why not do a URI based > load balancing? Then you ensure your cached file is only present on a > single machine behind the load balancer. Yes, we considered this option but it forces us to deploy and maintain another layer (LB+NG+AppServer). All cloud providers have round robin load balancers out-of-the-box but no one provides URI based load balancer. Moreover, in our scenario, our webservers layer is quite dynamic due to scaling up/down. 
Best, Joan _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This message is confidential to Prodea unless otherwise indicated or apparent from its nature. This message is directed to the intended recipient only, who may be readily determined by the sender of this message and its contents. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient:(a)any dissemination or copying of this message is strictly prohibited; and(b)immediately notify the sender by return message and destroy any copies of this message in any form(electronic, paper or otherwise) that you have.The delivery of this message and its information is neither intended to be nor constitutes a disclosure or waiver of any trade secrets, intellectual property, attorney work product, or attorney-client communications. The authority of the individual sending this message to legally bind Prodea is neither apparent nor implied,and must be independently verified. -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Fri Jul 7 13:39:03 2017 From: peter_booth at me.com (Peter Booth) Date: Fri, 07 Jul 2017 09:39:03 -0400 Subject: NGINX stale-while-revalidate cluster In-Reply-To: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> Message-ID: You could do that but it would be bad. Nginx' great performance is based on serving files from a local Fisk and the behavior of a Linux page cache. If you serve from a shared (nfs) filsystem then every request is slower. You shouldn't slow down the common case just to increase cache hit rate. Sent from my iPhone > On Jul 7, 2017, at 9:24 AM, Frank Dias wrote: > > Have you thought about using a shared file system for the cache. This way all the nginx 's are looking at the same cached content. > > On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart wrote: > Hi Lucas > > On 07/07/17 12:12, Lucas Rolff wrote: > > Instead of doing round robin load balancing why not do a URI based > > load balancing? Then you ensure your cached file is only present on a > > single machine behind the load balancer. > > Yes, we considered this option but it forces us to deploy and maintain > another layer (LB+NG+AppServer). All cloud providers have round robin > load balancers out-of-the-box but no one provides URI based load > balancer. Moreover, in our scenario, our webservers layer is quite > dynamic due to scaling up/down. > > Best, > > Joan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > This message is confidential to Prodea unless otherwise indicated or apparent from its nature. This message is directed to the intended recipient only, who may be readily determined by the sender of this message and its contents. 
If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient:(a)any dissemination or copying of this message is strictly prohibited; and(b)immediately notify the sender by return message and destroy any copies of this message in any form(electronic, paper or otherwise) that you have.The delivery of this message and its information is neither intended to be nor constitutes a disclosure or waiver of any trade secrets, intellectual property, attorney work product, or attorney-client communications. The authority of the individual sending this message to legally bind Prodea is neither apparent nor implied,and must be independently verified. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From enderulusoy at gmail.com Fri Jul 7 13:39:47 2017 From: enderulusoy at gmail.com (ender ulusoy) Date: Fri, 7 Jul 2017 16:39:47 +0300 Subject: How to handle 500 Error on upstream itself, While Nginx handle other 5xx errors Message-ID: We have a NGINX reverse proxy (clustered) with 45 upstreams (22 domains, 20 subdomains, 11 apps). Some of our projects hosts apis for some users globally. Our developers designed special custom 500 responses for special cases and want to show that messages only for tomcat_api upstream. They want to serve 500 pages and handle exceptions on tomcat_api upstream members not on NGINX. But we want to handle all other 5xx errors on NGINX. As I can not succeeded need a hand right now. Here is the server configuration follows, could you please help? Thank you. server { server_name api.hotcoldwarm.com; location / { add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header x-redirect-uri hotcoldwarm; add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; always"; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://tomcat_api; proxy_next_upstream error timeout http_502 http_503 http_504 http_404; proxy_connect_timeout 1; } listen 80; listen 443 ssl; ssl_certificate /etc/rapidssl/live/api.hotcoldwarm.com/fullchain.pem; ssl_certificate_key /etc/rapidssl/live/api.hotcoldwarm.com/privkey.pem; include /etc/rapidssl/options-ssl-nginx.conf; if ($scheme != "https") { return 301 https://$host$request_uri; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From owen at nginx.com Fri Jul 7 14:04:31 2017 From: owen at nginx.com (Owen Garrett) Date: Fri, 7 Jul 2017 15:04:31 +0100 Subject: NGINX stale-while-revalidate cluster In-Reply-To: References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> Message-ID: <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> There are a couple of options described here that you could consider if you want to share your cache between NGINX instances: https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ describes a sharded cache approach, where you load-balance by URI across the NGINX cache servers. 
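A minimal sketch of that URI-sharded layout, assuming hypothetical cache-node addresses (not taken from this thread); the consistent parameter keeps most URIs mapped to the same node when a node is added or removed:

    upstream cache_nodes {
        hash $request_uri consistent;   # pin each URI to one cache node
        server 10.0.0.11;               # hypothetical cache servers
        server 10.0.0.12;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://cache_nodes;
        }
    }

With a layout like this, requests for a given URI normally land on the same cache node, so refreshing a stale entry on that node refreshes the only copy that matters.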
You can combine your front-end load balancers and back-end caches onto one tier to reduce your footprint if you wish https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/ describes an alternative HA (shared) approach that replicates the cache so that there?s no increased load on the origin server if one cache server fails. It?s not possible to share a cache across instances by using a shared filesystem (e.g. nfs). --- owen at nginx.com Skype: owen.garrett Cell: +44 7764 344779 > On 7 Jul 2017, at 14:39, Peter Booth wrote: > > You could do that but it would be bad. Nginx' great performance is based on serving files from a local Fisk and the behavior of a Linux page cache. If you serve from a shared (nfs) filsystem then every request is slower. You shouldn't slow down the common case just to increase cache hit rate. > > Sent from my iPhone > > On Jul 7, 2017, at 9:24 AM, Frank Dias > wrote: > >> Have you thought about using a shared file system for the cache. This way all the nginx 's are looking at the same cached content. >> >> On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart > wrote: >> Hi Lucas >> >> On 07/07/17 12:12, Lucas Rolff wrote: >> > Instead of doing round robin load balancing why not do a URI based >> > load balancing? Then you ensure your cached file is only present on a >> > single machine behind the load balancer. >> >> Yes, we considered this option but it forces us to deploy and maintain >> another layer (LB+NG+AppServer). All cloud providers have round robin >> load balancers out-of-the-box but no one provides URI based load >> balancer. Moreover, in our scenario, our webservers layer is quite >> dynamic due to scaling up/down. >> >> Best, >> >> Joan >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> This message is confidential to Prodea unless otherwise indicated or apparent from its nature. This message is directed to the intended recipient only, who may be readily determined by the sender of this message and its contents. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient:(a)any dissemination or copying of this message is strictly prohibited; and(b)immediately notify the sender by return message and destroy any copies of this message in any form(electronic, paper or otherwise) that you have.The delivery of this message and its information is neither intended to be nor constitutes a disclosure or waiver of any trade secrets, intellectual property, attorney work product, or attorney-client communications. The authority of the individual sending this message to legally bind Prodea is neither apparent nor implied,and must be independently verified. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mailinglisten at simonhoenscheid.de Fri Jul 7 15:55:20 2017 From: mailinglisten at simonhoenscheid.de (mailinglisten at simonhoenscheid.de) Date: Fri, 07 Jul 2017 17:55:20 +0200 Subject: Writing Header to Logfile Message-ID: <4e40909103dc1d416e2e64e33432b1d5@simonhoenscheid.de> Hello List, I would like to log a header which is send with the incoming request into a custom log field. How can this be done? Kind Regards Simon From nginx-forum at forum.nginx.org Fri Jul 7 19:41:11 2017 From: nginx-forum at forum.nginx.org (rlx01) Date: Fri, 07 Jul 2017 15:41:11 -0400 Subject: nginx -s reload terminates connections In-Reply-To: <20170707122847.GL55433@mdounin.ru> References: <20170707122847.GL55433@mdounin.ru> Message-ID: <731deb6691e29829de520f9f33d92c99.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thank you for the link to the ticket. IMHO this "during configuration reload existing connections are closed as soon as they are idle" is kind of misleading as the connection is not idle since it's being slammed with requests, it's "idle" in the sense that nginx processed the last request that was in the "queue" for that keepalive connection. As this keeps being reported, maybe it would be useful (?) if there was an option to "drain" keepalive connections (with a hard time limit) before terminating them. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275328,275369#msg-275369 From nginx-forum at forum.nginx.org Fri Jul 7 21:28:34 2017 From: nginx-forum at forum.nginx.org (rlx01) Date: Fri, 07 Jul 2017 17:28:34 -0400 Subject: nginx -s reload terminates connections In-Reply-To: <731deb6691e29829de520f9f33d92c99.NginxMailingListEnglish@forum.nginx.org> References: <20170707122847.GL55433@mdounin.ru> <731deb6691e29829de520f9f33d92c99.NginxMailingListEnglish@forum.nginx.org> Message-ID: <27bf3deaa6cbaa60ddbafb1d8173d9ad.NginxMailingListEnglish@forum.nginx.org> I found worker_shutdown_timeout and the code for graceful shutdown was fairly easy to follow, so we've patched it to do what we want. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275328,275370#msg-275370 From zchao1995 at gmail.com Sat Jul 8 05:32:11 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Fri, 7 Jul 2017 22:32:11 -0700 Subject: Writing Header to Logfile In-Reply-To: <4e40909103dc1d416e2e64e33432b1d5@simonhoenscheid.de> References: <4e40909103dc1d416e2e64e33432b1d5@simonhoenscheid.de> Message-ID: I would like to log a header which is send with the incoming request into a custom log field. How can this be done? you mean the request header? On 7 July 2017 at 23:55:36, mailinglisten at simonhoenscheid.de ( mailinglisten at simonhoenscheid.de) wrote: Hello List, I would like to log a header which is send with the incoming request into a custom log field. How can this be done? Kind Regards Simon _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From joan.tomas at marfeel.com Sat Jul 8 13:00:40 2017 From: joan.tomas at marfeel.com (=?UTF-8?Q?Joan_Tom=c3=a0s_i_Buliart?=) Date: Sat, 8 Jul 2017 15:00:40 +0200 Subject: NGINX stale-while-revalidate cluster In-Reply-To: <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> Message-ID: <75465980-150b-3fd0-e43c-6cc246d746c4@marfeel.com> Thanks Owen! 
We considered all the options on these 2 documents but, on our environment in which is important to use stale-while-revalidate, all of them have, at least, one of these drawbacks: or it adds a layer in the fast path to the content or it can't guarantee that one request on a stale content will force the invalidation off all the copies of this object. That is the reason for which we are looking for a "background" alternative to update the content. Many thanks in any case, Joan On 07/07/17 16:04, Owen Garrett wrote: > There are a couple of options described here that you could consider > if you want to share your cache between NGINX instances: > > https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ describes > a sharded cache approach, where you load-balance by URI across the > NGINX cache servers. You can combine your front-end load balancers > and back-end caches onto one tier to reduce your footprint if you wish > > https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/ describes > an alternative HA (shared) approach that replicates the cache so that > there?s no increased load on the origin server if one cache server fails. > > It?s not possible to share a cache across instances by using a shared > filesystem (e.g. nfs). > > --- > owen at nginx.com > Skype: owen.garrett > Cell: +44 7764 344779 > >> On 7 Jul 2017, at 14:39, Peter Booth > > wrote: >> >> You could do that but it would be bad. Nginx' great performance is >> based on serving files from a local Fisk and the behavior of a Linux >> page cache. If you serve from a shared (nfs) filsystem then every >> request is slower. You shouldn't slow down the common case just to >> increase cache hit rate. >> >> Sent from my iPhone >> >> On Jul 7, 2017, at 9:24 AM, Frank Dias > > wrote: >> >>> Have you thought about using a shared file system for the cache. >>> This way all the nginx 's are looking at the same cached content. >>> >>> On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart >> > wrote: >>> >>> Hi Lucas >>> >>> On 07/07/17 12:12, Lucas Rolff wrote: >>> > Instead of doing round robin load balancing why not do a URI >>> based >>> > load balancing? Then you ensure your cached file is only >>> present on a >>> > single machine behind the load balancer. >>> >>> Yes, we considered this option but it forces us to deploy and >>> maintain >>> another layer (LB+NG+AppServer). All cloud providers have round >>> robin >>> load balancers out-of-the-box but no one provides URI based load >>> balancer. Moreover, in our scenario, our webservers layer is quite >>> dynamic due to scaling up/down. >>> >>> Best, >>> >>> Joan >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> This message is confidential to Prodea unless otherwise indicated or >>> apparent from its nature. This message is directed to the intended >>> recipient only, who may be readily determined by the sender of this >>> message and its contents. 
If the reader of this message is not the >>> intended recipient, or an employee or agent responsible for >>> delivering this message to the intended recipient:(a)any >>> dissemination or copying of this message is strictly prohibited; >>> and(b)immediately notify the sender by return message and destroy >>> any copies of this message in any form(electronic, paper or >>> otherwise) that you have.The delivery of this message and its >>> information is neither intended to be nor constitutes a disclosure >>> or waiver of any trade secrets, intellectual property, attorney work >>> product, or attorney-client communications. The authority of the >>> individual sending this message to legally bind Prodea is neither >>> apparent nor implied,and must be independently verified. >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Sat Jul 8 13:30:15 2017 From: peter_booth at me.com (Peter Booth) Date: Sat, 08 Jul 2017 09:30:15 -0400 Subject: NGINX stale-while-revalidate cluster In-Reply-To: <75465980-150b-3fd0-e43c-6cc246d746c4@marfeel.com> References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> <75465980-150b-3fd0-e43c-6cc246d746c4@marfeel.com> Message-ID: Perhaps it would help if, rather than focus on the specific solution that you are wanting, you instead explained your specific problem and business context? What is driving your architecture? Is it about protecting a backend that doesn't scale or more about reducing latencies? How many different requests are there that might be cached? What are the backend calls doing? How do cached objects expire? How long does a call to the backend take? Why is it OK to return a stale version of X to the first client but not OK to return a stale version to a second requester? Imagine a scenario where two identical requests arrive from different clients and hit different web servers. Is it OK for both requests to be satisfied with a stale resource? It's very easy for us to make incorrect assumptions about all of these questions because of our own experiences. Peter Sent from my iPhone > On Jul 8, 2017, at 9:00 AM, Joan Tom?s i Buliart wrote: > > Thanks Owen! > > We considered all the options on these 2 documents but, on our environment in which is important to use stale-while-revalidate, all of them have, at least, one of these drawbacks: or it adds a layer in the fast path to the content or it can't guarantee that one request on a stale content will force the invalidation off all the copies of this object. > > That is the reason for which we are looking for a "background" alternative to update the content. > > Many thanks in any case, > > Joan > >> On 07/07/17 16:04, Owen Garrett wrote: >> There are a couple of options described here that you could consider if you want to share your cache between NGINX instances: >> >> https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ describes a sharded cache approach, where you load-balance by URI across the NGINX cache servers. 
You can combine your front-end load balancers and back-end caches onto one tier to reduce your footprint if you wish >> >> https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/ describes an alternative HA (shared) approach that replicates the cache so that there?s no increased load on the origin server if one cache server fails. >> >> It?s not possible to share a cache across instances by using a shared filesystem (e.g. nfs). >> >> --- >> owen at nginx.com >> Skype: owen.garrett >> Cell: +44 7764 344779 >> >>> On 7 Jul 2017, at 14:39, Peter Booth wrote: >>> >>> You could do that but it would be bad. Nginx' great performance is based on serving files from a local Fisk and the behavior of a Linux page cache. If you serve from a shared (nfs) filsystem then every request is slower. You shouldn't slow down the common case just to increase cache hit rate. >>> >>> Sent from my iPhone >>> >>> On Jul 7, 2017, at 9:24 AM, Frank Dias wrote: >>> >>>> Have you thought about using a shared file system for the cache. This way all the nginx 's are looking at the same cached content. >>>> >>>> On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart wrote: >>>> Hi Lucas >>>> >>>> On 07/07/17 12:12, Lucas Rolff wrote: >>>> > Instead of doing round robin load balancing why not do a URI based >>>> > load balancing? Then you ensure your cached file is only present on a >>>> > single machine behind the load balancer. >>>> >>>> Yes, we considered this option but it forces us to deploy and maintain >>>> another layer (LB+NG+AppServer). All cloud providers have round robin >>>> load balancers out-of-the-box but no one provides URI based load >>>> balancer. Moreover, in our scenario, our webservers layer is quite >>>> dynamic due to scaling up/down. >>>> >>>> Best, >>>> >>>> Joan >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> This message is confidential to Prodea unless otherwise indicated or apparent from its nature. This message is directed to the intended recipient only, who may be readily determined by the sender of this message and its contents. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient:(a)any dissemination or copying of this message is strictly prohibited; and(b)immediately notify the sender by return message and destroy any copies of this message in any form(electronic, paper or otherwise) that you have.The delivery of this message and its information is neither intended to be nor constitutes a disclosure or waiver of any trade secrets, intellectual property, attorney work product, or attorney-client communications. The authority of the individual sending this message to legally bind Prodea is neither apparent nor implied,and must be independently verified. 
>>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From liulantao at gmail.com Sat Jul 8 17:33:54 2017 From: liulantao at gmail.com (Liu Lantao) Date: Sun, 9 Jul 2017 01:33:54 +0800 Subject: Writing Header to Logfile In-Reply-To: <4e40909103dc1d416e2e64e33432b1d5@simonhoenscheid.de> References: <4e40909103dc1d416e2e64e33432b1d5@simonhoenscheid.de> Message-ID: <018D0189-FB36-4D41-900A-ED704F20DD85@gmail.com> According to nginx document (http://nginx.org/en/docs/http/ngx_http_core_module.html#var_http_): --- $http_name arbitrary request header field; the last part of a variable name is the field name converted to lower case with dashes replaced by underscores ??? if the request header name is 'X-Custom-Header', then the variable name is ?$http_x_custom_header?. > On Jul 7, 2017, at 11:55 PM, mailinglisten at simonhoenscheid.de wrote: > > Hello List, > > I would like to log a header which is send with the incoming request into a custom log field. How can this be done? > > Kind Regards > Simon > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From joan.tomas at marfeel.com Sat Jul 8 20:28:54 2017 From: joan.tomas at marfeel.com (=?UTF-8?Q?Joan_Tom=c3=a0s_i_Buliart?=) Date: Sat, 8 Jul 2017 22:28:54 +0200 Subject: NGINX stale-while-revalidate cluster In-Reply-To: References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> <75465980-150b-3fd0-e43c-6cc246d746c4@marfeel.com> Message-ID: Hi Peter, yes, it's true. I will try to explain our problem better. We provide a mobile solution for newspaper and media groups. With this kind of partners, it is easy to have a peak of traffic. We prefer to give stale content (1 or 2 minutes stale content, not more) instead of block the request for some seconds (the time that our tomcat back-end could expend to crawl our customers desktop site and generate the new content). As I tried to explain in my first e-mail, the proxy_cache_background_update works ok while the number of servers is fix and the LB in front of them does a URI load balancer. The major problem appears when the servers has to scale up and scale down. Imagine that the URL1 is cache by server 1. All the request for URL1 are redirected to Server1 by the LB. Suddenly, the traffic raise up and a new server is added. The LB will remap the request in order to send some URLs to server 2. The URL1 is one of this group of URL that goes to server 2. Some hours later, the traffic goes down and the server 2 is removed. In this situation, the new request that arrive to Server 1 asking for URL1 will receive the version of some hours before (not some minutes). This is what we are trying to avoid. 
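(Back to the "Writing Header to Logfile" question earlier in this digest: a minimal sketch, assuming the incoming header is called X-Custom-Header as in Liu Lantao's example; log_format belongs in the http context and the format name here is made up for illustration.)

    log_format with_header '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent '
                           'x_custom_header="$http_x_custom_header"';

    access_log /var/log/nginx/access.log with_header;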
Many thanks for all your feedback and suggestions, Joan Joan Tom?s-Buliart On 08/07/17 15:30, Peter Booth wrote: > Perhaps it would help if, rather than focus on the specific solution > that you are wanting, you instead explained your specific problem and > business context? > > What is driving your architecture? Is it about protecting a backend > that doesn't scale or more about reducing latencies? > > How many different requests are there that might be cached? What are > the backend calls doing? How do cached objects expire? How long does a > call to the backend take? > Why is it OK to return a stale version of X to the first client but > not OK to return a stale version to a second requester? > > Imagine a scenario where two identical requests arrive from different > clients and hit different web servers. Is it OK for both requests to > be satisfied with a stale resource? > > It's very easy for us to make incorrect assumptions about all of these > questions because of our own experiences. > > Peter > > Sent from my iPhone > > On Jul 8, 2017, at 9:00 AM, Joan Tom?s i Buliart > > wrote: > >> Thanks Owen! >> >> We considered all the options on these 2 documents but, on our >> environment in which is important to use stale-while-revalidate, all >> of them have, at least, one of these drawbacks: or it adds a layer in >> the fast path to the content or it can't guarantee that one request >> on a stale content will force the invalidation off all the copies of >> this object. >> >> That is the reason for which we are looking for a "background" >> alternative to update the content. >> >> Many thanks in any case, >> >> Joan >> >> On 07/07/17 16:04, Owen Garrett wrote: >>> There are a couple of options described here that you could consider >>> if you want to share your cache between NGINX instances: >>> >>> https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ describes >>> a sharded cache approach, where you load-balance by URI across the >>> NGINX cache servers. You can combine your front-end load balancers >>> and back-end caches onto one tier to reduce your footprint if you wish >>> >>> https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/ describes >>> an alternative HA (shared) approach that replicates the cache so >>> that there?s no increased load on the origin server if one cache >>> server fails. >>> >>> It?s not possible to share a cache across instances by using a >>> shared filesystem (e.g. nfs). >>> >>> --- >>> owen at nginx.com >>> Skype: owen.garrett >>> Cell: +44 7764 344779 >>> >>>> On 7 Jul 2017, at 14:39, Peter Booth >>> > wrote: >>>> >>>> You could do that but it would be bad. Nginx' great performance is >>>> based on serving files from a local Fisk and the behavior of a >>>> Linux page cache. If you serve from a shared (nfs) filsystem then >>>> every request is slower. You shouldn't slow down the common case >>>> just to increase cache hit rate. >>>> >>>> Sent from my iPhone >>>> >>>> On Jul 7, 2017, at 9:24 AM, Frank Dias >>> > wrote: >>>> >>>>> Have you thought about using a shared file system for the cache. >>>>> This way all the nginx 's are looking at the same cached content. >>>>> >>>>> On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart >>>>> > wrote: >>>>> >>>>> Hi Lucas >>>>> >>>>> On 07/07/17 12:12, Lucas Rolff wrote: >>>>> > Instead of doing round robin load balancing why not do a URI >>>>> based >>>>> > load balancing? Then you ensure your cached file is only >>>>> present on a >>>>> > single machine behind the load balancer. 
>>>>> >>>>> Yes, we considered this option but it forces us to deploy and >>>>> maintain >>>>> another layer (LB+NG+AppServer). All cloud providers have >>>>> round robin >>>>> load balancers out-of-the-box but no one provides URI based load >>>>> balancer. Moreover, in our scenario, our webservers layer is >>>>> quite >>>>> dynamic due to scaling up/down. >>>>> >>>>> Best, >>>>> >>>>> Joan >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>> >>>>> >>>>> This message is confidential to Prodea unless otherwise indicated >>>>> or apparent from its nature. This message is directed to the >>>>> intended recipient only, who may be readily determined by the >>>>> sender of this message and its contents. If the reader of this >>>>> message is not the intended recipient, or an employee or agent >>>>> responsible for delivering this message to the intended >>>>> recipient:(a)any dissemination or copying of this message is >>>>> strictly prohibited; and(b)immediately notify the sender by return >>>>> message and destroy any copies of this message in any >>>>> form(electronic, paper or otherwise) that you have.The delivery of >>>>> this message and its information is neither intended to be nor >>>>> constitutes a disclosure or waiver of any trade secrets, >>>>> intellectual property, attorney work product, or attorney-client >>>>> communications. The authority of the individual sending this >>>>> message to legally bind Prodea is neither apparent nor implied,and >>>>> must be independently verified. >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ng23 at firemail.cc Sun Jul 9 17:43:05 2017 From: ng23 at firemail.cc (Johan Andersson) Date: Sun, 09 Jul 2017 19:43:05 +0200 Subject: Flushing responses in nginx modules Message-ID: <7063b3f64e0aad4bed1fe3ece15f832d@firemail.cc> Hi everyone, I have some issues writing my nginx modules. I am on Debian Stretch, installed nginx with the default configuration, and took the hello_world module. It works without a hitch. Then I changed the handler to send three "hello world" responses, and sleep for one second between each response. However, when I look at the result in my browser, the page loads, pauses for three seconds, and then displays all three "hello world" messages at once. Actually I was flushing each response, so I expected each "hello world" message to appear one after the other, with one second pause between them. Am I doing something wrong? Is this event the correct way to achieve this? All functions return NGX_OK. 
This is my code:

static ngx_int_t ngx_http_hello_world_handler(ngx_http_request_t *r)
{
    ngx_buf_t   *b;
    ngx_chain_t  out;
    ngx_int_t    result;

    r->headers_out.content_type.len = sizeof("text/html") - 1;
    r->headers_out.content_type.data = (u_char *) "text/html";
    r->headers_out.status = NGX_HTTP_OK;
    //r->headers_out.content_length_n = sizeof(ngx_hello_world);
    ngx_http_send_header(r);

    for (int i = 0; i < 3; i++) {
        b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
        out.buf = b;
        out.next = NULL;

        b->pos = ngx_hello_world;
        b->last = ngx_hello_world + sizeof(ngx_hello_world);
        b->memory = 1;
        b->flush = 1;
        b->last_buf = (i == 2);

        result = ngx_http_output_filter(r, &out);
        ngx_http_send_special(r, NGX_HTTP_FLUSH);
        sleep(1);
    }

    return result;
}

Cheers
Johann

From sca at andreasschulze.de Sun Jul 9 20:38:01 2017
From: sca at andreasschulze.de (A. Schulze)
Date: Sun, 9 Jul 2017 22:38:01 +0200
Subject: Flushing responses in nginx modules
In-Reply-To: <7063b3f64e0aad4bed1fe3ece15f832d@firemail.cc>
References: <7063b3f64e0aad4bed1fe3ece15f832d@firemail.cc>
Message-ID:

On 09.07.2017 at 19:43, Johan Andersson wrote:
> Actually I was flushing each response, so I expected each "hello world" message to appear one after the other, with one second pause between them.

You may have a look at https://github.com/openresty/echo-nginx-module
As far as I know they solved similar problems.

Andreas

From peter_booth at me.com Sun Jul 9 20:58:59 2017
From: peter_booth at me.com (Peter Booth)
Date: Sun, 09 Jul 2017 16:58:59 -0400
Subject: NGINX stale-while-revalidate cluster
In-Reply-To:
References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> <75465980-150b-3fd0-e43c-6cc246d746c4@marfeel.com>
Message-ID: <3EC8F0DB-E170-4535-83C7-506D5FD19A70@me.com>

stale-while-revalidate is awesome, but it might not be the optimal tool here. It came out of Yahoo!, the sixth largest website in the world, who used a small number of caching proxies. In their context most content is served hot from cache. A cloud deployment typically means a larger number of VMs that are each a fraction of a physical server. Great for fine-grained control, but a problem for cache hit rates. So if you have much less traffic than Yahoo spread across a larger number of web servers then your hit rates will suffer. What hit rates do you see today?

Dynamic scale-out isn't very compatible with caching reverse proxies. Can you separate the caching reverse proxy functionality from the other functionality and keep the number of caches constant, whilst scaling out the web servers?

You give the example of a few-hour-old page being served because of a scale-out event. Is that the most common case of cache misses in your context, or is it unpopular pages and quiet times of the day? Are these also served stale even when your server count is static?

Finally, if the root of the problem is serving very stale content, could you simply delete that content throughout the day? A script that finds and removes all cached files older than five minutes wouldn't take long to run.

Peter

> On 8 Jul 2017, at 4:28 PM, Joan Tomàs i Buliart wrote:
>
> Hi Peter,
>
> yes, it's true. I will try to explain our problem better.
>
> We provide a mobile solution for newspaper and media groups. With this kind of partners, it is easy to have a peak of traffic.
We prefer to give stale content (1 or 2 minutes stale content, not more) instead of block the request for some seconds (the time that our tomcat back-end could expend to crawl our customers desktop site and generate the new content). As I tried to explain in my first e-mail, the proxy_cache_background_update works ok while the number of servers is fix and the LB in front of them does a URI load balancer. > > The major problem appears when the servers has to scale up and scale down. Imagine that the URL1 is cache by server 1. All the request for URL1 are redirected to Server1 by the LB. Suddenly, the traffic raise up and a new server is added. The LB will remap the request in order to send some URLs to server 2. The URL1 is one of this group of URL that goes to server 2. Some hours later, the traffic goes down and the server 2 is removed. In this situation, the new request that arrive to Server 1 asking for URL1 will receive the version of some hours before (not some minutes). This is what we are trying to avoid. > > Many thanks for all your feedback and suggestions, > > > Joan > On 08/07/17 15:30, Peter Booth wrote: >> Perhaps it would help if, rather than focus on the specific solution that you are wanting, you instead explained your specific problem and business context? >> >> What is driving your architecture? Is it about protecting a backend that doesn't scale or more about reducing latencies? >> >> How many different requests are there that might be cached? What are the backend calls doing? How do cached objects expire? How long does a call to the backend take? >> Why is it OK to return a stale version of X to the first client but not OK to return a stale version to a second requester? >> >> Imagine a scenario where two identical requests arrive from different clients and hit different web servers. Is it OK for both requests to be satisfied with a stale resource? >> >> It's very easy for us to make incorrect assumptions about all of these questions because of our own experiences. >> >> Peter >> >> Sent from my iPhone >> >> On Jul 8, 2017, at 9:00 AM, Joan Tom?s i Buliart > wrote: >> >>> Thanks Owen! >>> >>> We considered all the options on these 2 documents but, on our environment in which is important to use stale-while-revalidate, all of them have, at least, one of these drawbacks: or it adds a layer in the fast path to the content or it can't guarantee that one request on a stale content will force the invalidation off all the copies of this object. >>> >>> That is the reason for which we are looking for a "background" alternative to update the content. >>> >>> Many thanks in any case, >>> >>> Joan >>> >>> On 07/07/17 16:04, Owen Garrett wrote: >>>> There are a couple of options described here that you could consider if you want to share your cache between NGINX instances: >>>> >>>> https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ describes a sharded cache approach, where you load-balance by URI across the NGINX cache servers. You can combine your front-end load balancers and back-end caches onto one tier to reduce your footprint if you wish >>>> >>>> https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/ describes an alternative HA (shared) approach that replicates the cache so that there?s no increased load on the origin server if one cache server fails. >>>> >>>> It?s not possible to share a cache across instances by using a shared filesystem (e.g. nfs). 
>>>> >>>> --- >>>> owen at nginx.com >>>> Skype: owen.garrett >>>> Cell: +44 7764 344779 >>>> >>>>> On 7 Jul 2017, at 14:39, Peter Booth > wrote: >>>>> >>>>> You could do that but it would be bad. Nginx' great performance is based on serving files from a local Fisk and the behavior of a Linux page cache. If you serve from a shared (nfs) filsystem then every request is slower. You shouldn't slow down the common case just to increase cache hit rate. >>>>> >>>>> Sent from my iPhone >>>>> >>>>> On Jul 7, 2017, at 9:24 AM, Frank Dias > wrote: >>>>> >>>>>> Have you thought about using a shared file system for the cache. This way all the nginx 's are looking at the same cached content. >>>>>> >>>>>> On Jul 7, 2017 5:30 AM, Joan Tom?s i Buliart > wrote: >>>>>> Hi Lucas >>>>>> >>>>>> On 07/07/17 12:12, Lucas Rolff wrote: >>>>>> > Instead of doing round robin load balancing why not do a URI based >>>>>> > load balancing? Then you ensure your cached file is only present on a >>>>>> > single machine behind the load balancer. >>>>>> >>>>>> Yes, we considered this option but it forces us to deploy and maintain >>>>>> another layer (LB+NG+AppServer). All cloud providers have round robin >>>>>> load balancers out-of-the-box but no one provides URI based load >>>>>> balancer. Moreover, in our scenario, our webservers layer is quite >>>>>> dynamic due to scaling up/down. >>>>>> >>>>>> Best, >>>>>> >>>>>> Joan >>>>>> _______________________________________________ >>>>>> nginx mailing list >>>>>> nginx at nginx.org >>>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>>>> >>>>>> This message is confidential to Prodea unless otherwise indicated or apparent from its nature. This message is directed to the intended recipient only, who may be readily determined by the sender of this message and its contents. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient:(a)any dissemination or copying of this message is strictly prohibited; and(b)immediately notify the sender by return message and destroy any copies of this message in any form(electronic, paper or otherwise) that you have.The delivery of this message and its information is neither intended to be nor constitutes a disclosure or waiver of any trade secrets, intellectual property, attorney work product, or attorney-client communications. The authority of the individual sending this message to legally bind Prodea is neither apparent nor implied,and must be independently verified. >>>>>> _______________________________________________ >>>>>> nginx mailing list >>>>>> nginx at nginx.org >>>>>> http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From sdnetwork at gmail.com Sun Jul 9 21:26:47 2017
From: sdnetwork at gmail.com (Arnaud Le-roy)
Date: Sun, 9 Jul 2017 23:26:47 +0200
Subject: Nginx Proxy seems to send twice the same request to the backend
Message-ID: <4A8C8A80-EF80-4CD3-9C0C-240279B64E06@gmail.com>

Hello,

I have run into some strange behaviour with nginx: my backend seems to receive the same request twice from the nginx proxy. To be sure that it is not the client sending two requests, I added a uuid parameter to each request.

When the problem occurs, I find one successful request in access.log:

x.x.x.x - - [09/Jul/2017:09:18:33 +0200] "GET /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428 HTTP/1.1" 200 2 "-" "-"

and another one that generates this entry in error.log:

2017/07/09 09:18:31 [error] 38111#38111: *4098505 upstream prematurely closed connection while reading response header from upstream, client: x.x.x.x, server: x.x.com, request: "GET /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428 HTTP/1.1", upstream: "http://172.16.0.11:9092/query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428", host: "x.x.com"

On my backend I can see two requests with the same uuid (both succeed):

{"pid":11424,"level":"info","message":"[API] AUTH1 /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428","timestamp":"2017-07-09 09:18:31.861Z"}
{"pid":11424,"level":"info","message":"[API] AUTH1 /query?uid=b85cc8a4-b9cd-4093-aea5-95c0ea1391a6_428","timestamp":"2017-07-09 09:18:33.196Z"}

The client is a node program, so I am sure that it sends only one request with the same uuid (no thread problem ;). nginx acts as a simple proxy (no load balancing).

[nginx.conf]

user www-data;
worker_processes 8;
worker_rlimit_nofile 8192;
pid /run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on;
}

http {
    upstream api {
        keepalive 100;
        server 172.16.0.11:9092;
    }

    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    proxy_buffering off;
    proxy_buffer_size 128k;
    proxy_buffers 100 128k;
    proxy_http_version 1.1;

    ### timeouts ###
    resolver_timeout 6;
    client_header_timeout 30;
    client_body_timeout 600;
    send_timeout 10;
    keepalive_timeout 65 20;
    proxy_read_timeout 600;

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name x.x.com;
        include /etc/nginx/nginx_ssl.conf;
        client_max_body_size 200M;

        location / {
            proxy_next_upstream off;
            proxy_pass http://api;
            proxy_redirect http://api/ https://$host/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Connection "";
        }
    }
}

The backend is a simple node server. The problem occurs randomly, and it happens for sure on nginx/1.10.3 and nginx/1.13.2 on debian/jessie.

After some days of research, I found that if I remove "keepalive 100" from the upstream configuration the problem no longer occurs, but I don't understand why. Maybe somebody can explain to me what could be happening? Maybe I have misunderstood some part of the keepalive configuration?

To me it looks like a problem in nginx. If you can't explain it from this information, I can send some debug (nginx-debug) logs to you.

Thanks in advance.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zchao1995 at gmail.com Mon Jul 10 02:04:37 2017
From: zchao1995 at gmail.com (Zhang Chao)
Date: Sun, 9 Jul 2017 21:04:37 -0500
Subject: Flushing responses in nginx modules
In-Reply-To: <7063b3f64e0aad4bed1fe3ece15f832d@firemail.cc>
References: <7063b3f64e0aad4bed1fe3ece15f832d@firemail.cc>
Message-ID:

Hello!
You mustn?t use standard sleep function for it will block Nginx?s events loop, alternatively, you need to put your write event to a timer, set the proper handler when the timer expires. BTW, you should always check the return value of ngx_http_send_header and ngx_http_output_filter. On 10 July 2017 at 01:43:46, Johan Andersson (ng23 at firemail.cc) wrote: Hi everyone, I have some issues writing my nginx modules. I am on Debian Stretch, installed nginx with the default configuration, and took the hello_world module. It works without a hitch. Then I changed the handler to send three "hello world" responses, and sleep for one second between each response. However, when I look at the result in my browser, the page loads, pauses for three seconds, and then displays all three "hello world" messages at once. Actually I was flushing each response, so I expected each "hello world" message to appear one after the other, with one second pause between them. Am I doing something wrong? Is this event the correct way to achieve this? All functions return NGX_OK. This is my code: static ngx_int_t ngx_http_hello_world_handler(ngx_http_request_t *r) { ngx_buf_t *b; ngx_chain_t out; ngx_int_t result; r->headers_out.content_type.len = sizeof("text/html") - 1; r->headers_out.content_type.data = (u_char *) "text/html"; r->headers_out.status = NGX_HTTP_OK; //r->headers_out.content_length_n = sizeof(ngx_hello_world); ngx_http_send_header(r); for(int i = 0; i < 3; i++) { b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t)); out.buf = b; out.next = NULL; b->pos = ngx_hello_world; b->last = ngx_hello_world + sizeof(ngx_hello_world); b->memory = 1; b->flush = 1; b->last_buf = (i == 2); result = ngx_http_output_filter(r, &out); ngx_http_send_special(r, NGX_HTTP_FLUSH); sleep(1); } return result; } Cheers Johann _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jul 10 02:55:02 2017 From: nginx-forum at forum.nginx.org (motor) Date: Sun, 09 Jul 2017 22:55:02 -0400 Subject: Upstream HTTP/2 support Message-ID: <102558b0b56ca135655aa9fbf8644d4d.NginxMailingListEnglish@forum.nginx.org> The following post is based on the assumptions that 1. upstream HTTP/2 support is not available in nginx at the moment, and 2. said feature is not firmly excluded from nginx's future road map :) We've been internally using an nginx patch (authored by me) which provides this feature in our nginx instances that front-end etcd servers. Please let me know if this patch can be a candidate for review and potential merge. In case the answer is yes, a heads up: said implementation has been subject to a few 'compromises' due to internal deadlines, so I might need a fair bit of technical guidance/discussion to iron out some design/implementation issues. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,275391,275391#msg-275391 From gfrankliu at gmail.com Mon Jul 10 04:56:01 2017 From: gfrankliu at gmail.com (Frank Liu) Date: Sun, 9 Jul 2017 21:56:01 -0700 Subject: Upstream HTTP/2 support In-Reply-To: <102558b0b56ca135655aa9fbf8644d4d.NginxMailingListEnglish@forum.nginx.org> References: <102558b0b56ca135655aa9fbf8644d4d.NginxMailingListEnglish@forum.nginx.org> Message-ID: This subject has been discussed here: https://trac.nginx.org/nginx/ticket/923 I think it is a good idea to have this support. 
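For context on what exists today: the stock proxy module speaks HTTP/1.1 at most to upstreams, with connection reuse via the keepalive directive, which is the behaviour such a patch would extend. A minimal sketch, with a hypothetical backend address:

    upstream backend {
        server 192.0.2.10:8080;          # hypothetical backend
        keepalive 16;                    # keep idle upstream connections open
    }

    server {
        listen 80;

        location / {
            proxy_http_version 1.1;          # default is 1.0; 1.1 is the highest proxy_pass supports
            proxy_set_header Connection "";  # don't send "Connection: close" upstream
            proxy_pass http://backend;
        }
    }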
On Sun, Jul 9, 2017 at 7:55 PM, motor wrote: > The following post is based on the assumptions that 1. upstream HTTP/2 > support is not available in nginx at the moment, and 2. said feature is not > firmly excluded from nginx's future road map :) > > We've been internally using an nginx patch (authored by me) which provides > this feature in our nginx instances that front-end etcd servers. > Please let me know if this patch can be a candidate for review and > potential > merge. > > In case the answer is yes, a heads up: said implementation has been subject > to a few 'compromises' due to internal deadlines, so I might need a fair > bit > of technical guidance/discussion to iron out some design/implementation > issues. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,275391,275391#msg-275391 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jul 10 12:07:58 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Jul 2017 15:07:58 +0300 Subject: NGINX stale-while-revalidate cluster In-Reply-To: References: <6e27a978-84b1-4e04-8110-b1d3b563cf8f@email.android.com> <2B21C7BA-49CB-424F-BE01-70598ECFBCED@nginx.com> <75465980-150b-3fd0-e43c-6cc246d746c4@marfeel.com> Message-ID: <20170710120758.GM55433@mdounin.ru> Hello! On Sat, Jul 08, 2017 at 10:28:54PM +0200, Joan Tom?s i Buliart wrote: > Hi Peter, > > > yes, it's true. I will try to explain our problem better. > > We provide a mobile solution for newspaper and media groups. With this > kind of partners, it is easy to have a peak of traffic. We prefer to > give stale content (1 or 2 minutes stale content, not more) instead of > block the request for some seconds (the time that our tomcat back-end > could expend to crawl our customers desktop site and generate the new > content). As I tried to explain in my first e-mail, the > proxy_cache_background_update > > works ok while the number of servers is fix and the LB in front of them > does a URI load balancer. > > The major problem appears when the servers has to scale up and scale > down. Imagine that the URL1 is cache by server 1. All the request for > URL1 are redirected to Server1 by the LB. Suddenly, the traffic raise up > and a new server is added. The LB will remap the request in order to > send some URLs to server 2. The URL1 is one of this group of URL that > goes to server 2. Some hours later, the traffic goes down and the server > 2 is removed. In this situation, the new request that arrive to Server 1 > asking for URL1 will receive the version of some hours before (not some > minutes). This is what we are trying to avoid. This situation is exactly why "stale-while-revalidate=