From gk at leniwiec.biz Thu Mar 1 03:06:09 2018
From: gk at leniwiec.biz (Grzegorz Kulewski)
Date: Thu, 1 Mar 2018 04:06:09 +0100
Subject: Stream module logging questions
Message-ID: <8c4f1836-69a6-4d64-12e5-e80eb3bc647a@leniwiec.biz>

Hello,

1. How can I log the IP and (especially) the port used by nginx (the proxy) to connect to the upstream when the stream module is used?
2. Can I somehow get a log entry also/instead at stream connection setup time, not only after the connection ends?
3. I think that the $tcpinfo_* variables aren't supported in stream. Is there any reason for this?

-- 
Grzegorz Kulewski

From nginx-forum at forum.nginx.org Thu Mar 1 10:55:10 2018
From: nginx-forum at forum.nginx.org (peanky)
Date: Thu, 01 Mar 2018 05:55:10 -0500
Subject: Nginx mail proxy
In-Reply-To: <20150322104830.GW88631@mdounin.ru>
References: <20150322104830.GW88631@mdounin.ru>
Message-ID:

Why is using nginx with an SMTP server that is not my own the wrong way?
PS: I see the date of the topic, but I am now trying to solve the same problem.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,257510,278855#msg-278855

From arut at nginx.com Thu Mar 1 11:52:43 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 1 Mar 2018 14:52:43 +0300
Subject: Stream module logging questions
In-Reply-To: <8c4f1836-69a6-4d64-12e5-e80eb3bc647a@leniwiec.biz>
References: <8c4f1836-69a6-4d64-12e5-e80eb3bc647a@leniwiec.biz>
Message-ID: <20180301115243.GD3177@Romans-MacBook-Air.local>

Hello,

On Thu, Mar 01, 2018 at 04:06:09AM +0100, Grzegorz Kulewski wrote:
> Hello,
> 
> 1. How can I log the IP and (especially) the port used by nginx (the proxy) to connect to the upstream when the stream module is used?
> 2. Can I somehow get a log entry also/instead at stream connection setup time, not only after the connection ends?

The stream module logs this in error.log at the 'info' log level as soon as this information is available:

2018/03/01 14:37:27 [info] 38462#0: *1 client 127.0.0.1:61020 connected to 0.0.0.0:9000
2018/03/01 14:37:27 [info] 38462#0: *1 proxy 127.0.0.1:61021 connected to 127.0.0.1:8001

> 3. I think that the $tcpinfo_* variables aren't supported in stream. Is there any reason for this?

There's a number of http module features still missing in stream. This is one of them.

-- 
Roman Arutyunyan

From luciano at vespaperitivo.it Thu Mar 1 11:54:32 2018
From: luciano at vespaperitivo.it (Luciano Mannucci)
Date: Thu, 1 Mar 2018 12:54:32 +0100
Subject: Nginx Directory Autoindex
In-Reply-To:
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it>
Message-ID: <3zsW7X5hrVz3jYmF@baobab.bilink.it>

On Wed, 28 Feb 2018 23:30:35 +0000
Miguel C wrote:

> I'm unsure if that's possible without a 3rd party module...
> 
> I've used fancyindex before when I wanted sorting.

I don't find the fancyindex module in 1.13.9; it seems that some of its features are now in autoindex. It is probably trivial to add an option to the configuration, something like "autoindex_reverse_sort" (set to off by default), though I don't know whether it would be useful for other nginx users...

Thanks for your answer,

Luciano.
-- 
 /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
 \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
  X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
 / \ AND POSTINGS / WWW: http://www.lesassaie.IT/

From vl at nginx.com Thu Mar 1 12:12:31 2018
From: vl at nginx.com (Vladimir Homutov)
Date: Thu, 1 Mar 2018 15:12:31 +0300
Subject: Nginx Directory Autoindex
In-Reply-To: <3zs3Mb0pjZz3jYkR@baobab.bilink.it>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it>
Message-ID: <20180301121230.GA32473@vlpc>

On Wed, Feb 28, 2018 at 07:03:22PM +0100, Luciano Mannucci wrote:
> 
> Hello all,
> 
> I have a directory served by nginx via autoindex (that works perfectly, as documented :). I need to show the content in reverse order (ls -r); is there any rather simple method?
> 
> Thanks in advance,

Hello Luciano,

you can set
http://nginx.org/en/docs/http/ngx_http_autoindex_module.html#autoindex_format
to XML and combine it with
http://nginx.org/en/docs/http/ngx_http_xslt_module.html
to get any listing that you desire.

> 
> Luciano.
> -- 
>  /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
>  \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
>   X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
>  / \ AND POSTINGS / WWW: http://www.lesassaie.IT/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From miguelmclara at gmail.com Thu Mar 1 12:16:34 2018
From: miguelmclara at gmail.com (Miguel C)
Date: Thu, 1 Mar 2018 12:16:34 +0000
Subject: Nginx Directory Autoindex
In-Reply-To: <3zsW7X5hrVz3jYmF@baobab.bilink.it>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it>
Message-ID:

Yeah, I was looking on GitHub and there has been no update for a year now... which tbh might be about the same time I last used it :D

There's a default sorting option for name/date etc.:
https://github.com/aperezdc/ngx-fancyindex#fancyindex-default-sort

I do see default_sort as a directive, but no "ascending"/"descending" option (it doesn't seem super hard to implement in the code, though, but I'm not a C expert...).

Vladimir Homutov just suggested a different approach with the autoindex module and XSLT that seems to serve your need, and that way there's no need to install a 3rd party module :)

Melhores Cumprimentos // Best Regards
-----------------------------------------------
*Miguel Clara*
*IT - Sys Admin & Developer*

On Thu, Mar 1, 2018 at 11:54 AM, Luciano Mannucci wrote:

> On Wed, 28 Feb 2018 23:30:35 +0000
> Miguel C wrote:
> 
> > I'm unsure if that's possible without a 3rd party module...
> >
> > I've used fancyindex before when I wanted sorting.
> I don't find the fancyindex module in 1.13.9; it seems that some of its features are now in autoindex. It is probably trivial to add an option to the configuration, something like "autoindex_reverse_sort" (set to off by default), though I don't know whether it would be useful for other nginx users...
> 
> Thanks for your answer,
> 
> Luciano.
> -- 
>  /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
>  \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
>   X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
>  / \ AND POSTINGS / WWW: http://www.lesassaie.IT/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From valery+nginxen at grid.net.ru Thu Mar 1 12:24:29 2018
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Thu, 1 Mar 2018 13:24:29 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <3EA526B9-EA09-4304-86BD-1A341CBBD5B8@me.com>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com> <3EA526B9-EA09-4304-86BD-1A341CBBD5B8@me.com>
Message-ID: <583c5f90-49f6-9415-f766-1fe7ed193f03@grid.net.ru>

I admire your wise approach to this discussion, as well as your technical expertise! I see the value in people who know the right way, but I also see the value in people who dare to explore and want to learn the right way.

Coincidentally, at Qubiq Labs we're looking for that kind of kick-ass Systems and Performance Architect to run and scale our software and infrastructure.

If you're challenged by the intricacies of the online marketing industry and tons of traffic, we'd love to get your application at info at qubiqlabs.com

It's a funded and growing startup with lots of interesting projects - the kind you have always dreamed of if you're into nginx.

So, make sure to shoot us an email, and don't forget to mention "I want that job" in the subject!

On 28-02-18 23:33, Peter Booth wrote:
> This discussion is interesting, educational, and thought provoking. Web architects only learn 'the right way' by first doing things 'the wrong way' and seeing what happens. Attila and Valery asked questions that sound logical, and I think there's value in exploring what would happen if their suggestions were implemented.
> 
> First caveat - nginx is deployed in all manner of different scenarios on different hardware and operating systems. Physical servers and VMs behave very differently, as do local and remote storage. When an application writes to NFS-mounted storage there's no guarantee that even an fsync will correctly enforce a write barrier. Still, if we consider real numbers:
> 
>   * On current model quad socket hosts, nginx can support well over 1 million requests per second (see TechEmpower benchmarks)
>   * On the same hardware, a web app that writes to a Postgresql DB can do at least a few thousand writes per second.
>   * A SATA drive might support 300 write IOPS, whilst an SSD will support 100x that.
> 
> What this means is that doing fully synchronous writes can reduce your potential throughput by a factor of 100 or more. So it's not a great way to ensure consistency.
> 
> But there are cheaper ways to achieve the same consistency and reliability characteristics:
> 
>   * If you are using Linux then your reads and writes will occur through the page cache - so the actual disk itself really doesn't matter (whilst your host is up).
>   * If you want to protect against loss of a physical disk then use RAID.
>   * If you want to protect against a random power failure then use drives with battery-backed caches, so writes will get persisted when a server restarts after a power failure.
>   * If you want to protect against a crazy person hitting your server with an axe then write to two servers ...
> 
> *But the bottom line is separation of concerns.* Nginx should not use fsync because it isn't nginx's business.
> 
> My two cents,
> 
> Peter
> 
> 
>> On Feb 28, 2018, at 4:41 PM, Aziz Rozyev wrote:
>> 
>> Hello!
>> 
>> On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:
>> 
>>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>> 
>>>>> Now, that nginx supports running threads, are there plans to convert at least DAV PUTs into their own thread (pool), to make it possible to do a non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>>>> No, there are no such plans.
>>>> 
>>>> (Also, trying to do fsync() might not be the best idea even in threads. A reliable server might be a better option.)
>>>> 
>>> What do you mean by a reliable server?
>>> I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting for an indefinite amount of time to be flushed.
>>> This is what fsync is for.
>> 
>> The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.
>> 
>> (Also, another question is what "on the disk" means from a physical point of view. In many cases this in fact means "somewhere in the disk buffers", and a power outage can easily result in the file being not accessible even after fsync().)
>> 
>>> Why is doing this in a thread not a good idea? It wouldn't block nginx that way.
>> 
>> Because even in threads, fsync() is likely to cause performance degradation. It might be a better idea to let the OS manage buffers instead.
>> 
>> -- 
>> Maxim Dounin
>> http://mdounin.ru/
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
> 
> 
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> 

From valery+nginxen at grid.net.ru Thu Mar 1 12:27:43 2018
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Thu, 1 Mar 2018 13:27:43 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <583c5f90-49f6-9415-f766-1fe7ed193f03@grid.net.ru>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com> <3EA526B9-EA09-4304-86BD-1A341CBBD5B8@me.com> <583c5f90-49f6-9415-f766-1fe7ed193f03@grid.net.ru>
Message-ID: <551e9e5a-5ae6-3a9c-ffb9-d8ad16f5db6f@grid.net.ru>

You can also apply online: https://angel.co/qubiq-digital-b-v/jobs

That's more 2018-ish.

On 01-03-18 13:24, Valery Kholodkov wrote:
> I admire your wise approach to this discussion, as well as your technical expertise! I see the value in people who know the right way, but I also see the value in people who dare to explore and want to learn the right way.
> 
> Coincidentally, at Qubiq Labs we're looking for that kind of kick-ass Systems and Performance Architect to run and scale our software and infrastructure.
> 
> If you're challenged by the intricacies of the online marketing industry and tons of traffic, we'd love to get your application at info at qubiqlabs.com
> 
> It's a funded and growing startup with lots of interesting projects - the kind you have always dreamed of if you're into nginx.
> 
> So, make sure to shoot us an email, and don't forget to mention "I want that job" in the subject!
> 
> On 28-02-18 23:33, Peter Booth wrote:
>> This discussion is interesting, educational, and thought provoking. Web architects only learn 'the right way' by first doing things 'the wrong way' and seeing what happens. Attila and Valery asked questions that sound logical, and I think there's value in exploring what would happen if their suggestions were implemented.
>> 
>> First caveat - nginx is deployed in all manner of different scenarios on different hardware and operating systems. Physical servers and VMs behave very differently, as do local and remote storage. When an application writes to NFS-mounted storage there's no guarantee that even an fsync will correctly enforce a write barrier. Still, if we consider real numbers:
>> 
>>   * On current model quad socket hosts, nginx can support well over 1 million requests per second (see TechEmpower benchmarks)
>>   * On the same hardware, a web app that writes to a Postgresql DB can do at least a few thousand writes per second.
>>   * A SATA drive might support 300 write IOPS, whilst an SSD will support 100x that.
>> 
>> What this means is that doing fully synchronous writes can reduce your potential throughput by a factor of 100 or more. So it's not a great way to ensure consistency.
>> 
>> But there are cheaper ways to achieve the same consistency and reliability characteristics:
>> 
>>   * If you are using Linux then your reads and writes will occur through the page cache - so the actual disk itself really doesn't matter (whilst your host is up).
>>   * If you want to protect against loss of a physical disk then use RAID.
>>   * If you want to protect against a random power failure then use drives with battery-backed caches, so writes will get persisted when a server restarts after a power failure.
>>   * If you want to protect against a crazy person hitting your server with an axe then write to two servers ...
>> 
>> *But the bottom line is separation of concerns.* Nginx should not use fsync because it isn't nginx's business.
>> 
>> My two cents,
>> 
>> Peter
>> 
>> 
>>> On Feb 28, 2018, at 4:41 PM, Aziz Rozyev wrote:
>>> 
>>> Hello!
>>> 
>>> On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:
>>> 
>>>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>>> 
>>>>>> Now, that nginx supports running threads, are there plans to convert at least DAV PUTs into their own thread (pool), to make it possible to do a non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>>>>> No, there are no such plans.
>>>>> 
>>>>> (Also, trying to do fsync() might not be the best idea even in threads. A reliable server might be a better option.)
>>>>> 
>>>> What do you mean by a reliable server?
>>>> I want to make sure that when the HTTP operation returns, the file is on the disk, not just in a buffer waiting for an indefinite amount of time to be flushed.
>>>> This is what fsync is for.
>>> 
>>> The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.
>>> 
>>> (Also, another question is what "on the disk" means from a physical point of view.
>>> In many cases this in fact means "somewhere in the disk buffers", and a power outage can easily result in the file being not accessible even after fsync().)
>>> 
>>>> Why is doing this in a thread not a good idea? It wouldn't block nginx that way.
>>> 
>>> Because even in threads, fsync() is likely to cause performance degradation. It might be a better idea to let the OS manage buffers instead.
>>> 
>>> -- 
>>> Maxim Dounin
>>> http://mdounin.ru/
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>> 
>> 
>> 
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Thu Mar 1 13:31:49 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 1 Mar 2018 16:31:49 +0300
Subject: Nginx mail proxy
In-Reply-To:
References: <20150322104830.GW88631@mdounin.ru>
Message-ID: <20180301133149.GS89840@mdounin.ru>

Hello!

On Thu, Mar 01, 2018 at 05:55:10AM -0500, peanky wrote:

> Why is using nginx with an SMTP server that is not my own the wrong way?
> PS: I see the date of the topic, but I am now trying to solve the same problem.

Because the nginx smtp proxy is designed to protect / balance your own smtp backends. If you want to proxy to external smtp servers, consider using other solutions.

-- 
Maxim Dounin
http://mdounin.ru/

From luciano at vespaperitivo.it Thu Mar 1 16:27:49 2018
From: luciano at vespaperitivo.it (Luciano Mannucci)
Date: Thu, 1 Mar 2018 17:27:49 +0100
Subject: Nginx Directory Autoindex
In-Reply-To:
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it>
Message-ID: <3zsdBt0FgSz3jYqR@baobab.bilink.it>

On Thu, 1 Mar 2018 12:16:34 +0000
Miguel C wrote:

> there's a default sorting option for name/date etc.:
> https://github.com/aperezdc/ngx-fancyindex#fancyindex-default-sort
> 
> I do see default_sort as a directive, but no "ascending"/"descending" option (it doesn't seem super hard to implement in the code, though, but I'm not a C expert...).

Yes, it shouldn't be that hard... :)

> Vladimir Homutov just suggested a different approach with the autoindex module and XSLT that seems to serve your need, and that way there's no need to install a 3rd party module :)

I'm not comfortable with XML/XSLT, so I just made a small hack to the original autoindex that is quite enough for me (= it works and made my users happy :).

If there is some interest I can post my patch to the appropriate list (nginx devel, if I recall correctly :).

Cheers,

Luciano.
-- 
 /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
 \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
  X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
 / \ AND POSTINGS / WWW: http://www.lesassaie.IT/

From arut at nginx.com Thu Mar 1 16:45:27 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 1 Mar 2018 19:45:27 +0300
Subject: Nginx Directory Autoindex
In-Reply-To: <3zsdBt0FgSz3jYqR@baobab.bilink.it>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> <3zsdBt0FgSz3jYqR@baobab.bilink.it>
Message-ID: <20180301164527.GJ3177@Romans-MacBook-Air.local>

Hi,

On Thu, Mar 01, 2018 at 05:27:49PM +0100, Luciano Mannucci wrote:
> On Thu, 1 Mar 2018 12:16:34 +0000
> Miguel C wrote:
> 
> > there's a default sorting option for name/date etc.:
> > https://github.com/aperezdc/ngx-fancyindex#fancyindex-default-sort
> > 
> > I do see default_sort as a directive, but no "ascending"/"descending" option (it doesn't seem super hard to implement in the code, though, but I'm not a C expert...).
> Yes, it shouldn't be that hard... :)
> 
> > Vladimir Homutov just suggested a different approach with the autoindex module and XSLT that seems to serve your need, and that way there's no need to install a 3rd party module :)
> I'm not comfortable with XML/XSLT, so I just made a small hack to the original autoindex that is quite enough for me (= it works and made my users happy :).
> 
> If there is some interest I can post my patch to the appropriate list (nginx devel, if I recall correctly :).

Just in case you get back to XML/XSLT, here's a simple configuration to sort the files:

    location / {
        autoindex on;
        autoindex_format xml;
        xslt_stylesheet conf/sort.xslt;
        root html;
    }

sort.xslt:
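(The stylesheet itself did not survive the archive; what follows is a minimal, untested sketch of what such a sort.xslt could look like, assuming the stock autoindex XML output - a <list> root whose <file> and <directory> children carry the entry name as text and an mtime attribute:)

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/list">
    <html>
      <body>
        <!-- directories and files together, sorted by name in reverse order -->
        <xsl:for-each select="directory|file">
          <xsl:sort select="." order="descending"/>
          <a href="{.}"><xsl:value-of select="."/></a>
          <br/>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>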
-- 
Roman Arutyunyan

From nginx-forum at forum.nginx.org Thu Mar 1 17:24:40 2018
From: nginx-forum at forum.nginx.org (neuronetv)
Date: Thu, 01 Mar 2018 12:24:40 -0500
Subject: yum install nginx Transaction Check Error:
Message-ID:

Trying to run 'yum install nginx*' (stable version) on CentOS 6 using these instructions:
http://nginx.org/en/linux_packages.html#stable
I get the following error:

Transaction Check Error:
  file /usr/lib64/nginx/modules/ngx_http_geoip_module.so conflicts between attempted installs of nginx-module-geoip-1.12.2-1.el6.ngx.x86_64 and nginx-mod-http-geoip-1.10.2-1.el6.x86_64
  file /usr/lib64/nginx/modules/ngx_http_image_filter_module.so conflicts between attempted installs of nginx-mod-http-image-filter-1.10.2-1.el6.x86_64 and nginx-module-image-filter-1.12.2-1.el6.ngx.x86_64
  file /usr/lib64/nginx/modules/ngx_http_xslt_filter_module.so conflicts between attempted installs of nginx-mod-http-xslt-filter-1.10.2-1.el6.x86_64 and nginx-module-xslt-1.12.2-1.el6.ngx.x86_64
  file /usr/lib64/nginx/modules/ngx_http_perl_module.so conflicts between attempted installs of nginx-module-perl-1.12.2-1.el6.ngx.x86_64 and nginx-mod-http-perl-1.10.2-1.el6.x86_64
  file /usr/lib64/perl5/vendor_perl/auto/nginx/nginx.so conflicts between attempted installs of nginx-module-perl-1.12.2-1.el6.ngx.x86_64 and nginx-mod-http-perl-1.10.2-1.el6.x86_64
  file /usr/lib64/perl5/vendor_perl/nginx.pm conflicts between attempted installs of nginx-module-perl-1.12.2-1.el6.ngx.x86_64 and nginx-mod-http-perl-1.10.2-1.el6.x86_64

Error Summary
--------------------------------

I also tried changing nginx.repo to the mainline version, but that didn't work either. Is it because I'm running a 64-bit machine?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278870,278870#msg-278870

From luciano at vespaperitivo.it Thu Mar 1 17:29:49 2018
From: luciano at vespaperitivo.it (Luciano Mannucci)
Date: Thu, 1 Mar 2018 18:29:49 +0100
Subject: Nginx Directory Autoindex
In-Reply-To: <20180301164527.GJ3177@Romans-MacBook-Air.local>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> <3zsdBt0FgSz3jYqR@baobab.bilink.it> <20180301164527.GJ3177@Romans-MacBook-Air.local>
Message-ID: <3zsfZQ0Mmyz3jYp6@baobab.bilink.it>

On Thu, 1 Mar 2018 19:45:27 +0300
Roman Arutyunyan wrote:

> Just in case you get back to XML/XSLT, here's a simple configuration to sort the files:

Many thanks! I'll give it a try.

Cheers,

Luciano.
-- 
 /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
 \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
  X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
 / \ AND POSTINGS / WWW: http://www.lesassaie.IT/

From dsd.trash at gmail.com Thu Mar 1 20:40:28 2018
From: dsd.trash at gmail.com (Daniel)
Date: Thu, 1 Mar 2018 21:40:28 +0100
Subject: why hardcoded /var/log/nginx/error.log in pre-built packages?
Message-ID: <11f9715c-8115-fd89-6c1a-c5bfeed0b6be@gmail.com>

Hello all,

can someone please explain to me why the location /var/log/nginx/error.log is hardcoded in the official prebuilt packages?

Or why nginx -t checks whether this file exists even if there is another location defined in the config file?

Thank you.

Daniel

From arozyev at nginx.com Thu Mar 1 21:33:24 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Fri, 2 Mar 2018 00:33:24 +0300
Subject: why hardcoded /var/log/nginx/error.log in pre-built packages?
In-Reply-To: <11f9715c-8115-fd89-6c1a-c5bfeed0b6be@gmail.com>
References: <11f9715c-8115-fd89-6c1a-c5bfeed0b6be@gmail.com>
Message-ID:

It's not hardcoded afaik. Check the output of

nginx -T

perhaps there is an error_log directive defined somewhere...

br,
Aziz.

> On 1 Mar 2018, at 23:40, Daniel wrote:
> 
> Hello all,
> 
> can someone please explain to me why the location /var/log/nginx/error.log is hardcoded in the official prebuilt packages?
> 
> Or why nginx -t checks whether this file exists even if there is another location defined in the config file?
> 
> 
> Thank you.
> 
> Daniel
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From vbart at nginx.com Fri Mar 2 07:53:11 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Fri, 02 Mar 2018 10:53:11 +0300
Subject: why hardcoded /var/log/nginx/error.log in pre-built packages?
In-Reply-To: <11f9715c-8115-fd89-6c1a-c5bfeed0b6be@gmail.com>
References: <11f9715c-8115-fd89-6c1a-c5bfeed0b6be@gmail.com>
Message-ID: <1553156.2IhMOe31l7@vbart-laptop>

On Thursday, 1 March 2018 23:40:28 MSK Daniel wrote:
> Hello all,
> 
> can someone please explain to me why the location /var/log/nginx/error.log is hardcoded in the official prebuilt packages?
> 
> Or why nginx -t checks whether this file exists even if there is another location defined in the config file?
> 

Because nginx needs to log errors that happen before the config file is parsed.

wbr, Valentin V. Bartenev

From bra at fsn.hu Fri Mar 2 08:50:29 2018
From: bra at fsn.hu (Nagy, Attila)
Date: Fri, 2 Mar 2018 09:50:29 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com>
Message-ID: <9f640674-bfcb-74e1-e36c-a39d127fae3b@fsn.hu>

On 02/28/2018 11:04 AM, Aziz Rozyev wrote:
> While it's not clear why one may need to flush the data on each http operation, I can imagine what performance degradation that may lead to.

I store data on HTTP servers in a distributed manner and have a catalog of where each file is. If I get back a successful HTTP response for a PUT operation, I want it to be true, so the file must be on stable storage. If I just write it to a buffer and something happens to the machine while the data is still in that buffer, I can't trust the response, and I have to verify from time to time that each file is there in its entirety - which is much, much more of a performance degradation.

With clever file systems and/or good hardware (a battery-backed write cache) it won't cost you much. Anyway, it's completely irrelevant how fast you can write to RAM. The task here is to write reliably. And you can make that fast, if you want, with software and hardware.

> 
> if it's not some kind of funny clustering among nodes, I wouldn't care much where the actual data is; RAM should still be much faster than disk I/O.
> 

Let's turn the question around: if you write to RAM, you can't be sure that the file has really made its way to storage. Why do you upload files to an HTTP server if you don't care whether they are there or not? You could use /dev/null too. It's even faster... Or just make your upload_file() function a dummy "return immediately" call. That's even faster. :)
From bra at fsn.hu Fri Mar 2 09:00:26 2018
From: bra at fsn.hu (Nagy, Attila)
Date: Fri, 2 Mar 2018 10:00:26 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <3EA526B9-EA09-4304-86BD-1A341CBBD5B8@me.com>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com> <3EA526B9-EA09-4304-86BD-1A341CBBD5B8@me.com>
Message-ID: <8fbe14ff-89e6-3670-abea-8184fa5cd220@fsn.hu>

On 02/28/2018 11:33 PM, Peter Booth wrote:
> This discussion is interesting, educational, and thought provoking. Web architects only learn 'the right way' by first doing things 'the wrong way' and seeing what happens. Attila and Valery asked questions that sound logical, and I think there's value in exploring what would happen if their suggestions were implemented.
> 
> First caveat - nginx is deployed in all manner of different scenarios on different hardware and operating systems. Physical servers and VMs behave very differently, as do local and remote storage. When an application writes to NFS-mounted storage there's no guarantee that even an fsync will correctly enforce a write barrier. Still, if we consider real numbers:
> 
>   * On current model quad socket hosts, nginx can support well over 1 million requests per second (see TechEmpower benchmarks)
>   * On the same hardware, a web app that writes to a Postgresql DB can do at least a few thousand writes per second.
>   * A SATA drive might support 300 write IOPS, whilst an SSD will support 100x that.
> 
> What this means is that doing fully synchronous writes can reduce your potential throughput by a factor of 100 or more. So it's not a great way to ensure consistency.
> 
> But there are cheaper ways to achieve the same consistency and reliability characteristics:
> 
>   * If you are using Linux then your reads and writes will occur through the page cache - so the actual disk itself really doesn't matter (whilst your host is up).
>   * If you want to protect against loss of a physical disk then use RAID.
>   * If you want to protect against a random power failure then use drives with battery-backed caches, so writes will get persisted when a server restarts after a power failure.

Sorry, but this point shows that you don't understand the problem. A BBWC won't save you from a random power failure, because the data is still in RAM! A BBWC will save you when you do an fsync at the end of the write (that fsync will still write to RAM, but it will be the controller's RAM, which is protected by a battery). But nginx doesn't do this today. And that's what this discussion is all about...

>   * If you want to protect against a crazy person hitting your server with an axe then write to two servers ...

And still you won't have it reliably on your disks.

> *But the bottom line is separation of concerns.* Nginx should not use fsync because it isn't nginx's business.

Please suggest at least one working solution which is compatible with nginx's asynchronous architecture and ensures that a successful HTTP PUT means the data has been written to a reliable store. There are several file systems which can be switched to "fsync by default", but that will fail miserably because nginx does the writes in the same process, in the same thread. That's what could be solved by doing at least the fsyncs in different threads, so they wouldn't block the main thread.

BTW, I'm not proposing this to be the default.
It should be an optional setting, so whoever wants to keep the current behaviour can do that.

From bra at fsn.hu Fri Mar 2 09:12:02 2018
From: bra at fsn.hu (Nagy, Attila)
Date: Fri, 2 Mar 2018 10:12:02 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <20180228140831.GO89840@mdounin.ru>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru>
Message-ID: <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu>

On 02/28/2018 03:08 PM, Maxim Dounin wrote:
> The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.

Because the files I upload to nginx servers are important to me. Please step back a little and forget that we are talking about nginx or an HTTP server. We have data which we want to write somewhere.

Take any of the database servers. Would you accept a DB server which can lose confirmed data, or which couldn't be configured so that whatever write/insert/update/commit operation you use to modify or put data into it is reliably written by the time you receive the acknowledgement?

Now apply this example here. I would like to use nginx to store files; that's what HTTP PUT is for. Of course I'm not expecting the server to die every day, but when that happens, I want to be sure that the confirmed data is there.

Take a look at various object storage systems, like Ceph. Would you accept a confirmed write being lost there? They do a great deal of work to make that impossible.

Now imagine that somebody doesn't need the complexity of - for example - Ceph, but wants to store data with plain HTTP. And there you are. If you store data, then you want to be sure the data is there. If you don't, why store it at all?

> (Also, another question is what "on the disk" means from a physical point of view. In many cases this in fact means "somewhere in the disk buffers", and a power outage can easily result in the file being not accessible even after fsync().)

Not with good software/hardware. (And it doesn't really have to be super good, just average.)

> 
>> Why is doing this in a thread not a good idea? It wouldn't block nginx that way.
> Because even in threads, fsync() is likely to cause performance degradation. It might be a better idea to let the OS manage buffers instead.
> 

Sure, it will cause some (not much, BTW, in a good configuration). But if my primary goal is to store files reliably, why should I care? I can solve that by using SSDs for logs, BBWCs and a lot more things.
From bra at fsn.hu Fri Mar 2 10:02:13 2018 From: bra at fsn.hu (Nagy, Attila) Date: Fri, 2 Mar 2018 11:02:13 +0100 Subject: fsync()-in webdav PUT In-Reply-To: <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com> References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <995CC1C3-26DC-46B4-94B4-2D1C7AC0204D@nginx.com> <58F408FB-17D2-4595-B6C8-7225D09F3F42@nginx.com> Message-ID: On 02/28/2018 10:41 PM, Aziz Rozyev wrote: >> Without fsyncing file's data and metadata a client will receive a positive reply before data has reached the storage, thus leaving non-zero probability that states of two systems involved into a web transaction end up inconsistent. > > I understand why one may need consistency, but doing so with fsyncing is non-sense. > > Here is what man page says in that regard: > > > fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent > storage device) so that all changed information can be retrieved even after the system crashed or was rebooted. This includes writing through or flushing a disk cache if present. The call > blocks until the device reports that the transfer has completed. It also flushes metadata information associated with the file (see stat(2)). > > Could you please elaborate what do you mean by calling this a nonsense? Also I don't understand why you cited the man page. It clearly says this is what ensures that when fsync return, the file will be on stable storage. What else method do you recommend if somebody wants to get an acknowledgement to the HTTP PUT only after the file is safely stored? From nginx-forum at forum.nginx.org Fri Mar 2 10:30:00 2018 From: nginx-forum at forum.nginx.org (sonpg) Date: Fri, 02 Mar 2018 05:30:00 -0500 Subject: NTLM sharepoint when use nginx reverse proxy In-Reply-To: References: Message-ID: my design is : enduser --> nginx --> sites (sharepoint site:443, web:80; 443) if server listen in 80 will redirect to 443 i try to use stream block but it can't use same port. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278737,278885#msg-278885 From list at xdrv.co.uk Fri Mar 2 10:33:45 2018 From: list at xdrv.co.uk (James) Date: Fri, 2 Mar 2018 10:33:45 +0000 Subject: Nginx Directory Autoindex In-Reply-To: <3zsW7X5hrVz3jYmF@baobab.bilink.it> References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> Message-ID: <6d44fadd-dbcf-9b61-505a-7477c760d902@xdrv.co.uk> On 01/03/2018 11:54, Luciano Mannucci wrote: > It is probably trivial to add an option > to the configuration, something like "autoindex_reverse_sort" (set to > off by default), though I don'nt know if it would be usefull for other > nginx users... I'd like the option of order by date, "ls -t", "ls -rt". This helps because the text order of numbers is "10 11 8 9", see: http://nginx.org/download/ The latest version is 2/3 of the way down and if I was looking for an update then 1.13.10 would not be next to the current 1.13.9. 
From arozyev at nginx.com Fri Mar 2 10:42:17 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Fri, 2 Mar 2018 13:42:17 +0300
Subject: fsync()-in webdav PUT
In-Reply-To: <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu>
Message-ID: <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com>

Attila,

the man page quote relates to Valery's argument that fsync won't affect performance; forget it.

It's nonsense because you're trying to solve the reliability problem at a different level. It has already been suggested here multiple times, by Maxim and Paul, that it's better to invest in good server/storage infrastructure instead of fsyncing each PUT.

Regarding the DB server analogy: you're still not safe from power outages as long as your transaction isn't in a transaction log.

If you're still content with syncing and ready to sacrifice your time, try mounting a file system with the 'sync' option.

br,
Aziz.

> On 2 Mar 2018, at 12:12, Nagy, Attila wrote:
> 
> On 02/28/2018 03:08 PM, Maxim Dounin wrote:
>> The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.
> Because the files I upload to nginx servers are important to me. Please step back a little and forget that we are talking about nginx or an HTTP server. We have data which we want to write somewhere.
> Take any of the database servers. Would you accept a DB server which can lose confirmed data, or which couldn't be configured so that whatever write/insert/update/commit operation you use to modify or put data into it is reliably written by the time you receive the acknowledgement?
> Now apply this example here. I would like to use nginx to store files; that's what HTTP PUT is for. Of course I'm not expecting the server to die every day, but when that happens, I want to be sure that the confirmed data is there.
> Take a look at various object storage systems, like Ceph. Would you accept a confirmed write being lost there? They do a great deal of work to make that impossible.
> Now imagine that somebody doesn't need the complexity of - for example - Ceph, but wants to store data with plain HTTP. And there you are. If you store data, then you want to be sure the data is there. If you don't, why store it at all?
> 
>> (Also, another question is what "on the disk" means from a physical point of view. In many cases this in fact means "somewhere in the disk buffers", and a power outage can easily result in the file being not accessible even after fsync().)
> Not with good software/hardware. (And it doesn't really have to be super good, just average.)
> 
>> 
>>> Why is doing this in a thread not a good idea? It wouldn't block nginx that way.
>> Because even in threads, fsync() is likely to cause performance degradation. It might be a better idea to let the OS manage buffers instead.
>> 
> Sure, it will cause some (not much, BTW, in a good configuration). But if my primary goal is to store files reliably, why should I care? I can solve that by using SSDs for logs, BBWCs and a lot more things.
> But as things stand now, I can't be sure whether an HTTP PUT really succeeded, or whether it will succeed in a few seconds, or fail badly.
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From bra at fsn.hu Fri Mar 2 11:30:31 2018
From: bra at fsn.hu (Nagy, Attila)
Date: Fri, 2 Mar 2018 12:30:31 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com>
Message-ID: <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu>

On 03/02/2018 11:42 AM, Aziz Rozyev wrote:
> the man page quote relates to Valery's argument that fsync won't affect performance; forget it.

Of course it affects performance. But as for how much: that depends on many factors. It's possible to build servers where the overall effect is negligible.

> 
> It's nonsense because you're trying to solve the reliability problem at a different level. It has already been suggested here multiple times, by Maxim and Paul, that it's better to invest in good server/storage infrastructure instead of fsyncing each PUT.

Yes, it has been suggested multiple times; the only problem is that it's not true. No matter how good a server/storage you have, if you write into unbacked memory buffers (which nginx does), you are toast.

> 
> Regarding the DB server analogy: you're still not safe from power outages as long as your transaction isn't in a transaction log.
> 
> If you're still content with syncing and ready to sacrifice your time, try mounting a file system with the 'sync' option.
> 

That's what really kills performance, because of the async nature of nginx. That's why I'm proposing an option to do the fsync at the end of the PUT (or maybe even of the whole operation) in a thread (pool). If you care about performance and reliability, that's the way it has to be solved.

From luciano at vespaperitivo.it Fri Mar 2 11:33:40 2018
From: luciano at vespaperitivo.it (Luciano Mannucci)
Date: Fri, 2 Mar 2018 12:33:40 +0100
Subject: Nginx Directory Autoindex
In-Reply-To: <6d44fadd-dbcf-9b61-505a-7477c760d902@xdrv.co.uk>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> <6d44fadd-dbcf-9b61-505a-7477c760d902@xdrv.co.uk>
Message-ID: <3zt6d13BtRz3jYqP@baobab.bilink.it>

On Fri, 2 Mar 2018 10:33:45 +0000
James wrote:

> I'd like the option of ordering by date, "ls -t", "ls -rt". This helps because the text order of numbers is "10 11 8 9"; see:
> 
> http://nginx.org/download/

Well, this is way less trivial than simply adding a flag to reverse the sort order. It indeed belongs to fancyindex, which already does that:

fancyindex_default_sort
Syntax: fancyindex_default_sort [name | size | date | name_desc | size_desc | date_desc]

Though you need to compile your own nginx to get that working (and follow the module installation instructions :-).

Cheers,

Luciano.
-- 
 /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
 \ / ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
  X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG
 / \ AND POSTINGS / WWW: http://www.lesassaie.IT/

From list at xdrv.co.uk Fri Mar 2 14:03:36 2018
From: list at xdrv.co.uk (James)
Date: Fri, 2 Mar 2018 14:03:36 +0000
Subject: Nginx Directory Autoindex
In-Reply-To: <3zt6d13BtRz3jYqP@baobab.bilink.it>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> <6d44fadd-dbcf-9b61-505a-7477c760d902@xdrv.co.uk> <3zt6d13BtRz3jYqP@baobab.bilink.it>
Message-ID: <35046a33-0696-071d-17ac-c1d40ffa4642@xdrv.co.uk>

On 02/03/2018 11:33, Luciano Mannucci wrote:

>> I'd like the option of ordering by date, "ls -t", "ls -rt". This helps because the text order of numbers is "10 11 8 9"; see:
>>
>> http://nginx.org/download/
> Well, this is way less trivial than simply adding a flag to reverse the sort order. It indeed belongs to fancyindex, which already does that:
...
> Though you need to compile your own nginx to get that working (and follow the module installation instructions :-).

Perhaps I should have expressed it as: I'd like other people to sort by date, and that isn't going to happen unless it's easy, i.e., built in.

autoindex on | off | date | text | ... ;
James.

From vl at nginx.com Fri Mar 2 14:17:10 2018
From: vl at nginx.com (Vladimir Homutov)
Date: Fri, 2 Mar 2018 17:17:10 +0300
Subject: Nginx Directory Autoindex
In-Reply-To: <35046a33-0696-071d-17ac-c1d40ffa4642@xdrv.co.uk>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> <6d44fadd-dbcf-9b61-505a-7477c760d902@xdrv.co.uk> <3zt6d13BtRz3jYqP@baobab.bilink.it> <35046a33-0696-071d-17ac-c1d40ffa4642@xdrv.co.uk>
Message-ID: <20180302141709.GA20938@vlpc>

On Fri, Mar 02, 2018 at 02:03:36PM +0000, James wrote:
> On 02/03/2018 11:33, Luciano Mannucci wrote:
> 
> >> I'd like the option of ordering by date, "ls -t", "ls -rt". This helps because the text order of numbers is "10 11 8 9"; see:
> >>
> >> http://nginx.org/download/
> > Well, this is way less trivial than simply adding a flag to reverse the sort order. It indeed belongs to fancyindex, which already does that:
> ...
> > Though you need to compile your own nginx to get that working (and follow the module installation instructions :-).
> 
> Perhaps I should have expressed it as: I'd like other people to sort by date, and that isn't going to happen unless it's easy, i.e., built in.
> 
> autoindex on | off | date | text | ... ;
> 

Well, if you want some interactive sorting, you have to do it in JavaScript; the native autoindex module is able to output JSON/JSONP, so a simple JS script may be used to implement any desired behaviour.

From list at xdrv.co.uk Fri Mar 2 14:28:29 2018
From: list at xdrv.co.uk (James)
Date: Fri, 2 Mar 2018 14:28:29 +0000
Subject: Nginx Directory Autoindex
In-Reply-To: <20180302141709.GA20938@vlpc>
References: <3zs3Mb0pjZz3jYkR@baobab.bilink.it> <3zsW7X5hrVz3jYmF@baobab.bilink.it> <6d44fadd-dbcf-9b61-505a-7477c760d902@xdrv.co.uk> <3zt6d13BtRz3jYqP@baobab.bilink.it> <35046a33-0696-071d-17ac-c1d40ffa4642@xdrv.co.uk> <20180302141709.GA20938@vlpc>
Message-ID: <4af1285b-b287-79ab-1ea8-da93af0ac001@xdrv.co.uk>

On 02/03/2018 14:17, Vladimir Homutov wrote:

>> Perhaps I should have expressed it as: I'd like other people to sort by date, and that isn't going to happen unless it's easy, i.e., built in.
>>
>> autoindex on | off | date | text | ... ;
>>
> Well, if you want some interactive sorting, you have to do it in JavaScript; the native autoindex module is able to output JSON/JSONP, so a simple JS script may be used to implement any desired behaviour.

No, I want *other* people to sort their indices by date where appropriate. It's more likely to happen if it's easy - no plugin, no scripting.

Having just done my monthly check for software updates on 421 projects, I have a reason. Many are in text-sorted directory listings and would benefit from date ordering.

James.

From nginx-forum at forum.nginx.org Fri Mar 2 14:54:31 2018
From: nginx-forum at forum.nginx.org (peanky)
Date: Fri, 02 Mar 2018 09:54:31 -0500
Subject: Nginx mail proxy
In-Reply-To: <20180301133149.GS89840@mdounin.ru>
References: <20180301133149.GS89840@mdounin.ru>
Message-ID:

> Because the nginx smtp proxy is designed to protect / balance your own smtp backends. If you want to proxy to external smtp servers, consider using other solutions.

Thank you for the answer!
1. What is the difference between "my smtp" and "a 3rd party smtp" from a technical point of view?
2. Which other solutions can you imagine? It's very interesting!
3. I've heard that "the nginx mail module supports only non-ssl backends". Is that true?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,257510,278897#msg-278897

From francis at daoine.org Fri Mar 2 15:51:48 2018
From: francis at daoine.org (Francis Daly)
Date: Fri, 2 Mar 2018 15:51:48 +0000
Subject: NTLM sharepoint when use nginx reverse proxy
In-Reply-To:
References:
Message-ID: <20180302155148.GH3280@daoine.org>

On Fri, Mar 02, 2018 at 05:30:00AM -0500, sonpg wrote:

Hi there,

> My design is: end user --> nginx --> sites (SharePoint site: 443; web: 80, 443). If a server listens on 80, it redirects to 443.

That seems generally sensible.

> I tried to use a stream block, but it can't use the same port.

Ah: you have one nginx, but with one "stream { server { listen 80; } }" and also one "http { server { listen 80; } }".

Yes, that will not work. (And it is not a case I had imagined when I sent the previous mail.)

If you use both stream and http, they cannot both listen on the same ip:port.

You use "http" because you want nginx to reverse-proxy one or more web sites. You use "stream" because you want nginx to reverse-proxy one ntlm-authentication web site, and you know that nginx does not reverse-proxy ntlm.

You use "stream" to send all inbound traffic to a specific backend server, in order to get around nginx's lack of ntlm support.

You can do that, but you cannot also use "http" on the same port, because that would want to handle the same inbound traffic.

So you must choose: stop supporting the ntlm web site, or stop supporting more than one web site, or use something other than nginx. (Or put the ntlm stream listener and the http listener on different ip:ports -- you might be able to use multiple IP addresses, depending on your setup.)

f
-- 
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru Fri Mar 2 16:06:39 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 2 Mar 2018 19:06:39 +0300
Subject: fsync()-in webdav PUT
In-Reply-To: <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu>
Message-ID: <20180302160638.GF89840@mdounin.ru>

Hello!
On Fri, Mar 02, 2018 at 10:12:02AM +0100, Nagy, Attila wrote:

> On 02/28/2018 03:08 PM, Maxim Dounin wrote:
>> The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.
> Because the files I upload to nginx servers are important to me. Please step back a little and forget that we are talking about nginx or an HTTP server.

If the files are indeed important to you, you have to keep a second copy in a different location, or even in multiple different locations. Trying to do fsync() won't save your data in a lot of quite realistic scenarios, but it certainly will imply performance (and, from the nginx code point of view, complexity) costs.

> We have data which we want to write somewhere.
> Take any of the database servers. Would you accept a DB server which can lose confirmed data, or which couldn't be configured so that whatever write/insert/update/commit operation you use to modify or put data into it is reliably written by the time you receive the acknowledgement?

The "can lose confirmed data" claim applies to all database servers, all installations in the world. There is no such thing as 100% reliability. The question is how probable data loss is, and whether we can ignore a particular probability or not.

> Now apply this example here. I would like to use nginx to store files; that's what HTTP PUT is for.
> Of course I'm not expecting the server to die every day, but when that happens, I want to be sure that the confirmed data is there.
> Take a look at various object storage systems, like Ceph. Would you accept a confirmed write being lost there? They do a great deal of work to make that impossible.
> Now imagine that somebody doesn't need the complexity of - for example - Ceph, but wants to store data with plain HTTP. And there you are. If you store data, then you want to be sure the data is there.
> If you don't, why store it at all?

So, given the fact that there is no such thing as 100% reliability, you suggest not storing files at all? I don't think that's a viable approach - and clearly you are already doing the opposite. Rather, you want to consider various scenarios and their probabilities, and minimize the probability of losing data where possible and where it makes sense. And that's why I asked you to compare the probability you are trying to avoid with other probabilities which can cause data loss - for example, the probability of the disk dying.

Just for reference: assuming you are using a commodity HDD to store your files, the probability that it will fail within a year is about 2% (see, for example, Backblaze data; recent stats are available at https://www.backblaze.com/blog/hard-drive-stats-for-2017/).

Moreover, even if you have the numbers at hand and these numbers show that you indeed need to ensure syncing of files to disk to reach greater reliability, doing fsync() might not be the best way to achieve it. For example, doing sync() instead, after loading multiple files, might be a better solution, both due to lower complexity and higher performance.
-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Fri Mar 2 16:28:48 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 2 Mar 2018 19:28:48 +0300
Subject: Nginx mail proxy
In-Reply-To:
References: <20180301133149.GS89840@mdounin.ru>
Message-ID: <20180302162848.GG89840@mdounin.ru>

Hello!

On Fri, Mar 02, 2018 at 09:54:31AM -0500, peanky wrote:

>> Because the nginx smtp proxy is designed to protect / balance your own smtp backends. If you want to proxy to external smtp servers, consider using other solutions.
> 
> Thank you for the answer!
> 1. What is the difference between "my smtp" and "a 3rd party smtp" from a technical point of view?

The difference is in the assumptions made during development, and in the solutions implemented according to these assumptions. The most obvious ones, as already mentioned in this thread, are:

- you don't need to bother with authenticating to a backend, but can use XCLIENT instead;

- you don't need to use SSL to your backends, and can assume a secure internal network instead.

Others include various protocol limitations when it comes to talking to backends (some exotic yet valid responses might not be recognized properly), and a lack of various negotiations - e.g., SMTP pipelining must be supported by the backend if you list it in smtp_capabilities.

> 2. Which other solutions can you imagine? It's very interesting!

This depends on what you are trying to do. In some basic cases a TCP proxy as provided by the nginx stream module might do the trick. In others, a properly configured SMTP server will be enough.

> 3. I've heard that "the nginx mail module supports only non-ssl backends". Is that true?

Yes.

-- 
Maxim Dounin
http://mdounin.ru/

From valery+nginxen at grid.net.ru Fri Mar 2 19:47:17 2018
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Fri, 2 Mar 2018 20:47:17 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <20180302160638.GF89840@mdounin.ru>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <20180302160638.GF89840@mdounin.ru>
Message-ID:

On 02-03-18 17:06, Maxim Dounin wrote:
>>> The question here is - why do you want the file to be on disk, and not just in a buffer? Because you expect the server to die in a few seconds without flushing the file to disk? How probable is that, compared to the probability of the disk dying? A more reliable server can make this probability negligible, hence the suggestion.
>> Because the files I upload to nginx servers are important to me. Please step back a little and forget that we are talking about nginx or an HTTP server.
> 
> If the files are indeed important to you, you have to keep a second copy in a different location, or even in multiple different locations. Trying to do fsync() won't save your data in a lot of quite realistic scenarios, but it certainly will imply performance (and, from the nginx code point of view, complexity) costs.

But do you understand that even in a replicated setup, the interval before data reaches permanent storage might be significantly long and, according to your assumptions, is random and unpredictable?

In other words, without fsync() it's not possible to make any judgments about the consistency of your data; consequently, it's not possible to implement a program that tells whether your data is consistent or not.
Don't you think that your arguments are fundamentally flawed, because
you insist on the probabilistic nature of the problem, while it is
actually deterministic?

By the way, even LevelDB has options for synchronous writes:
https://github.com/google/leveldb/blob/master/doc/index.md#synchronous-writes
and it implements them with fsync().

Bitcoin Core varies these options depending on the operation mode (see
src/validation.cpp, src/txdb.cpp, src/dbwrapper.cpp). Oh, I forgot -
Bitcoin is nonsense...

val

From mdounin at mdounin.ru Sat Mar 3 00:43:02 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 3 Mar 2018 03:43:02 +0300
Subject: fsync()-in webdav PUT
In-Reply-To:
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <20180302160638.GF89840@mdounin.ru>
Message-ID: <20180303004302.GI89840@mdounin.ru>

Hello!

On Fri, Mar 02, 2018 at 08:47:17PM +0100, Valery Kholodkov wrote:

> On 02-03-18 17:06, Maxim Dounin wrote:
> >>> The question here is - why you want the file to be on disk, and
> >>> not just in a buffer? Because you expect the server to die in a
> >>> few seconds without flushing the file to disk? How probable it
> >>> is, compared to the probability of the disk to die? A more
> >>> reliable server can make this probability negligible, hence the
> >>> suggestion.
> >> Because the files I upload to nginx servers are important to me. Please
> >> step back a little and forget that we are talking about nginx or an HTTP
> >> server.
> >
> > If the files are indeed important to you, you have to keep a second
> > copy in a different location, or even in multiple different
> > locations. Trying to do fsync() won't save your data in a lot of
> > quite realistic scenarios, but it certainly implies performance
> > (and complexity, from the nginx code's point of view) costs.
>
> But do you understand that even in a replicated setup the time interval
> before data reaches permanent storage might be significantly long and,
> according to your assumptions, is random and unpredictable?
>
> In other words, without fsync() it's not possible to make any judgments
> about the consistency of your data; consequently, it's not possible to
> implement a program that tells you whether your data is consistent or
> not.
>
> Don't you think that your arguments are fundamentally flawed, because
> you insist on the probabilistic nature of the problem, while it is
> actually deterministic?

In no particular order:

1. There are no "my assumptions".

2. This is not about consistency, it's about fault tolerance.
Everything is consistent unless a server crash happens.

3. Using fsync() can increase the chance that your data will survive
a server crash / power outage. It doesn't matter in many other
scenarios though - for example, if your disk dies.

4. Trying to insist that reliability is deterministic looks unwise to
me, but it's up to you to insist on anything you want.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Sun Mar 4 03:27:21 2018
From: nginx-forum at forum.nginx.org (Sergey Sandler)
Date: Sat, 03 Mar 2018 22:27:21 -0500
Subject: thread_pool in Windows
Message-ID:

Hi,

I am running Nginx as a proxy for a single-threaded server (Shiny R)
in Windows; both the proxy and the server are located on the same
machine.
There are delays due to timeout errors:

"upstream timed out: A connection attempt failed because the connected
party did not properly respond after a period of time, request: "GET
/shared/jquery.min.js HTTP/1.1""

(and similar for other js and css files).

This can probably be solved with 'location alias'. But this is tedious,
since the required files are spread out in R/Lib subdirectories.

The proper fix seems to be restricting the thread pool with

thread_pool io_pool threads=1;

and a corresponding directive under 'location':

aio threads=io_pool;

However, nginx reports: unknown directive "thread_pool".

Is there a way to have 'thread_pool' supported in Windows?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278909,278909#msg-278909

From r at roze.lv Sun Mar 4 11:45:02 2018
From: r at roze.lv (Reinis Rozitis)
Date: Sun, 4 Mar 2018 13:45:02 +0200
Subject: fsync()-in webdav PUT
In-Reply-To: <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com> <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu>
Message-ID: <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv>

> That's what really kills performance, because of the async nature of
> nginx. That's why I'm proposing an option to do the fsync at the end of
> the PUT (or maybe even the whole operation) in a thread(pool).

Then again, this way you make it "asynchronous" again (since it is, or
could be, waiting in some thread/pool (forever)).

In general this whole thing reminds me of Linus' rant about how ".. the
only reason O_DIRECT exists is because database people are too used to
it" ..

rr

From mdounin at mdounin.ru Sun Mar 4 15:48:43 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 4 Mar 2018 18:48:43 +0300
Subject: thread_pool in Windows
In-Reply-To:
References:
Message-ID: <20180304154843.GJ89840@mdounin.ru>

Hello!

On Sat, Mar 03, 2018 at 10:27:21PM -0500, Sergey Sandler wrote:

> I am running Nginx as a proxy for a single-threaded server (Shiny R)
> in Windows; both the proxy and the server are located on the same
> machine. There are delays due to timeout errors:
> "upstream timed out: A connection attempt failed because the connected
> party did not properly respond after a period of time, request: "GET
> /shared/jquery.min.js HTTP/1.1""
> (and similar for other js and css files).
>
> This can probably be solved with 'location alias'. But this is tedious,
> since the required files are spread out in R/Lib subdirectories.
>
> The proper fix seems to be restricting the thread pool with
> thread_pool io_pool threads=1;
> and a corresponding directive under 'location'
> aio threads=io_pool;
> However, nginx reports: unknown directive "thread_pool".

The error message indicates that your upstream server can't cope with
the load. Using thread pools in nginx won't help you with this; you
need to either tune your upstream server to handle more load, or move
some load away from it, e.g., by serving static files directly by
nginx.

> Is there a way to have 'thread_pool' supported in Windows?

Support for thread pools on Windows is not implemented. If you need
them supported, you have to write the relevant code first. But, as
explained above, this won't help with your particular problem.
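As a rough illustration of the second option (a sketch only - the asset
directory and the Shiny address below are hypothetical and depend on
the local R installation):

# hypothetical paths/ports - adjust to the local R/Shiny setup
location /shared/ {
    alias "C:/R/library/shiny/www/shared/";  # jquery.min.js and friends
    expires 1h;
}

location / {
    proxy_pass http://127.0.0.1:3838;  # assumed Shiny listen address
}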
--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Sun Mar 4 17:46:48 2018
From: nginx-forum at forum.nginx.org (sonpg)
Date: Sun, 04 Mar 2018 12:46:48 -0500
Subject: NTLM sharepoint when use nginx reverse proxy
In-Reply-To: <20180302155148.GH3280@daoine.org>
References: <20180302155148.GH3280@daoine.org>
Message-ID:

I tried using a different port, but the SharePoint site gets an error:
ERR_EMPTY_RESPONSE

stream {
    upstream ecm {
        hash $remote_addr consistent;
        server ecm.test.vn:80 weight=5;
        server 10.68.8.182:80 max_fails=3 fail_timeout=30s;
        server ecm.test.vn:443 weight=5;
        server 10.68.8.182:444 max_fails=3 fail_timeout=30s;
        ntlm on;
    }

    server {
        listen 444 ssl; #Line 27
        ssl_certificate /etc/nginx/ssl/test/test.pem;
        ssl_certificate_key /etc/nginx/ssl/test/test.key;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass ecm.test.vn;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278737,278914#msg-278914

From bra at fsn.hu Mon Mar 5 10:29:36 2018
From: bra at fsn.hu (Nagy, Attila)
Date: Mon, 5 Mar 2018 11:29:36 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com> <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu> <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv>
Message-ID: <1ec761ad-a8d1-6097-6c2c-0ea8e5743669@fsn.hu>

On 03/04/2018 12:45 PM, Reinis Rozitis wrote:
>> That's what really kills performance, because of the async nature of
>> nginx. That's why I'm proposing an option to do the fsync at the end of
>> the PUT (or maybe even the whole operation) in a thread(pool).
> Then again, this way you make it "asynchronous" again (since it is, or
> could be, waiting in some thread/pool (forever)).

Jesus, why? You start the fsync in a thread and you wait for it to
complete before sending the HTTP response. Until that happens, the main
thread can service other requests.
Have you ever seen an async program which uses threads to run blocking
operations?

From r at roze.lv Mon Mar 5 11:54:53 2018
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 5 Mar 2018 13:54:53 +0200
Subject: fsync()-in webdav PUT
In-Reply-To: <1ec761ad-a8d1-6097-6c2c-0ea8e5743669@fsn.hu>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com> <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu> <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv> <1ec761ad-a8d1-6097-6c2c-0ea8e5743669@fsn.hu>
Message-ID: <000501d3b478$c5f9bd30$51ed3790$@roze.lv>

> Jesus, why? You start the fsync in a thread and you wait for it to
> complete before sending the HTTP response. Until that happens, the
> main thread can service other requests.
> Have you ever seen an async program which uses threads to run blocking
> operations?

The point was that it's odd that you are going to "trust" the userland
daemon to finish the sync operation (which obviously has the
possibility to fail) in some background thread, while not trusting the
OS/kernel to do the buffer/vm/pagecache flush at some "later" /
"better" time (which you can even fine-tune to happen immediately -
vm.dirty_ratio / vm.dirty_expire_centisecs etc).
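For instance, something along these lines in /etc/sysctl.conf makes
writeback kick in almost immediately (illustrative values only, not a
recommendation):

vm.dirty_background_ratio = 1       # start background writeback at 1% dirty memory
vm.dirty_expire_centisecs = 100     # pages dirty for >1s become eligible for writeback
vm.dirty_writeback_centisecs = 100  # wake the flusher threads every second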
Besides, even with sync you don't get a 100% guarantee that the write
actually ends up (correctly) "on the iron" - coming from the land of
ZFS ("lots of checksumming"), people will confirm that there are quite
a few parts (like drive cache/firmware, controller cache/firmware)
which occasionally lie about the state of things.

p.s. then again nginx is also an opensource project and one can
implement/propose whatever changes they need for their application,
even if those don't align with the authors' views (for example, I also
use nginx's webdav module, but I remove everything related to directory
delete (to be more safe), just because of the way the app operates)

Just my 2 cents ..

rr

From valery+nginxen at grid.net.ru Mon Mar 5 13:12:01 2018
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Mon, 5 Mar 2018 14:12:01 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <000501d3b478$c5f9bd30$51ed3790$@roze.lv>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com> <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu> <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv> <1ec761ad-a8d1-6097-6c2c-0ea8e5743669@fsn.hu> <000501d3b478$c5f9bd30$51ed3790$@roze.lv>
Message-ID: <9f664a7e-0c5d-dfe4-93ab-12fd6a94b872@grid.net.ru>

On 05-03-18 12:54, Reinis Rozitis wrote:
>> Have you ever seen an async program which uses threads to run blocking
>> operations?
>
> The point was that it's odd that you are going to "trust" the userland
> daemon to finish the sync operation (which obviously has the
> possibility to fail) in some background thread, while not trusting the
> OS/kernel to do the buffer/vm/pagecache flush at some "later" /
> "better" time (which you can even fine-tune to happen immediately -
> vm.dirty_ratio / vm.dirty_expire_centisecs etc).

And so it is odd to return a positive reply when you have only
speculated that it is positive.

> Besides, even with sync you don't get a 100% guarantee that the write
> actually ends up (correctly) "on the iron" - coming from the land of
> ZFS ("lots of checksumming"), people will confirm that there are quite
> a few parts (like drive cache/firmware, controller cache/firmware)
> which occasionally lie about the state of things.

Not lie, but speculate. The speculative behavior is then isolated in
the "iron", don't you think?

val

From bra at fsn.hu Mon Mar 5 13:53:25 2018
From: bra at fsn.hu (Nagy, Attila)
Date: Mon, 5 Mar 2018 14:53:25 +0100
Subject: fsync()-in webdav PUT
In-Reply-To: <000501d3b478$c5f9bd30$51ed3790$@roze.lv>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com> <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu> <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv> <1ec761ad-a8d1-6097-6c2c-0ea8e5743669@fsn.hu> <000501d3b478$c5f9bd30$51ed3790$@roze.lv>
Message-ID: <4a1e6f58-54fb-55a4-ccd3-6ecc1d6ad6f1@fsn.hu>

On 03/05/2018 12:54 PM, Reinis Rozitis wrote:
>> Jesus, why? You start the fsync in a thread and you wait for it to
>> complete before sending the HTTP response. Until that happens, the
>> main thread can service other requests.
>> Have you ever seen an async program which uses threads to run blocking
>> operations?
> The point was that it's odd that you are going to "trust" the userland
> daemon to finish the sync operation (which obviously has the
> possibility to fail) in some background thread, while not trusting the
> OS/kernel to do the buffer/vm/pagecache flush at some "later" /
> "better" time (which you can even fine-tune to happen immediately -
> vm.dirty_ratio / vm.dirty_expire_centisecs etc).

The point here is that you are completely on the wrong track. I'm not
talking about a "background thread".
I was talking about:
1. placing the WHOLE PUT operation into its own thread, OR
2. placing just the fsync into a thread (and waiting for it to complete
before sending the HTTP response, of course)
(or doing something equivalent which won't block nginx's main thread).
This means the HTTP success or failure response will only be given back
AFTER the fsync has successfully completed.

There is absolutely no problem with a failing fsync. It means the
operation couldn't complete and the file was not written successfully -
information which you will never get with current nginx.

> Besides, even with sync you don't get a 100% guarantee that the write
> actually ends up (correctly) "on the iron" - coming from the land of
> ZFS ("lots of checksumming"), people will confirm that there are quite
> a few parts (like drive cache/firmware, controller cache/firmware)
> which occasionally lie about the state of things.

This is not relevant here. As somebody in this thread said: this has
nothing to do with nginx. It's a matter of choosing the right hardware.

> p.s. then again nginx is also an opensource project and one can
> implement/propose whatever changes they need for their application,
> even if those don't align with the authors' views (for example, I also
> use nginx's webdav module, but I remove everything related to directory
> delete (to be more safe), just because of the way the app operates)

Sure. But this also doesn't really add to this discussion.

> Just my 2 cents ..

I'm not sure it's worth 2 cents. You clearly don't understand the
problem here, so I'm not sure why you have to speculate about something
which I didn't write...
It's completely useless...

From richarddemeny at gmail.com Mon Mar 5 14:03:40 2018
From: richarddemeny at gmail.com (Richard Demeny)
Date: Mon, 5 Mar 2018 14:03:40 +0000
Subject: fsync()-in webdav PUT
In-Reply-To: <4a1e6f58-54fb-55a4-ccd3-6ecc1d6ad6f1@fsn.hu>
References: <4a497b7c-fc8d-ff1d-688e-c180ae9dd5f7@fsn.hu> <20180227132427.GF89840@mdounin.ru> <20180228140831.GO89840@mdounin.ru> <64404d71-d667-ca92-c174-df0607819dbf@fsn.hu> <491B0E3F-0D19-4A2D-90E9-E5463CBF1035@nginx.com> <03003619-ff19-54ab-bf90-ce9db4f67754@fsn.hu> <000001d3b3ae$3b5405c0$b1fc1140$@roze.lv> <1ec761ad-a8d1-6097-6c2c-0ea8e5743669@fsn.hu> <000501d3b478$c5f9bd30$51ed3790$@roze.lv> <4a1e6f58-54fb-55a4-ccd3-6ecc1d6ad6f1@fsn.hu>
Message-ID:

This needs to stop

On Monday, March 5, 2018, Nagy, Attila wrote:

> On 03/05/2018 12:54 PM, Reinis Rozitis wrote:
>>> Jesus, why? You start the fsync in a thread and you wait for it to
>>> complete before sending the HTTP response. Until that happens, the
>>> main thread can service other requests.
>>> Have you ever seen an async program which uses threads to run
>>> blocking operations?
>>
>> The point was that it's odd that you are going to "trust" the
>> userland daemon to finish the sync operation (which obviously has the
>> possibility to fail) in some background thread, while not trusting
>> the OS/kernel to do the buffer/vm/pagecache flush at some "later" /
>> "better" time (which you can even fine-tune to happen immediately -
>> vm.dirty_ratio / vm.dirty_expire_centisecs etc).
>
> The point here is that you are completely on the wrong track. I'm not
> talking about a "background thread".
> I was talking about:
> 1. placing the WHOLE PUT operation into its own thread, OR
> 2. placing just the fsync into a thread (and waiting for it to
> complete before sending the HTTP response, of course)
> (or doing something equivalent, which won't block nginx's main thread)
> This means the HTTP success or failure response will only be given
> back AFTER the fsync has successfully completed.
>
> There is absolutely no problem with a failing fsync. It means the
> operation couldn't complete and the file was not written successfully -
> information which you will never get with current nginx.
>
>> Besides, even with sync you don't get a 100% guarantee that the write
>> actually ends up (correctly) "on the iron" - coming from the land of
>> ZFS ("lots of checksumming"), people will confirm that there are
>> quite a few parts (like drive cache/firmware, controller
>> cache/firmware) which occasionally lie about the state of things.
>
> This is not relevant here. As somebody in this thread said: this has
> nothing to do with nginx. It's a matter of choosing the right
> hardware.
>
>> p.s. then again nginx is also an opensource project and one can
>> implement/propose whatever changes they need for their application,
>> even if those don't align with the authors' views (for example, I
>> also use nginx's webdav module, but I remove everything related to
>> directory delete (to be more safe), just because of the way the app
>> operates)
>
> Sure. But this also doesn't really add to this discussion.
>
>> Just my 2 cents ..
>
> I'm not sure it's worth 2 cents. You clearly don't understand the
> problem here, so I'm not sure why you have to speculate about
> something which I didn't write...
> It's completely useless...
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Mar 6 00:46:42 2018
From: nginx-forum at forum.nginx.org (Sergey Sandler)
Date: Mon, 05 Mar 2018 19:46:42 -0500
Subject: thread_pool in Windows
In-Reply-To: <20180304154843.GJ89840@mdounin.ru>
References: <20180304154843.GJ89840@mdounin.ru>
Message-ID: <557fd90c3785b3f50bf0795ea9516146.NginxMailingListEnglish@forum.nginx.org>

Hi Maxim,

Thank you for the prompt reply.

I suspect there would be no delays/timeouts if there were a single
thread in nginx communicating with the upstream server.
Is there a simple way to restrict the number of nginx threads to one in
Windows?

Serving static files directly by nginx will likely solve the issue; it
is just not straightforward to extract the static files (css, js),
since in the Shiny R context they are spread across R/Lib
subdirectories.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278909,278934#msg-278934

From nginx-forum at forum.nginx.org Tue Mar 6 07:50:36 2018
From: nginx-forum at forum.nginx.org (peanky)
Date: Tue, 06 Mar 2018 02:50:36 -0500
Subject: Nginx mail proxy
In-Reply-To: <20180302162848.GG89840@mdounin.ru>
References: <20180302162848.GG89840@mdounin.ru>
Message-ID: <2e8f812d1762d11d4cee5ecc247bc2cd.NginxMailingListEnglish@forum.nginx.org>

Thx, Maxim! Closed.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,257510,278936#msg-278936

From gk at leniwiec.biz Tue Mar 6 08:18:23 2018
From: gk at leniwiec.biz (Grzegorz Kulewski)
Date: Tue, 6 Mar 2018 09:18:23 +0100
Subject: Stream module logging questions
In-Reply-To: <20180301115243.GD3177@Romans-MacBook-Air.local>
References: <8c4f1836-69a6-4d64-12e5-e80eb3bc647a@leniwiec.biz> <20180301115243.GD3177@Romans-MacBook-Air.local>
Message-ID:

Hello,

Thank you for your answer.

On 01.03.2018 at 12:52, Roman Arutyunyan wrote:
> Hello,
>
> On Thu, Mar 01, 2018 at 04:06:09AM +0100, Grzegorz Kulewski wrote:
>> Hello,
>>
>> 1. How can I log the IP and (especially) the port used by nginx (proxy) to connect to upstream when stream module is used?
>> 2. Can I somehow get a log entry also/instead at stream connection setup time, not only after it ends?
>
> Stream module logs this in error.log with 'info' log level as soon as this
> information is available:
>
> 2018/03/01 14:37:27 [info] 38462#0: *1 client 127.0.0.1:61020 connected to 0.0.0.0:9000
> 2018/03/01 14:37:27 [info] 38462#0: *1 proxy 127.0.0.1:61021 connected to 127.0.0.1:8001

Yes, I know about it. It would be really useful to have it in the
access log (i.e. in some variable) too. Do you think you could add
that?

Also it would be really useful to have access_log_connect and
access_log_upstream_connect directives in addition to access_log, to
enable logging on connect and on upstream connect, not only on
disconnect.

>> 3. I think that $tcpinfo_* aren't supported in stream. Is there any reason for this?
>
> There's a number of http module features still missing in stream.
> This is one of them.

I can offer virtual (and real if in Poland) beer for adding this. :)

--
Grzegorz Kulewski

From maxim at nginx.com Tue Mar 6 08:24:03 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Tue, 6 Mar 2018 11:24:03 +0300
Subject: Stream module logging questions
In-Reply-To:
References: <8c4f1836-69a6-4d64-12e5-e80eb3bc647a@leniwiec.biz> <20180301115243.GD3177@Romans-MacBook-Air.local>
Message-ID: <7e7afcff-0ae4-45aa-5b5f-63051ba91f5f@nginx.com>

Hi Grzegorz,

[..]
>>> 3. I think that $tcpinfo_* aren't supported in stream. Is
>>> there any reason for this?
>>
>> There's a number of http module features still missing in
>> stream. This is one of them.
>
> I can offer virtual (and real if in Poland) beer for adding this.
> :)

Just wondering - how do you use this info in http? Is it useful for
you?

--
Maxim Konovalov

From gk at leniwiec.biz Tue Mar 6 10:10:55 2018
From: gk at leniwiec.biz (Grzegorz Kulewski)
Date: Tue, 6 Mar 2018 11:10:55 +0100
Subject: Stream module logging questions
In-Reply-To: <7e7afcff-0ae4-45aa-5b5f-63051ba91f5f@nginx.com>
References: <8c4f1836-69a6-4d64-12e5-e80eb3bc647a@leniwiec.biz> <20180301115243.GD3177@Romans-MacBook-Air.local> <7e7afcff-0ae4-45aa-5b5f-63051ba91f5f@nginx.com>
Message-ID: <810573f0-b23f-4b91-32ef-8793cf6dc13b@leniwiec.biz>

On 06.03.2018 at 09:24, Maxim Konovalov wrote:
> Hi Grzegorz,
>
> [..]
>>>> 3. I think that $tcpinfo_* aren't supported in stream. Is
>>>> there any reason for this?
>>>
>>> There's a number of http module features still missing in
>>> stream. This is one of them.
>>
>> I can offer virtual (and real if in Poland) beer for adding this.
>> :)
>
> Just wondering - how do you use this info in http? Is it useful for
> you?

Hello,

We are using it mainly as an aid in debugging, since we sometimes have
clients with *very* poor internet connections (for example from
Africa). We are also considering using it to detect clients that are
routed extremely badly by GeoDNS, and maybe as a test platform to aid
our further GeoDNS decisions.

I think this feature is not a silver bullet that solves all problems,
but it can be useful in some cases and should be easy to add to stream
in nginx, and to any other program.

--
Grzegorz Kulewski

From nginx-forum at forum.nginx.org Tue Mar 6 12:13:20 2018
From: nginx-forum at forum.nginx.org (mejetjoseph)
Date: Tue, 06 Mar 2018 07:13:20 -0500
Subject: Check the size of one of the request header in nginx conf
Message-ID: <4b785ebfbbc2128c73028a9661377446.NginxMailingListEnglish@forum.nginx.org>

Dear Team,

I would like to know whether it is possible to check the size of one of
the header values in the nginx conf file. I need to reset the header
value if its size exceeds 64 characters.

Could you please tell me whether I can do this condition check in the
nginx conf file?

Kind regards,
Joseph

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278940,278940#msg-278940

From arozyev at nginx.com Tue Mar 6 12:16:47 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Tue, 6 Mar 2018 15:16:47 +0300
Subject: Check the size of one of the request header in nginx conf
In-Reply-To: <4b785ebfbbc2128c73028a9661377446.NginxMailingListEnglish@forum.nginx.org>
References: <4b785ebfbbc2128c73028a9661377446.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2F70AF05-E09E-4DDE-B87D-58EB625052A7@nginx.com>

hi,

I think you can do such a check with the lua/njs modules.

br,
Aziz.

> On 6 Mar 2018, at 15:13, mejetjoseph wrote:
>
> Dear Team,
>
> I would like to know whether it is possible to check the size of one
> of the header values in the nginx conf file. I need to reset the
> header value if its size exceeds 64 characters.
>
> Could you please tell me whether I can do this condition check in the
> nginx conf file?
>
> Kind regards,
> Joseph
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278940,278940#msg-278940
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From vbart at nginx.com Tue Mar 6 12:57:31 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 06 Mar 2018 15:57:31 +0300
Subject: thread_pool in Windows
In-Reply-To: <557fd90c3785b3f50bf0795ea9516146.NginxMailingListEnglish@forum.nginx.org>
References: <20180304154843.GJ89840@mdounin.ru> <557fd90c3785b3f50bf0795ea9516146.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2850488.7uyVhPghoa@vbart-workstation>

On Monday 05 March 2018 19:46:42 Sergey Sandler wrote:
> Hi Maxim,
>
> Thank you for the prompt reply.
>
> I suspect there would be no delays/timeouts if there were a single
> thread in nginx communicating with the upstream server.
> Is there a simple way to restrict the number of nginx threads to one
> in Windows?
>
[..]

nginx currently cannot use more than one thread in Windows for all
operations. Support for thread pools means adding support for more
threads. Moreover, thread pools in nginx are used only for reading and
writing files.
They are never used for connections. wbr, Valentin V. Bartenev From vl at nginx.com Tue Mar 6 14:43:47 2018 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 6 Mar 2018 17:43:47 +0300 Subject: Routing based on ALPN In-Reply-To: <8febc163-7206-e530-7ea1-019d476a2271@metacode.biz> References: <8febc163-7206-e530-7ea1-019d476a2271@metacode.biz> Message-ID: <20180306144347.GA704@vlpc> On Sun, Feb 25, 2018 at 08:16:18PM +0100, Wiktor Kwapisiewicz via nginx wrote: > >> Is there a way to access and save ALPN value to a variable? > > > > It should possible to parse the incoming buffer with https://nginx.org/r/js_filter and create a variable to make a routing decision on. > > > > Excellent idea for quickly solving this problem, thanks! > > Would a long term solution involve creating a new, additional variable > in the ssl_preread module (e.g. ssl_preread_alpn)? > below is the initial version of patch that creates the "$ssl_preread_alpn_protocols" variable; the content is a comma-separated list of protocols, sent by client in ALPN extension, if present. Any feedback is appretiated. -------------- next part -------------- # HG changeset patch # User Roman Arutyunyan # Date 1520346970 -10800 # Tue Mar 06 17:36:10 2018 +0300 # Node ID edea1fea2b3970889946d38a077c7f3ed98613f5 # Parent 265c29b0b8b8c54b1c623268481ed85324ce3c79 Stream ssl_preread: $ssl_preread_alpn_protocols variable. The variable keeps a comma-separated list of ALPN protocol names sent by client. diff --git a/src/stream/ngx_stream_ssl_preread_module.c b/src/stream/ngx_stream_ssl_preread_module.c --- a/src/stream/ngx_stream_ssl_preread_module.c +++ b/src/stream/ngx_stream_ssl_preread_module.c @@ -17,10 +17,12 @@ typedef struct { typedef struct { size_t left; size_t size; + size_t ext; u_char *pos; u_char *dst; u_char buf[4]; ngx_str_t host; + ngx_str_t alpn; ngx_log_t *log; ngx_pool_t *pool; ngx_uint_t state; @@ -32,6 +34,8 @@ static ngx_int_t ngx_stream_ssl_preread_ ngx_stream_ssl_preread_ctx_t *ctx, u_char *pos, u_char *last); static ngx_int_t ngx_stream_ssl_preread_server_name_variable( ngx_stream_session_t *s, ngx_stream_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_stream_ssl_preread_alpn_protocols_variable( + ngx_stream_session_t *s, ngx_stream_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_stream_ssl_preread_add_variables(ngx_conf_t *cf); static void *ngx_stream_ssl_preread_create_srv_conf(ngx_conf_t *cf); static char *ngx_stream_ssl_preread_merge_srv_conf(ngx_conf_t *cf, void *parent, @@ -85,6 +89,9 @@ static ngx_stream_variable_t ngx_stream { ngx_string("ssl_preread_server_name"), NULL, ngx_stream_ssl_preread_server_name_variable, 0, 0, 0 }, + { ngx_string("ssl_preread_alpn_protocols"), NULL, + ngx_stream_ssl_preread_alpn_protocols_variable, 0, 0, 0 }, + ngx_stream_null_variable }; @@ -175,7 +182,7 @@ static ngx_int_t ngx_stream_ssl_preread_parse_record(ngx_stream_ssl_preread_ctx_t *ctx, u_char *pos, u_char *last) { - size_t left, n, size; + size_t left, n, size, ext; u_char *dst, *p; enum { @@ -192,7 +199,10 @@ ngx_stream_ssl_preread_parse_record(ngx_ sw_ext_header, /* extension_type, extension_data length */ sw_sni_len, /* SNI length */ sw_sni_host_head, /* SNI name_type, host_name length */ - sw_sni_host /* SNI host_name */ + sw_sni_host, /* SNI host_name */ + sw_alpn_len, /* ALPN length */ + sw_alpn_proto_len, /* ALPN protocol_name length */ + sw_alpn_proto_data /* ALPN protocol_name */ } state; ngx_log_debug2(NGX_LOG_DEBUG_STREAM, ctx->log, 0, @@ -201,6 +211,7 @@ 
ngx_stream_ssl_preread_parse_record(ngx_ state = ctx->state; size = ctx->size; left = ctx->left; + ext = ctx->ext; dst = ctx->dst; p = ctx->buf; @@ -299,10 +310,18 @@ ngx_stream_ssl_preread_parse_record(ngx_ break; case sw_ext_header: - if (p[0] == 0 && p[1] == 0) { + if (p[0] == 0 && p[1] == 0 && ctx->host.data == NULL) { /* SNI extension */ state = sw_sni_len; - dst = NULL; + dst = p; + size = 2; + break; + } + + if (p[0] == 0 && p[1] == 16 && ctx->alpn.data == NULL) { + /* ALPN extension */ + state = sw_alpn_len; + dst = p; size = 2; break; } @@ -313,6 +332,7 @@ ngx_stream_ssl_preread_parse_record(ngx_ break; case sw_sni_len: + ext = (p[0] << 8) + p[1]; state = sw_sni_host_head; dst = p; size = 3; @@ -328,6 +348,13 @@ ngx_stream_ssl_preread_parse_record(ngx_ state = sw_sni_host; size = (p[1] << 8) + p[2]; + if (ext < 3 + size) { + ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0, + "ssl preread: SNI format error"); + return NGX_DECLINED; + } + ext -= 3 + size; + ctx->host.data = ngx_pnalloc(ctx->pool, size); if (ctx->host.data == NULL) { return NGX_ERROR; @@ -341,7 +368,64 @@ ngx_stream_ssl_preread_parse_record(ngx_ ngx_log_debug1(NGX_LOG_DEBUG_STREAM, ctx->log, 0, "ssl preread: SNI hostname \"%V\"", &ctx->host); - return NGX_OK; + + state = sw_ext; + dst = NULL; + size = ext; + break; + + case sw_alpn_len: + ext = (p[0] << 8) + p[1]; + dst = p; + size = 1; + + ctx->alpn.data = ngx_pnalloc(ctx->pool, ext); + if (ctx->alpn.data == NULL) { + return NGX_ERROR; + } + + state = sw_alpn_proto_len; + break; + + case sw_alpn_proto_len: + dst = ctx->alpn.data + ctx->alpn.len; + size = p[0]; + + if (ext < 1 + size) { + ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0, + "ssl preread: ALPN format error"); + return NGX_DECLINED; + } + ext -= 1 + size; + state = sw_alpn_proto_data; + break; + + case sw_alpn_proto_data: + + if (p[0] == 0) { + ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0, + "ssl preread: ALPN protocol zero length"); + return NGX_DECLINED; + } + + ctx->alpn.len += p[0]; + + ngx_log_debug1(NGX_LOG_DEBUG_STREAM, ctx->log, 0, + "ssl preread: ALPN protocols \"%V\"", &ctx->alpn); + + if (ext) { + ctx->alpn.data[ctx->alpn.len++] = ','; + + state = sw_alpn_proto_len; + dst = p; + size = 1; + break; + } + + state = sw_ext; + dst = NULL; + size = 0; + break; } if (left < size) { @@ -354,6 +438,7 @@ ngx_stream_ssl_preread_parse_record(ngx_ ctx->state = state; ctx->size = size; ctx->left = left; + ctx->ext = ext; ctx->dst = dst; return NGX_AGAIN; @@ -384,6 +469,29 @@ ngx_stream_ssl_preread_server_name_varia static ngx_int_t +ngx_stream_ssl_preread_alpn_protocols_variable(ngx_stream_session_t *s, + ngx_variable_value_t *v, uintptr_t data) +{ + ngx_stream_ssl_preread_ctx_t *ctx; + + ctx = ngx_stream_get_module_ctx(s, ngx_stream_ssl_preread_module); + + if (ctx == NULL) { + v->not_found = 1; + return NGX_OK; + } + + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->len = ctx->alpn.len; + v->data = ctx->alpn.data; + + return NGX_OK; +} + + +static ngx_int_t ngx_stream_ssl_preread_add_variables(ngx_conf_t *cf) { ngx_stream_variable_t *var, *v; From arozyev at nginx.com Tue Mar 6 16:54:08 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Tue, 6 Mar 2018 19:54:08 +0300 Subject: Check the size of one of the request header in nginx conf In-Reply-To: <2F70AF05-E09E-4DDE-B87D-58EB625052A7@nginx.com> References: <4b785ebfbbc2128c73028a9661377446.NginxMailingListEnglish@forum.nginx.org> <2F70AF05-E09E-4DDE-B87D-58EB625052A7@nginx.com> Message-ID: 
<3A5E134E-23D2-4888-B75C-5B220B5973CF@nginx.com>

By the way, there is an easier solution to this (thanks to Ruslan
Ermilov), something like this:

map $http_ $disabled {
    default    0;
    "~^.{65,}" 1;   # the '~' prefix makes this a regex match on the header value
}

location / {
    if ($disabled) {
        return 404;
    }
    proxy_pass http://upstream;
}

br,
Aziz.

> On 6 Mar 2018, at 15:16, Aziz Rozyev wrote:
>
> hi,
>
> I think you can do such a check with the lua/njs modules.
>
> br,
> Aziz.
>
>> On 6 Mar 2018, at 15:13, mejetjoseph wrote:
>>
>> Dear Team,
>>
>> I would like to know whether it is possible to check the size of one
>> of the header values in the nginx conf file. I need to reset the
>> header value if its size exceeds 64 characters.
>>
>> Could you please tell me whether I can do this condition check in the
>> nginx conf file?
>>
>> Kind regards,
>> Joseph
>>
>> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278940,278940#msg-278940
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Wed Mar 7 03:38:21 2018
From: nginx-forum at forum.nginx.org (Sergey Sandler)
Date: Tue, 06 Mar 2018 22:38:21 -0500
Subject: thread_pool in Windows
In-Reply-To: <2850488.7uyVhPghoa@vbart-workstation>
References: <2850488.7uyVhPghoa@vbart-workstation>
Message-ID: <67953aa2b71684159b7c0492801a0650.NginxMailingListEnglish@forum.nginx.org>

Thank you, Valentin.

There is something I am missing. Please see the start of the error.log
below,

2018/03/04 14:05:50 [notice] 5144#9212: using the "select" event method
2018/03/04 14:05:50 [notice] 5144#9212: using the "select" event method
2018/03/04 14:05:50 [notice] 5144#9212: nginx/1.12.2
2018/03/04 14:05:50 [notice] 5144#9212: nginx/1.12.2
2018/03/04 14:05:50 [info] 5144#9212: OS: 260200 build:9200, "", suite:300, type:1
2018/03/04 14:05:50 [notice] 5144#9212: start worker processes
2018/03/04 14:05:50 [notice] 5144#9212: start worker processes
2018/03/04 14:05:50 [notice] 5144#9212: start worker process 13648
2018/03/04 14:05:50 [notice] 5144#9212: start worker process 13648
2018/03/04 14:05:51 [notice] 13648#4924: nginx/1.12.2
2018/03/04 14:05:51 [notice] 13648#4924: nginx/1.12.2
2018/03/04 14:05:51 [info] 13648#4924: OS: 260200 build:9200, "", suite:300, type:1
2018/03/04 14:05:51 [notice] 13648#4924: create thread 17496
2018/03/04 14:05:51 [notice] 13648#4924: create thread 17496
2018/03/04 14:05:51 [notice] 13648#4924: create thread 16328
2018/03/04 14:05:51 [notice] 13648#4924: create thread 16328
2018/03/04 14:05:51 [notice] 13648#4924: create thread 13940
2018/03/04 14:05:51 [notice] 13648#4924: create thread 13940

There is a single process (worker_processes 1 in nginx.conf) with
seemingly three threads (I am not sure why the lines in the log file
are duplicated). Is the purpose of the additional threads to read
static files (from the server)?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278909,278946#msg-278946

From lists at lazygranch.com Wed Mar 7 04:20:13 2018
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Tue, 6 Mar 2018 20:20:13 -0800
Subject: add_before_body
Message-ID: <20180306202013.0808bb81.lists@lazygranch.com>

I can't get the add_before_body feature to work. I have verified the
module is installed. Here is what I am trying to accomplish. I want to
add the following lines to the header of every html file:

------------------------
--------------------------------------------------

This is supposed to satisfy all browser requests for favicon, tablet
icons, etc.
I have these lines in a file named "before", and that file is located
in the web root. This is how I implemented the feature:

---------
location / {
    root /usr/share/nginx/html/example.com/public_html;
    add_before_body /before;
    index index.html index.htm;
}
--------------------------------------------

From ru at nginx.com Wed Mar 7 08:22:03 2018
From: ru at nginx.com (Ruslan Ermilov)
Date: Wed, 7 Mar 2018 11:22:03 +0300
Subject: thread_pool in Windows
In-Reply-To: <67953aa2b71684159b7c0492801a0650.NginxMailingListEnglish@forum.nginx.org>
References: <2850488.7uyVhPghoa@vbart-workstation> <67953aa2b71684159b7c0492801a0650.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180307082203.GE65689@lo0.su>

On Tue, Mar 06, 2018 at 10:38:21PM -0500, Sergey Sandler wrote:
> Thank you, Valentin.
>
> There is something I am missing. Please see the start of the error.log
> below,
>
> 2018/03/04 14:05:50 [notice] 5144#9212: using the "select" event method
> 2018/03/04 14:05:50 [notice] 5144#9212: using the "select" event method
> 2018/03/04 14:05:50 [notice] 5144#9212: nginx/1.12.2
> 2018/03/04 14:05:50 [notice] 5144#9212: nginx/1.12.2
> 2018/03/04 14:05:50 [info] 5144#9212: OS: 260200 build:9200, "", suite:300, type:1
> 2018/03/04 14:05:50 [notice] 5144#9212: start worker processes
> 2018/03/04 14:05:50 [notice] 5144#9212: start worker processes
> 2018/03/04 14:05:50 [notice] 5144#9212: start worker process 13648
> 2018/03/04 14:05:50 [notice] 5144#9212: start worker process 13648
> 2018/03/04 14:05:51 [notice] 13648#4924: nginx/1.12.2
> 2018/03/04 14:05:51 [notice] 13648#4924: nginx/1.12.2
> 2018/03/04 14:05:51 [info] 13648#4924: OS: 260200 build:9200, "", suite:300, type:1
> 2018/03/04 14:05:51 [notice] 13648#4924: create thread 17496
> 2018/03/04 14:05:51 [notice] 13648#4924: create thread 17496
> 2018/03/04 14:05:51 [notice] 13648#4924: create thread 16328
> 2018/03/04 14:05:51 [notice] 13648#4924: create thread 16328
> 2018/03/04 14:05:51 [notice] 13648#4924: create thread 13940
> 2018/03/04 14:05:51 [notice] 13648#4924: create thread 13940
>
> There is a single process (worker_processes 1 in nginx.conf) with
> seemingly three threads (I am not sure why the lines in the log file
> are duplicated). Is the purpose of the additional threads to read
> static files (from the server)?

nginx for Windows only uses one worker process, and inside it only one
worker thread is created. You see three threads because it also
creates two other threads for the cache_manager and cache_loader
(these are implemented as separate processes on UNIX):

    if (ngx_create_thread(&wtid, ngx_worker_thread, NULL, log) != 0) {
        goto failed;
    }

    if (ngx_create_thread(&cmtid, ngx_cache_manager_thread, NULL, log) != 0) {
        goto failed;
    }

    if (ngx_create_thread(&cltid, ngx_cache_loader_thread, NULL, log) != 0) {
        goto failed;
    }

See also:
http://nginx.org/en/docs/windows.html#known_issues (item #1)
http://nginx.org/en/docs/windows.html#possible_future_enhancements (item #3)

From nginx-forum at forum.nginx.org Wed Mar 7 08:24:18 2018
From: nginx-forum at forum.nginx.org (neuronetv)
Date: Wed, 07 Mar 2018 03:24:18 -0500
Subject: newbie: nginx rtmp module
Message-ID: <15df435066aa00ee69c37332946c30ed.NginxMailingListEnglish@forum.nginx.org>

I'm running CentOS 6 and installed nginx using 'yum install nginx'.
Videos are not working and I don't know whether I have the rtmp module
or not.
Here is the text from the yum install:

Installing:
 collectd-nginx      x86_64   4.10.9-4.el6   epel    14 k
 munin-nginx         noarch   2.0.33-1.el6   epel    26 k
 nginx               x86_64   1.10.2-1.el6   epel   462 k
 nginx-all-modules   noarch   1.10.2-1.el6   epel   7.7 k
 nginx-filesystem    noarch   1.10.2-1.el6   epel   8.5 k

It says 'nginx-all-modules' on the 4th line but no other clue. Is there
a way to tell if I have the rtmp module? If I don't have it, is there a
way to install it?

Extra info: I did previously install nginx using the nginx-1.13.9.tar.gz
tarball and also installed the nginx rtmp module from the git clone, and
it worked, but I couldn't get nginx to work with multiple domains using
that install. Should I go back to the tarball install? Or is there a
way to get rtmp working with my current 'yum' install? Thanks for any
help.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278950#msg-278950

From lists at lazygranch.com Wed Mar 7 09:17:53 2018
From: lists at lazygranch.com (Gary)
Date: Wed, 07 Mar 2018 01:17:53 -0800
Subject: newbie: nginx rtmp module
In-Reply-To: <15df435066aa00ee69c37332946c30ed.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <3s54e0ovpd68tplheolkgppi.1520414273418@lazygranch.com>

nginx - V will show what modules are installed.

-------- Original Message --------
From: nginx-forum at forum.nginx.org
Sent: March 7, 2018 12:24 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: newbie: nginx rtmp module

I'm running CentOS 6 and installed nginx using 'yum install nginx'.
Videos are not working and I don't know whether I have the rtmp module
or not. Here is the text from the yum install:

Installing:
 collectd-nginx      x86_64   4.10.9-4.el6   epel    14 k
 munin-nginx         noarch   2.0.33-1.el6   epel    26 k
 nginx               x86_64   1.10.2-1.el6   epel   462 k
 nginx-all-modules   noarch   1.10.2-1.el6   epel   7.7 k
 nginx-filesystem    noarch   1.10.2-1.el6   epel   8.5 k

It says 'nginx-all-modules' on the 4th line but no other clue. Is there
a way to tell if I have the rtmp module? If I don't have it, is there a
way to install it?

Extra info: I did previously install nginx using the nginx-1.13.9.tar.gz
tarball and also installed the nginx rtmp module from the git clone and
it ...

From nginx-forum at forum.nginx.org Wed Mar 7 09:53:30 2018
From: nginx-forum at forum.nginx.org (neuronetv)
Date: Wed, 07 Mar 2018 04:53:30 -0500
Subject: newbie: nginx rtmp module
In-Reply-To: <3s54e0ovpd68tplheolkgppi.1520414273418@lazygranch.com>
References: <3s54e0ovpd68tplheolkgppi.1520414273418@lazygranch.com>
Message-ID:

Thank you for your feedback gariac.

# nginx - V
nginx: invalid option: "V"

I think this may be because I have the 'yum install' version of nginx
and not the tarball. TIA for any further ideas.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278952#msg-278952

From lists at lazygranch.com Wed Mar 7 09:59:38 2018
From: lists at lazygranch.com (Gary)
Date: Wed, 07 Mar 2018 01:59:38 -0800
Subject: newbie: nginx rtmp module
In-Reply-To:
Message-ID:

Grrr, that Swift keyboard. There is no space before the capital V:

nginx -V

I'd be surprised if that command doesn't work now.
Any reason you haven't upgraded to CentOS 7?

-------- Original Message --------
From: nginx-forum at forum.nginx.org
Sent: March 7, 2018 1:53 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Re: newbie: nginx rtmp module

Thank you for your feedback gariac.

# nginx - V
nginx: invalid option: "V"

I think this may be because I have the 'yum install' version of nginx
and not the tarball. TIA for any further ideas.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278952#msg-278952

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From wiktor at metacode.biz Wed Mar 7 11:38:51 2018
From: wiktor at metacode.biz (Wiktor Kwapisiewicz)
Date: Wed, 7 Mar 2018 12:38:51 +0100
Subject: Routing based on ALPN
In-Reply-To: <20180306144347.GA704@vlpc>
References: <8febc163-7206-e530-7ea1-019d476a2271@metacode.biz> <20180306144347.GA704@vlpc>
Message-ID: <77f279db-374d-02dc-417e-fbb1fea29889@metacode.biz>

> below is the initial version of patch that creates the
> "$ssl_preread_alpn_protocols" variable; the content is a comma-separated
> list of protocols, sent by client in ALPN extension, if present.
>
> Any feedback is appreciated.

I have just tested this patch and can confirm it's working perfectly
fine.

The patch was applied against this commit:
https://github.com/nginx/nginx/commit/83dceda8688fcba6da9fd12f6480606563d7b7a3
And I was using LibreSSL.

I've set up three upstream servers for tests, two using node.js (HTTPS)
and one Prosody (an XMPP server):

map $ssl_preread_alpn_protocols $upstream {
    default       node1;
    "h2,http/1.1" node2;
    "xmpp-client" prosody;
}

Curling with no ALPN correctly returns the answer from node1:

> curl -k -i --no-alpn https://docker.local
HTTP/1.1 200 OK
Date: Wed, 07 Mar 2018 11:24:26 GMT
Connection: keep-alive
Content-Length: 23

Everything works: node1

Curling with the default configuration (ALPN: h2,http/1.1) also works:

> curl -k -i https://docker.local
HTTP/1.1 200 OK
Date: Wed, 07 Mar 2018 11:24:43 GMT
Connection: keep-alive
Content-Length: 23

Everything works: node2

Then I tested XMPP by adding an SRV record:

> dig _xmpps-client._tcp.testing.metacode.biz SRV
;; ANSWER SECTION:
_xmpps-client._tcp.testing.metacode.biz. 119 IN SRV 1 1 443 docker.local.

And using Gajim to connect to testing.metacode.biz. It worked.

Nginx (web_1) logs correctly show all connection attempts with ALPN
values:

prosody_1 | c2s2564890 info Client connected
web_1 | 192.168.99.1 xmpp-client [07/Mar/2018:11:21:58 +0000] TCP 200 2335 871 1.566
web_1 | 192.168.99.1 [07/Mar/2018:11:24:26 +0000] TCP 200 1546 327 0.298
web_1 | 192.168.99.1 h2,http/1.1 [07/Mar/2018:11:24:35 +0000] TCP 200 1539 262 0.324
web_1 | 192.168.99.1 h2,http/1.1 [07/Mar/2018:11:24:43 +0000] TCP 200 1539 262 0.293
prosody_1 | c2s2564890 info Authenticated as wiktor at testing.metacode.biz

I've used:

log_format basic '$remote_addr $ssl_preread_alpn_protocols [$time_local] '
                 '$protocol $status $bytes_sent $bytes_received '
                 '$session_time';

This looks *very good*, thanks for your time!
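For reference, the surrounding stream block looked roughly like this
(a sketch - the upstream addresses here are placeholders, not my real
setup):

stream {
    map $ssl_preread_alpn_protocols $upstream {
        default       node1;
        "h2,http/1.1" node2;
        "xmpp-client" prosody;
    }

    upstream node1   { server 172.17.0.2:443; }   # placeholder addresses
    upstream node2   { server 172.17.0.3:443; }
    upstream prosody { server 172.17.0.4:5223; }

    server {
        listen 443;
        ssl_preread on;        # makes the $ssl_preread_* variables available
        proxy_pass $upstream;
    }
}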
Kind regards,
Wiktor

--
*/metacode/*

From maxim at nginx.com Wed Mar 7 11:47:09 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Wed, 7 Mar 2018 14:47:09 +0300
Subject: Routing based on ALPN
In-Reply-To: <77f279db-374d-02dc-417e-fbb1fea29889@metacode.biz>
References: <8febc163-7206-e530-7ea1-019d476a2271@metacode.biz> <20180306144347.GA704@vlpc> <77f279db-374d-02dc-417e-fbb1fea29889@metacode.biz>
Message-ID: <82f25f50-4dd3-b4e1-5699-52da27d28abe@nginx.com>

On 07/03/2018 14:38, Wiktor Kwapisiewicz via nginx wrote:
[...]
> This looks *very good*, thanks for your time!

Thanks for your testing, Wiktor.

--
Maxim Konovalov

From lucas at lucasrolff.com Wed Mar 7 16:55:15 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Wed, 7 Mar 2018 16:55:15 +0000
Subject: location blocks, and if conditions in server context
Message-ID: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com>

Hi guys,

I have a few hundred nginx zones, where I try to remove as much
duplicate code as possible and to inherit as much as possible, to
prevent nginx from consuming memory (and also to keep things clean).

However, I came across something today that I don't know how to get my
head around without duplicating code, even within a single server
context.

I have a set of distributed nginx servers which all require SSL
certificates, and I use Let's Encrypt for this.
The Let's Encrypt validation uses a path such as
/.well-known/acme-challenge/

For this, I made a location block such as:

location ~* /.well-known {
    proxy_pass http://letsencrypt.validation.backend.com$request_uri;
}

Basically, I proxy_pass to the backend where I actually run the acme
client - works great.

However, I have an option to force a redirect from http to https, and
I've implemented that by doing an if condition at the server block
level (so not within a location):

if ($sslproxy_protocol = "http") {
    return 301 https://$host$request_uri;
}

This means I have something like:

1: location ~* /.well-known
2: if condition doing redirect if protocol is http
3: location /
4: location /api
5: location /test

All my templates include 1 to 3, and *might* have additional locations.
I've decided not to put e.g. location /api inside location / - because
there are things I don't want to inherit, thus keeping them at the same
"level", and not a location context inside a location context.
Things I don't want to inherit are stuff such as headers, the
max_ranges directive, etc.

My issue is: because of this if condition that does the redirect to
https, it also applies to my location ~* /.well-known - thus causing a
redirect, and I want to prevent this, since it breaks the Let's Encrypt
validation (they do not accept 301 redirects).

A solution would be to move the if condition into each location block
that I want to have redirected, but then I start repeating myself 1, 2
or even 10 times, which I don't want to do.

Is there a smart way, without adding too much complexity, which is
still super-fast (I know if is evil)?
A config example is seen below:

server {
    listen 80;
    listen 443 ssl http2;

    server_name secure.domain.com;

    access_log /var/log/nginx/secure.domain.com main;

    location ~* /.well-known {
        proxy_pass http://letsencrypt.validation.backend.com$request_uri;
    }

    if ($sslproxy_protocol = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        expires 10m;
        etag off;

        proxy_ignore_client_abort on;
        proxy_intercept_errors on;
        proxy_next_upstream error timeout invalid_header;
        proxy_ignore_headers Set-Cookie Vary X-Accel-Expires Expires Cache-Control;
        more_clear_headers Set-Cookie Cookie Upgrade;

        proxy_cache one;
        proxy_cache_min_uses 1;
        proxy_cache_lock off;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

        proxy_cache_valid 200 10m;
        proxy_cache_valid any 1m;

        proxy_cache_revalidate on;
        proxy_ssl_server_name on;

        include /etc/nginx/server.conf;

        proxy_set_header Host backend-host.com;

        proxy_cache_key "http://backend-host.com-1-$request_uri";
        proxy_pass http://backend-host.com$request_uri;

        proxy_redirect off;
    }
}

Thank you in advance!

Best Regards,
Lucas Rolff

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From peter_booth at me.com Wed Mar 7 22:08:40 2018
From: peter_booth at me.com (Peter Booth)
Date: Wed, 07 Mar 2018 17:08:40 -0500
Subject: location blocks, and if conditions in server context
In-Reply-To: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com>
References: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com>
Message-ID: <8F4BDF34-E64E-43C1-A6BB-A5A0D2669852@me.com>

I agree that avoiding if is a good thing. But avoiding duplication
isn't always good. Have you considered a model where your configuration
file is generated with a templating engine? The input file that you
modify to add/remove/change configurations could be free of
duplication, but the conf file that nginx reads could be concrete and
verbose.

Sent from my iPhone

> On Mar 7, 2018, at 11:55, Lucas Rolff wrote:
>
> Hi guys,
>
> I have a few hundred nginx zones, where I try to remove as much
> duplicate code as possible and to inherit as much as possible, to
> prevent nginx from consuming memory (and also to keep things clean).
>
> However, I came across something today that I don't know how to get
> my head around without duplicating code, even within a single server
> context.
>
> I have a set of distributed nginx servers which all require SSL
> certificates, and I use Let's Encrypt for this.
> When doing the Let's Encrypt validation, it uses a path such as
> /.well-known/acme-challenge/
>
> For this, I made a location block such as:
>
> location ~* /.well-known {
>     proxy_pass http://letsencrypt.validation.backend.com$request_uri;
> }
>
> Basically, I proxy_pass to the backend where I actually run the acme
> client - works great.
>
> However, I have an option to force a redirect from http to https, and
> I've implemented that by doing an if condition at the server block
> level (so not within a location):
>
> if ($sslproxy_protocol = "http") {
>     return 301 https://$host$request_uri;
> }
>
> This means I have something like:
>
> 1: location ~* /.well-known
> 2: if condition doing redirect if protocol is http
> 3: location /
> 4: location /api
> 5: location /test
>
> All my templates include 1 to 3, and *might* have additional
> locations.
> I've decided not to put e.g. location /api inside location / -
> because there are things I don't want to inherit, thus keeping them
> at the same "level", and not a location context inside a location
> context.
> Things I don't want to inherit are stuff such as headers, the
> max_ranges directive, etc.
>
> My issue is: because of this if condition that does the redirect to
> https, it also applies to my location ~* /.well-known - thus causing
> a redirect, and I want to prevent this, since it breaks the Let's
> Encrypt validation (they do not accept 301 redirects).
>
> A solution would be to move the if condition into each location block
> that I want to have redirected, but then I start repeating myself 1,
> 2 or even 10 times, which I don't want to do.
>
> Is there a smart way, without adding too much complexity, which is
> still super-fast (I know if is evil)?
>
> A config example is seen below:
>
> server {
>     listen 80;
>     listen 443 ssl http2;
>
>     server_name secure.domain.com;
>
>     access_log /var/log/nginx/secure.domain.com main;
>
>     location ~* /.well-known {
>         proxy_pass http://letsencrypt.validation.backend.com$request_uri;
>     }
>
>     if ($sslproxy_protocol = "http") {
>         return 301 https://$host$request_uri;
>     }
>
>     location / {
>         expires 10m;
>         etag off;
>
>         proxy_ignore_client_abort on;
>         proxy_intercept_errors on;
>         proxy_next_upstream error timeout invalid_header;
>         proxy_ignore_headers Set-Cookie Vary X-Accel-Expires Expires Cache-Control;
>         more_clear_headers Set-Cookie Cookie Upgrade;
>
>         proxy_cache one;
>         proxy_cache_min_uses 1;
>         proxy_cache_lock off;
>         proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
>
>         proxy_cache_valid 200 10m;
>         proxy_cache_valid any 1m;
>
>         proxy_cache_revalidate on;
>         proxy_ssl_server_name on;
>
>         include /etc/nginx/server.conf;
>
>         proxy_set_header Host backend-host.com;
>
>         proxy_cache_key "http://backend-host.com-1-$request_uri";
>         proxy_pass http://backend-host.com$request_uri;
>
>         proxy_redirect off;
>     }
> }
>
> Thank you in advance!
>
> Best Regards,
> Lucas Rolff
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lucas at lucasrolff.com Wed Mar 7 22:18:45 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Wed, 7 Mar 2018 22:18:45 +0000
Subject: location blocks, and if conditions in server context
In-Reply-To: <8F4BDF34-E64E-43C1-A6BB-A5A0D2669852@me.com>
References: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com>, <8F4BDF34-E64E-43C1-A6BB-A5A0D2669852@me.com>
Message-ID:

Hi Peter,

I already generate the configs using a template engine (more
specifically, Laravel Blade), so creating the functionality in the
template is easy; however, I generally don't like having server blocks
that can be hundreds of lines long because of repeated directives.

I don't know the internals of nginx fully, or how it uses memory when
storing configs, but I would assume that inheritance is better than
duplication in terms of memory usage.

I'm just wondering if there's a way I can avoid the if condition within
the location blocks.

- lucas

Get Outlook for iOS

________________________________
From: nginx on behalf of Peter Booth
Sent: Wednesday, March 7, 2018 11:08:40 PM
To: nginx at nginx.org
Subject: Re: location blocks, and if conditions in server context

I agree that avoiding if is a good thing. But avoiding duplication
isn't always good. Have you considered a model where your configuration
file is generated with a templating engine?
The input file that you modify to add/remove/change configurations could be free of duplication, but the conf file that nginx reads could be concrete and verbose. Sent from my iPhone On Mar 7, 2018, at 11:55, Lucas Rolff > wrote: Hi guys, I have a few hundred nginx zones, where I try to remove as much duplicate code as possible, and inherit as much as possible to prevent nginx from consuming memory (and also to keep things clean). However I came across something today, that I don't know how to get my head around without duplicating code, even within a single server context. I have a set of distributed nginx servers, all of which require SSL certificates, where I use Let's Encrypt to do this. When doing the Let's Encrypt validation, it uses a path such as /.well-known/acme-challenge/ For this, I made a location block such as: location ~* /.well-known { proxy_pass http://letsencrypt.validation.backend.com$request_uri; } Basically, I proxy_pass to the backend where I actually run the acme client - works great. However, I have an option to force a redirect from http to https, and I've implemented that by doing an if condition on the server block level (so not within a location): if ($sslproxy_protocol = "http") { return 301 https://$host$request_uri; } This means I have something like: 1: location ~* /.well-known 2: if condition doing redirect if protocol is http 3: location / 4: location /api 5: location /test All my templates include 1 to 3, and *might* have additional locations. I've decided to not put e.g. location /api inside the location / - because there's things I don't want to inherit, thus keeping them at the same "level", and not a location context inside a location context. Things I don't want to inherit, is stuff such as headers, max_ranges directive etc. My issue is - because of this if condition that does the redirect to https - it also applies to my location ~* /.well-known - thus causing a redirect, and I want to prevent this, since it breaks the Let's Encrypt validation (they do not accept 301 redirects). A solution would be to move the if condition into each location block that I want to have redirected, but then I start repeating myself 1, 2 or even 10 times - which I don't wanna do. Is there a smart way without adding too much complexity, which is still super-fast (I know if is evil) ? A config example is seen below: server { listen 80; listen 443 ssl http2; server_name secure.domain.com; access_log /var/log/nginx/secure.domain.com main; location ~* /.well-known { proxy_pass http://letsencrypt.validation.backend.com$request_uri; } if ($sslproxy_protocol = "http") { return 301 https://$host$request_uri; } location / { expires 10m; etag off; proxy_ignore_client_abort on; proxy_intercept_errors on; proxy_next_upstream error timeout invalid_header; proxy_ignore_headers Set-Cookie Vary X-Accel-Expires Expires Cache-Control; more_clear_headers Set-Cookie Cookie Upgrade; proxy_cache one; proxy_cache_min_uses 1; proxy_cache_lock off; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache_valid 200 10m; proxy_cache_valid any 1m; proxy_cache_revalidate on; proxy_ssl_server_name on; include /etc/nginx/server.conf; proxy_set_header Host backend-host.com; proxy_cache_key "http://backend-host.com-1-$request_uri"; proxy_pass http://backend-host.com$request_uri; proxy_redirect off; } } Thank you in advance!
Best Regards, Lucas Rolff _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 8 06:36:59 2018 From: nginx-forum at forum.nginx.org (shivramg94) Date: Thu, 08 Mar 2018 01:36:59 -0500 Subject: Make nginx ignore unresolvable upstream server host names during reload or boot up Message-ID: <3b4f0e2d72be58ae629d27c461e01234.NginxMailingListEnglish@forum.nginx.org> Hi, I have multiple upstream servers configured in an upstream block in my nginx configuration. upstream example2 { server example2.service.example.com:8001; server example1.service.example.com:8002; } server { listen 80; server_name example2.com; location / { proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://example2/; } } When I try to reload Nginx and at that time if one of my upstream servers (say example2.service.example.com) is not DNS resolvable, then the reload fails with an error "host not found in upstream". Is there any way we can ask nginx to ignore such unresolvable host names or rather configure Nginx to resolve these upstream server host names at run time instead of resolving it during the boot up or reload process? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278968,278968#msg-278968 From francis at daoine.org Thu Mar 8 08:43:40 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Mar 2018 08:43:40 +0000 Subject: location blocks, and if conditions in server context In-Reply-To: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com> References: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com> Message-ID: <20180308084340.GK3280@daoine.org> On Wed, Mar 07, 2018 at 04:55:15PM +0000, Lucas Rolff wrote: Hi there, > This means I have something like: > > 1: location ~* /.well-known > 2: if condition doing redirect if protocol is http > 3: location / > 4: location /api > 5: location /test > > All my templates include 1 to 3, and *might* have additional locations. > My issue is - because of this if condition that does the redirect to https - it also applies to my location ~* /.well-known - thus causing a redirect, and I want to prevent this, since it breaks the Let's Encrypt validation (they do not accept 301 redirects). > Is there a smart way without adding too much complexity, which is still super-fast (I know if is evil) ? As phrased, I think the short answer to your question is "no". However... You optionally redirect things from http to https. Is that "you want to redirect *everything* from http to https, apart from the letsencrypt thing"? If so, you could potentially have just one server { listen 80; location / { return 301 https://$host$uri; } location /.well-known/ { proxy_pass http://letsencrypt.validation.backend.com; } } and a bunch of server { listen 443; } blocks. Or: you use $sslproxy_protocol. Where does that come from? If it is a thing that you create to decide whether or not to redirect to https, then could you include a check for whether the request starts with /.well-known/, and if so set it to something other than "http"? f -- Francis Daly francis at daoine.org
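To flesh out the first suggestion above a little: a single catch-all plain-HTTP server can own the ACME path for every domain while redirecting everything else, so the per-domain templates only need a listen 443 server. This is a minimal sketch reusing the hypothetical names already used in this thread, and using $request_uri rather than $uri so the query string survives the redirect:

    server {
        listen 80 default_server;
        server_name _;

        # Let's Encrypt validation must keep working over plain HTTP
        location /.well-known/ {
            proxy_pass http://letsencrypt.validation.backend.com;
        }

        # everything else is sent to HTTPS
        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl http2;
        server_name secure.domain.com;
        # certificates, caching and proxy_pass as in the original example
    }

Domains that should not be redirected would simply keep their own "listen 80" inside their 443 server block, as the follow-up message below describes.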
From lucas at lucasrolff.com Thu Mar 8 08:57:29 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 8 Mar 2018 08:57:29 +0000 Subject: location blocks, and if conditions in server context In-Reply-To: <20180308084340.GK3280@daoine.org> References: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com> <20180308084340.GK3280@daoine.org> Message-ID: <220270CB-4121-4B5C-A57F-D62E1E244236@lucasrolff.com> Hi Francis, I indeed thought about having a separate server {} block in case there's the http to https redirect for a specific domain. Since it depends on the domain, I can't make a general one to match everything. > Or: you use $sslproxy_protocol. Where does that come from? $sslproxy_protocol is a simple map doing: map $https $sslproxy_protocol { default "http"; SSL "https"; on "https"; } Best Regards, Lucas Rolff On 08/03/2018, 09.44, "nginx on behalf of Francis Daly" wrote: On Wed, Mar 07, 2018 at 04:55:15PM +0000, Lucas Rolff wrote: Hi there, > This means I have something like: > > 1: location ~* /.well-known > 2: if condition doing redirect if protocol is http > 3: location / > 4: location /api > 5: location /test > > All my templates include 1 to 3, and *might* have additional locations. > My issue is - because of this if condition that does the redirect to https - it also applies to my location ~* /.well-known - thus causing a redirect, and I want to prevent this, since it breaks the Let's Encrypt validation (they do not accept 301 redirects). > Is there a smart way without adding too much complexity, which is still super-fast (I know if is evil) ? As phrased, I think the short answer to your question is "no". However... You optionally redirect things from http to https. Is that "you want to redirect *everything* from http to https, apart from the letsencrypt thing"? If so, you could potentially have just one server { listen 80; location / { return 301 https://$host$uri; } location /.well-known/ { proxy_pass http://letsencrypt.validation.backend.com; } } and a bunch of server { listen 443; } blocks. Or: you use $sslproxy_protocol. Where does that come from? If it is a thing that you create to decide whether or not to redirect to https, then could you include a check for whether the request starts with /.well-known/, and if so set it to something other than "http"? f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Mar 8 10:40:00 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Thu, 08 Mar 2018 05:40:00 -0500 Subject: newbie: nginx rtmp module In-Reply-To: References: Message-ID: <4abcfd7ce31c77f1f9791c0fcae43ef6.NginxMailingListEnglish@forum.nginx.org> thank you for that.
------------------------ # nginx -V nginx version: nginx/1.10.2 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E' ---------------------------- hmmm...can't see rtmp in there anywhere. I'm running nginx on a centos 6 vps and it's taken me ages to get the system up and running properly, so it's a case of 'if it ain't broke don't fix it'. Although you've got me thinking now... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278971#msg-278971
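Since nginx -V writes to stderr, grepping its output needs a redirect; a quick way to check an installed binary for any module (rtmp here is just the example) might look like:

    nginx -V 2>&1 | grep -o rtmp || echo "no rtmp in this build"

An empty grep result, and the fallback message, confirms what the output above already shows: the packaged CentOS binary was built without the third-party rtmp module.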
From abiliojr at gmail.com Thu Mar 8 11:16:50 2018 From: abiliojr at gmail.com (Abilio Marques) Date: Thu, 8 Mar 2018 12:16:50 +0100 Subject: ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session Message-ID: Using NGINX 1.12.2 on MIPS (haven't tested on x86), if I set: ssl_session_cache shared:SSL:1m; # it also fails with 10m And the client reestablishes the connection, it gets: net::ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session. Has anyone seen anything like this? More detail: This was tested on 1.12.2, on a MIPS CPU, using OpenSSL 1.0.2j, and built by gcc 4.8.3 (OpenWrt/Linaro GCC 4.8-2014.04 r47070). Interesting portion of my configuration file: server { listen 443 ssl; ssl_certificate /etc/ssl/certs/bridge.cert.pem; ssl_certificate_key /etc/ssl/private/bridge.key.pem; ssl_protocols TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256; ssl_ecdh_curve prime256v1; ssl_session_timeout 24h; ssl_session_tickets on; ssl_session_cache shared:SSL:1m; # set to 10m, still fails, remove, the problem seems to disappear keepalive_timeout 1s; # reduced during troubleshooting to make it trigger easily keepalive_requests 1; # reduced during troubleshooting to make it trigger easily include apiv1.conf; # where all the location rules are } -------------- next part -------------- An HTML attachment was scrubbed... URL: From smntov at gmail.com Thu Mar 8 11:48:41 2018 From: smntov at gmail.com (ST) Date: Thu, 08 Mar 2018 13:48:41 +0200 Subject: nginx + php-fpm: REQUEST_URI disappears for files that end with .php Message-ID: <1520509721.1782.72.camel@gmail.com> Hello, I have the following nginx + php-fpm configuration, but for some reason files that end with .php miss REQUEST_URI when they arrive to php-fpm. For instance: https://n.example.com/audio/radio/ -> array(1) { ["REQUEST_URI"]=> string(15) "/audio/radio/" } https://n.example.com/rus_example.html -> array(1) { ["REQUEST_URI"]=> string(15) "rus_example.html" } https://n.example.com/rus_example.php -> array(0) { } What is wrong? Thank you! Here is my configuration: location / { try_files $uri $uri/ @netcat-rewrite; } location @netcat-rewrite { rewrite ^/(.*)$ /netcat/require/e404.php?REQUEST_URI=$1 last; } error_page 404 = /netcat/require/e404.php; location ~ \.php$ { if ($args ~ "netcat_files/") { expires 7d; add_header Cache-Control "public"; } fastcgi_split_path_info ^(.+\.php)(/.+)$; try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param DOCUMENT_URI $document_uri; include fastcgi_params; } PHP-FPM log: no .php file: 08/Mar/2018:13:44:13 +0200 "GET /netcat/require/e404.php?REQUEST_URI=audio/radio/" 200 .php file: 08/Mar/2018:13:44:14 +0200 "GET /netcat/require/e404.php" 404 From maxima078 at gmail.com Thu Mar 8 21:34:36 2018 From: maxima078 at gmail.com (max) Date: Thu, 8 Mar 2018 22:34:36 +0100 Subject: proxy_pass and trailing / decode uri Message-ID: Hi, Sorry if it was already asked but I'd like to know if the only way to prevent Nginx from decoding uri while using proxy_pass is: https://stackoverflow.com/a/37584656/3515745 Here is my (simplified) conf: server { server_name domain1.com; location / { proxy_pass http://127.0.0.1:81; } location /api { proxy_pass http://127.0.0.1:82/; } } Location "/" is perfectly working. My problem is that "/api" location will decode special characters.
To illustrate my problem: http://domain1.com/image1.png => HTTP 200 http://domain1.com/*api*/resource1.png => HTTP 200 http://domain1.com/image1+2.png => HTTP 200 http://domain1.com/*api*/resource1+2.png => HTTP 404 http://domain1.com/image1 2.png => HTTP 200 http://domain1.com/*api*/resource1 2.png => HTTP 404 I would like to know how to make my "/api" location respond like "/" without decoding %? This solution https://stackoverflow.com/a/37584656/3515745 seems to be just a workaround. Thanks for any hints ! Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Mar 8 23:31:57 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 8 Mar 2018 23:31:57 +0000 Subject: location blocks, and if conditions in server context In-Reply-To: <220270CB-4121-4B5C-A57F-D62E1E244236@lucasrolff.com> References: <5E6CC160-B698-4C44-A9AD-473F62F4034E@lucasrolff.com> <20180308084340.GK3280@daoine.org> <220270CB-4121-4B5C-A57F-D62E1E244236@lucasrolff.com> Message-ID: <20180308233157.GL3280@daoine.org> On Thu, Mar 08, 2018 at 08:57:29AM +0000, Lucas Rolff wrote: Hi there, > I indeed thought about having a separate server {} block in case there's the http to https redirect for a specific domain. > Since it depends on the domain, I can't make a general one to match everything. So, if I read this correctly, the new "requirement statement" is: some domains want to redirect everything (apart from the letsencrypt piece) from http to https; and some domains do not want to redirect anything from http to https. In that case, the one server with "listen 80 default;" and the two locations, one with "return 301" and the other with "proxy_pass"; plus the multiple servers with "listen 443" should Just Work. If you do want the to-https redirect for this domain, do not add "listen 80" in the 443 server. If you do not want the to-https redirect for that domain, do add "listen 80" in the 443 server. Am I missing something? > > Or: you use $sslproxy_protocol. Where does that come from? > > $sslproxy_protocol is a simple map doing: > > map $https $sslproxy_protocol { > default "http"; > SSL "https"; > on "https"; > } Because I don't know what else you use that variable for, perhaps you could make a new variable $redirect_to_https, like so (untested): map $https$uri $redirect_to_https { default "yes"; ~^SSL "no"; ~^on "no"; ~^/.well-known/ "no"; } and then redirect based on the value of that variable, where it might matter. (I presume that $https is empty in http-mode, per http://nginx.org/r/$https) I prefer the first solution, without the extra variable-and-if; but it's not my server. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 9 00:01:16 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 9 Mar 2018 00:01:16 +0000 Subject: nginx + php-fpm: REQUEST_URI disappears for files that end with .php In-Reply-To: <1520509721.1782.72.camel@gmail.com> References: <1520509721.1782.72.camel@gmail.com> Message-ID: <20180309000116.GM3280@daoine.org> On Thu, Mar 08, 2018 at 01:48:41PM +0200, ST wrote: Hi there, * What request do you make? (e.g. /rus_example.php) * Does the matching file exist on the filesystem (e.g. /usr/local/nginx/html/rus_example.php)? * If yes - what response do you want, and what response do you get? * If no - what response do you want, and what response do you get?
> error_page 404 = /netcat/require/e404.php; > > location ~ \.php$ { > if ($args ~ "netcat_files/") { > expires 7d; > add_header Cache-Control "public"; > } > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > try_files $uri =404; If /usr/local/nginx/html/rus_example.php does not exist, that line says "return 404", which the error_page line turns into a request for /netcat/require/e404.php f -- Francis Daly francis at daoine.org From smntov at gmail.com Fri Mar 9 09:40:46 2018 From: smntov at gmail.com (ST) Date: Fri, 09 Mar 2018 11:40:46 +0200 Subject: nginx + php-fpm: REQUEST_URI disappears for files that end with .php In-Reply-To: <20180309000116.GM3280@daoine.org> References: <1520509721.1782.72.camel@gmail.com> <20180309000116.GM3280@daoine.org> Message-ID: <1520588446.1782.99.camel@gmail.com> Hi Francis, you are correct. You explained exactly what happens - thank you! On Fri, 2018-03-09 at 00:01 +0000, Francis Daly wrote: > On Thu, Mar 08, 2018 at 01:48:41PM +0200, ST wrote: > > Hi there, > > * What request do you make? (e.g. /rus_example.php) > * Does the matching file exist on the filesystem > (e.g. /usr/local/nginx/html/rus_example.php)? > * If yes - what response do you want, and what response do you get? > * If no - what response do you want, and what response do you get? > > > error_page 404 = /netcat/require/e404.php; > > > > location ~ \.php$ { > > if ($args ~ "netcat_files/") { > > expires 7d; > > add_header Cache-Control "public"; > > } > > > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > try_files $uri =404; > > If /usr/local/nginx/html/rus_example.php does not exist, that line > says "return 404", which the error_page line turns into a request for > /netcat/require/e404.php > > f From edigarov at qarea.com Fri Mar 9 14:06:06 2018 From: edigarov at qarea.com (Gregory Edigarov) Date: Fri, 9 Mar 2018 16:06:06 +0200 Subject: How to stop nginx from adding a trailing slash Message-ID: Hello, somesite.com/blog is 301 redirected to somesite.com/blog/ by nginx. this is not the behaviour i want. is there any way to stop it from doing so? Thank you. From igor at sysoev.ru Fri Mar 9 14:52:31 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 9 Mar 2018 17:52:31 +0300 Subject: How to stop nginx from adding a trailing slash In-Reply-To: References: Message-ID: <43AE84AC-48CD-4FBC-8AF0-4E2E053D6755@sysoev.ru> > On 9 Mar 2018, at 17:06, Gregory Edigarov wrote: > > Hello, > > somesite.com/blog is 301 redirected to somesite.com/blog/ by nginx. > > this is not the behaviour i want. > > is there any way to stop it from doing so? A special location for "/blog": location = /blog { ... } location /blog/ { ... } -- Igor Sysoev http://nginx.com From edigarov at qarea.com Fri Mar 9 16:17:58 2018 From: edigarov at qarea.com (Gregory Edigarov) Date: Fri, 9 Mar 2018 18:17:58 +0200 Subject: How to stop nginx from adding a trailing slash In-Reply-To: <43AE84AC-48CD-4FBC-8AF0-4E2E053D6755@sysoev.ru> References: <43AE84AC-48CD-4FBC-8AF0-4E2E053D6755@sysoev.ru> Message-ID: <6b5a0664-9e60-00a2-cffa-1716234fe971@qarea.com> On 09.03.18 16:52, Igor Sysoev wrote: >> On 9 Mar 2018, at 17:06, Gregory Edigarov wrote: >> >> Hello, >> >> somesite.com/blog is 301 redirected to somesite.com/blog/ by nginx. >> >> this is not the behaviour i want. >> >> is there any way to stop it from doing so? > A special location for "/blog": > > location = /blog { > ... > } > > location /blog/ { > ... > } > > sorry, doesn't work... It somehow seems to me, some time ago there was an option, to switch the related behaviour. 
but maybe it is my false memory. From ianmcgraw3 at gmail.com Fri Mar 9 16:50:03 2018 From: ianmcgraw3 at gmail.com (Ian McGraw) Date: Fri, 9 Mar 2018 11:50:03 -0500 Subject: Planned Features for gRPC Proxy Message-ID: Hi all, I am new to the nginx community so my apologies if this is not the correct place for this kind of question. I see gRPC proxy is in progress for 1.13: https://trac.nginx.org/nginx/roadmap Does anyone know if the proxy will support host/path based routing for gRPC calls? I have a use case in Kubernetes where I am trying to expose many gRPC microservices through a single nginx ingress controller. I'm trying to find out if context based routing will be supported so I can set up rules to be able to proxy to different services. Thanks for the help, -Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 9 16:59:30 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Fri, 09 Mar 2018 11:59:30 -0500 Subject: newbie: nginx rtmp module In-Reply-To: References: Message-ID: <041656c5beaa7885f943e1d2bbaec517.NginxMailingListEnglish@forum.nginx.org> I've resigned myself to the fact that there is no rtmp module here which leads me to the obvious question: is it possible to install an rtmp module into this 'yum install' version of nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278984#msg-278984 From igor at sysoev.ru Fri Mar 9 20:49:43 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 9 Mar 2018 23:49:43 +0300 Subject: How to stop nginx from adding a trailing slash In-Reply-To: <6b5a0664-9e60-00a2-cffa-1716234fe971@qarea.com> References: <43AE84AC-48CD-4FBC-8AF0-4E2E053D6755@sysoev.ru> <6b5a0664-9e60-00a2-cffa-1716234fe971@qarea.com> Message-ID: <7167B926-5768-450A-B55F-2660A7038558@sysoev.ru> > On 9 Mar 2018, at 19:17, Gregory Edigarov wrote: > > On 09.03.18 16:52, Igor Sysoev wrote: >>> On 9 Mar 2018, at 17:06, Gregory Edigarov wrote: >>> >>> Hello, >>> >>> somesite.com/blog is 301 redirected to somesite.com/blog/ by nginx. >>> >>> this is not the behaviour i want. >>> >>> is there any way to stop it from doing so? >> A special location for "/blog": >> >> location = /blog { >> ... >> } >> >> location /blog/ { >> ... >> } >> >> > sorry, doesn't work... > > It somehow seems to me, some time ago there was an option, to switch the related behaviour. > but maybe it is my false memory. It should work. A 301 redirect is usually cached by the browser for a long time. Try curl or another browser, or try clearing the cache. -- Igor Sysoev http://nginx.com From lists at lazygranch.com Fri Mar 9 22:14:44 2018 From: lists at lazygranch.com (Gary) Date: Fri, 09 Mar 2018 14:14:44 -0800 Subject: newbie: nginx rtmp module In-Reply-To: <041656c5beaa7885f943e1d2bbaec517.NginxMailingListEnglish@forum.nginx.org> Message-ID: I believe you need to compile with the appropriate module. If this was FreeBSD, no problem. Just use ports. (Of course FreeBSD has many other problems.) With CentOS, you will need to compile the code and use all the "with" options for each module you want to install. Potentially you will need to set up systemd to run your version of Nginx. It wouldn't surprise me that this is complicated enough that someone has already done a write-up. Regarding updating to CentOS 7, if you are on a VPS, you can easily image your CentOS 6 installation should you get bogged down in the update. Or you set up a second VPS on a fresh install of CentOS 7 with a different domain name.
That is how I currently do major changes. However I'm thinking I might set up DNS so that www1.example.com goes to the experimental server. That would save the cost of owning an additional domain plus make it easier to deal with Let's Encrypt. I found this: https://github.com/thonatos/notes/blob/master/backend-notes/install-and-conf-nginx-with-rtmp-on-Centos-7-64.md This isn't up to date, but it is a start. Looking at the Nginx download page, they don't use git for source, but there is a mirror of sorts of the Nginx source on github. I would do that for both the source and the module. But the deal with CentOS is you shouldn't be compiling code unless there is no other alternative. That is, the idea is you do "yum update" and everything is secure. I don't go so far as to crontab the process, but some do. The point here being maybe there is some other way to do rtmp without the module. ----- Original Message ----- From: nginx-forum at forum.nginx.org Sent: March 9, 2018 8:59 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: newbie: nginx rtmp module I've resigned myself to the fact that there is no rtmp module here which leads me to the obvious question: is it possible to install an rtmp module into this 'yum install' version of nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278984#msg-278984 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Sat Mar 10 03:36:37 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 9 Mar 2018 19:36:37 -0800 Subject: newbie: nginx rtmp module In-Reply-To: <041656c5beaa7885f943e1d2bbaec517.NginxMailingListEnglish@forum.nginx.org> References: <041656c5beaa7885f943e1d2bbaec517.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180309193637.4dd83281.lists@lazygranch.com> I had a few neurons fire. I forgot nginx can load dynamic modules. https://www.nginx.com/blog/nginx-dynamic-modules-how-they-work/ I haven't done this myself, so you are on your own at this point. On Fri, 09 Mar 2018 11:59:30 -0500 "neuronetv" wrote: > I've resigned myself to the fact that there is no rtmp module here > which leads me to the obvious question: > > is it possible to install an rtmp module into this 'yum install' > version of nginx? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,278950,278984#msg-278984 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Mar 10 09:07:47 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Sat, 10 Mar 2018 04:07:47 -0500 Subject: newbie: nginx rtmp module In-Reply-To: <20180309193637.4dd83281.lists@lazygranch.com> References: <20180309193637.4dd83281.lists@lazygranch.com> Message-ID: <3f2d40ec36457197d660d6efb39349b0.NginxMailingListEnglish@forum.nginx.org> thanks again for your feedback on this thread and I see now I will have to strip out the 'yum install' and re-compile nginx like I did before. I was able to configure in the rtmp module using that method and video streaming worked. The 'aaaarrrgh' bit is just working out how to get the compiled install to serve multiple domains. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278989#msg-278989
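On the multiple-domains point: once the binary itself has rtmp support, serving several HTTP domains is just name-based virtual hosting, independent of how nginx was built. A minimal sketch with made-up names:

    server {
        listen 80;
        server_name site-one.example.com;
        root /var/www/site-one;
    }

    server {
        listen 80;
        server_name site-two.example.com;
        root /var/www/site-two;
    }

nginx picks the server block whose server_name matches the request's Host header, so one compiled install can carry as many domains as needed.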
From lists at lazygranch.com Sat Mar 10 16:29:00 2018 From: lists at lazygranch.com (Gary) Date: Sat, 10 Mar 2018 08:29:00 -0800 Subject: newbie: nginx rtmp module In-Reply-To: <3f2d40ec36457197d660d6efb39349b0.NginxMailingListEnglish@forum.nginx.org> Message-ID: I believe you shouldn't have to compile Nginx but use the disty binary. Then you do the dynamic load trick. This way you can do "yum update" periodically without having to compile Nginx, but rather just download the latest binary. However don't break what is working! ----- Original Message ----- From: nginx-forum at forum.nginx.org Sent: March 10, 2018 1:08 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: newbie: nginx rtmp module thanks again for your feedback on this thread and I see now I will have to strip out the 'yum install' and re-compile nginx like I did before. I was able to configure in the rtmp module using that method and video streaming worked. The 'aaaarrrgh' bit is just working out how to get the compiled install to serve multiple domains. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278989#msg-278989 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Mar 10 16:57:49 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Sat, 10 Mar 2018 11:57:49 -0500 Subject: newbie: nginx rtmp module In-Reply-To: References: Message-ID: <84e591153922029ebddb849c3d010ca6.NginxMailingListEnglish@forum.nginx.org> Hi, sorry but I'm not quite clear. You said 'you shouldn't have to compile Nginx but use the disty binary'. I'm not sure what the disty binary is. Do you mean installing from the nginx repo at http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm? or from the latest tarball? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278991#msg-278991
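For anyone following along: the repo RPM plus a separately built dynamic module is the usual middle ground here. A rough sketch, untested and with illustrative version numbers - the source tree has to match the exact version of the installed binary (check nginx -v), and the module has to support being built dynamically:

    # fetch matching nginx source plus the rtmp module
    wget http://nginx.org/download/nginx-1.12.2.tar.gz
    tar xzf nginx-1.12.2.tar.gz
    git clone https://github.com/arut/nginx-rtmp-module.git
    cd nginx-1.12.2

    # build only the module, not the whole server
    ./configure --with-compat --add-dynamic-module=../nginx-rtmp-module
    make modules
    cp objs/ngx_rtmp_module.so /etc/nginx/modules/

Then nginx.conf loads it at the top level:

    load_module modules/ngx_rtmp_module.so;

The --with-compat flag (available since 1.11.5) is what makes a separately compiled module binary-compatible with the packaged build.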
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Sat Mar 10 17:23:58 2018 From: lists at lazygranch.com (Gary) Date: Sat, 10 Mar 2018 09:23:58 -0800 Subject: newbie: nginx rtmp module In-Reply-To: <84e591153922029ebddb849c3d010ca6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Yum install nginx gets you the binary. I'm not really sure how the dynamic module load works, but my understanding (or perhaps lack thereof) means you supplement the precompiled binary with the module. Solve your other problems first, then you can investigate this if you want to beat your head against the wall some more. Once you figure the dynamic module load, you could do a post about how it works. I often do this just so I can find my old post if I have trouble doing the same thing a year or two later. Presumably once you figure this out, your transition to CentOS 7 will be easier. ----- Original Message ----- From: nginx-forum at forum.nginx.org Sent: March 10, 2018 8:58 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: newbie: nginx rtmp module Hi, sorry but I'm not quite clear. You said 'you shouldn't have to compile Nginx but use the disty binary'. I'm not sure what the disty binary is. Do you mean installing from the nginx repo at http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm? or from the latest tarball? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278991#msg-278991 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Mar 10 18:05:40 2018 From: nginx-forum at forum.nginx.org (neuronetv) Date: Sat, 10 Mar 2018 13:05:40 -0500 Subject: newbie: nginx rtmp module In-Reply-To: References: Message-ID: yes I've uninstalled nginx and reinstalled, this time using the nginx repo and it gave me a newer version but still no rtmp module to be seen. Fortunately I've solved my other problems but nginx is no good to me without rtmp as I have to do video streaming, this is my whole reason for migrating from apache to nginx. I'm struggling to understand the dynamic modules page (thanks for that link) and I'm not even sure it applies to a 'yum install' version as it refers to configuring the module into nginx during the build. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278993#msg-278993 From mdounin at mdounin.ru Sun Mar 11 22:26:47 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Mar 2018 01:26:47 +0300 Subject: proxy_pass and trailing / decode uri In-Reply-To: References: Message-ID: <20180311222647.GZ89840@mdounin.ru> Hello! On Thu, Mar 08, 2018 at 10:34:36PM +0100, max wrote: > Sorry if it was already asked but I'd like to know if the only way to > prevent Nginx from decoding uri while using proxy_pass is: > https://stackoverflow.com/a/37584656/3515745 > > Here is my (simplified) conf: > > server { > server_name domain1.com; > > location / { > proxy_pass http://127.0.0.1:81; > > } > > location /api { > proxy_pass http://127.0.0.1:82/; > > } > } > > Location "/" is perfectly working. My problem is that "/api" location will > decode special character. When you want nginx to replace matching part of the URI with "/", it will do so on the decoded/normalized URI, and will re-encode special characters in what's left. If you want nginx to preserve original URI as sent by the client, consider using proxy_pass without the URI part. That is, instead of proxy_pass http://127.0.0.1:82/; use proxy_pass http://127.0.0.1:82; Note no trailing "/". This way the original URI as sent by the client will be preserved without any modifications. -- Maxim Dounin http://mdounin.ru/
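To make the two behaviours concrete, compare what the upstream receives in each case. This is a sketch based on the thread's simplified config; only one of the two /api blocks can exist at a time, and the %2B expansion is my reading of the re-encoding step, not something stated above:

    # variant A: URI part present - prefix replaced, URI decoded and re-encoded
    location /api {
        # GET /api/resource1%2B2.png  ->  upstream sees GET /resource1+2.png
        proxy_pass http://127.0.0.1:82/;
    }

    # variant B: no URI part - the request line is forwarded as received
    location /api {
        # GET /api/resource1%2B2.png  ->  upstream sees GET /api/resource1%2B2.png
        proxy_pass http://127.0.0.1:82;
    }

A backend that treats a bare "+" as an encoded space would then explain the 404s reported earlier for variant A.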
From mdounin at mdounin.ru Sun Mar 11 23:10:57 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Mar 2018 02:10:57 +0300 Subject: Planned Features for gRPC Proxy In-Reply-To: References: Message-ID: <20180311231057.GA89840@mdounin.ru> Hello! On Fri, Mar 09, 2018 at 11:50:03AM -0500, Ian McGraw wrote: > Hi all, > > I am new to the nginx community so my apologies if this is not > the correct place for this kind of question. > > I see gRPC proxy is in progress for 1.13: > https://trac.nginx.org/nginx/roadmap > > Does anyone know if the proxy will support host/path based > routing for gRPC calls? I have a use case in Kubernetes where I > am trying to expose many gRPC microservices through a single > nginx ingress controller. I'm trying to find out if context > based routing will be supported so I can set up rules to be able > to proxy to different services. Yes, it will be possible to proxy to different backend servers based on normal server and location matching, much like it is possible with proxy_pass and fastcgi_pass. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Mar 11 23:57:03 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Mar 2018 02:57:03 +0300 Subject: ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session In-Reply-To: References: Message-ID: <20180311235703.GC89840@mdounin.ru> Hello! On Thu, Mar 08, 2018 at 12:16:50PM +0100, Abilio Marques wrote: > Using NGINX 1.12.2 on MIPS (haven't tested on x86), if I set: > > ssl_session_cache shared:SSL:1m; # it also fails with 10m > > > And the client reestablishes the connection, it > gets: net::ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session. > > Has anyone seen anything like this? > > > More detail: > > This was tested on 1.12.2, on a MIPS CPU, using OpenSSL 1.0.2j, and built > by gcc 4.8.3 (OpenWrt/Linaro GCC 4.8-2014.04 r47070). This certainly works on x86, so it must be something MIPS-specific or something specific to your particular build. Last time I saw OpenWrt/Linaro nginx builds, they were compiled using buggy 3rd party crossbuild patches, and didn't work due to this (see https://trac.nginx.org/nginx/ticket/899). You may want to check your build before trying to do anything else. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Mar 12 02:56:44 2018 From: nginx-forum at forum.nginx.org (mslee) Date: Sun, 11 Mar 2018 22:56:44 -0400 Subject: How can I configure proxy multiple hosts for a domain? Message-ID: <83c0c481efbf2132223f3423f9feac5f.NginxMailingListEnglish@forum.nginx.org> It's not load balancing like round robin, least conn, ip hash. I want to know how to proxy simultaneously to the registered proxy host for one domain. I searched for this method, but all documents were about load balancing. Please help me if you are aware of this problem. Thank you in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278997,278997#msg-278997 From maxima078 at gmail.com Mon Mar 12 08:55:15 2018 From: maxima078 at gmail.com (max) Date: Mon, 12 Mar 2018 09:55:15 +0100 Subject: proxy_pass and trailing / decode uri In-Reply-To: <20180311222647.GZ89840@mdounin.ru> References: <20180311222647.GZ89840@mdounin.ru> Message-ID: Hi, When you want nginx to replace matching part of the URI with "/", > it will do so on the decoded/normalized URI, and will re-encode > special characters in what's left. > > If you want nginx to preserve original URI as sent by the client, > consider using proxy_pass without the URI part. That is, > instead of > > proxy_pass http://127.0.0.1:82/; > > use > > proxy_pass http://127.0.0.1:82; > > Note no trailing "/". This way the original URI as sent by the > client will be preserved without any modifications. > Thank you for your answer but it is not correct for location different than '/'. With your proposal, targeting http://domain1.com/api/foo/bar, socket on port 82 receives: /api/foo/bar. I guess the only way to remove the /api part is "rewrite" and involves re-encoding... Max. -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxima078 at gmail.com Mon Mar 12 08:59:01 2018 From: maxima078 at gmail.com (max) Date: Mon, 12 Mar 2018 09:59:01 +0100 Subject: proxy_pass and trailing / decode uri In-Reply-To: References: <20180311222647.GZ89840@mdounin.ru> Message-ID: Sorry for double post: > > I guess the only way to remove the /api part is "rewrite" and involves > re-encoding... => I guess the only way to remove the /api without re-encoding URI is "rewrite" ... Max
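For completeness, the workaround in the Stack Overflow answer linked earlier takes roughly this shape - a sketch, untested here, and note that query strings need separate handling since $uri does not include them:

    location /api/ {
        # swap the normalized URI for the raw one from the request line
        rewrite ^ $request_uri;
        # strip the /api prefix from that raw string; "break" ends rewriting
        rewrite ^/api(/.*) $1 break;
        return 400;   # only reached if the second rewrite did not match
        # with a variable in proxy_pass, the URI is sent to the upstream as-is
        proxy_pass http://127.0.0.1:82$uri;
    }

Because a rewrite replacement is never unescaped, $uri still holds the %-encoded bytes when proxy_pass sends it.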
2018-03-12 9:55 GMT+01:00 max : > Hi, > > When you want nginx to replace matching part of the URI with "/", >> it will do so on the decoded/normalized URI, and will re-encode >> special characters in what's left. >> >> If you want nginx to preserve original URI as sent by the client, >> consider using proxy_pass without the URI part. That is, >> instead of >> >> proxy_pass http://127.0.0.1:82/; >> >> use >> >> proxy_pass http://127.0.0.1:82; >> >> Note no trailing "/". This way the original URI as sent by the >> client will be preserved without any modifications. >> > > Thank you for your answer but it is not correct for location different > than '/'. With your proposal, targeting http://domain1.com/api/foo/bar, > socket on port 82 receives: /api/foo/bar. I guess the only way to remove > the /api part is "rewrite" and involves re-encoding... > > Max. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arozyev at nginx.com Mon Mar 12 09:52:51 2018 From: arozyev at nginx.com (Aziz Rozyev) Date: Mon, 12 Mar 2018 12:52:51 +0300 Subject: How can I configure proxy multiple hosts for a domain? In-Reply-To: <83c0c481efbf2132223f3423f9feac5f.NginxMailingListEnglish@forum.nginx.org> References: <83c0c481efbf2132223f3423f9feac5f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <455A5858-DFA0-420D-85FA-D486E46B818E@nginx.com> Hi, perhaps you should've explained your intention in a bit more detail, maybe with some functional schema. As of my current understanding, you want something like port mirroring to duplicate your network traffic. Anyway, it's out of scope for nginx; search for "port/traffic mirroring". br, Aziz. > On 12 Mar 2018, at 05:56, mslee wrote: > > It's not load balancing like round robin, least conn, ip hash. > I want to know how to proxy simultaneously to the registered proxy host for > one domain. > > I searched for this method, but all documents were about load balancing. > Please help me if you are aware of this problem. > > Thank you in advance. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278997,278997#msg-278997 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Mar 12 10:32:23 2018 From: nginx-forum at forum.nginx.org (Evgenij Krupchenko) Date: Mon, 12 Mar 2018 06:32:23 -0400 Subject: 1.13.9 compile errors Message-ID: i'm building custom nginx binary on debian 9 using latest zlib and openssl sources.
./configure \ --prefix=/nginx \ --sbin-path=/nginx/sbin/nginx-https \ --conf-path=/nginx/conf/https \ --pid-path=/run/nginx-https.pid \ --error-log-path=/log/nginx-https_error.log \ --http-log-path=/log/nginx-https_access.log \ --http-client-body-temp-path=/nginx/tmp/https_client-body \ --http-proxy-temp-path=/nginx/tmp/https_proxy \ --http-fastcgi-temp-path=/nginx/tmp/http_fastcgi \ --user=www \ --group=www \ --without-select_module \ --without-poll_module \ --without-http_ssi_module \ --without-http_userid_module \ --without-http_geo_module \ --without-http_map_module \ --without-http_split_clients_module \ --without-http_referer_module \ --without-http_uwsgi_module \ --without-http_scgi_module \ --without-http_memcached_module \ --without-http_limit_conn_module \ --without-http_limit_req_module \ --without-http_empty_gif_module \ --without-http_browser_module \ --without-http_upstream_hash_module \ --without-http_upstream_ip_hash_module \ --without-http_upstream_least_conn_module \ --without-http_upstream_keepalive_module \ --without-http_upstream_zone_module \ --with-threads \ --with-file-aio \ --with-zlib=/install/zlib-1.2.11 \ --with-openssl=/install/openssl-1.1.1-pre2 \ --with-http_ssl_module in the end of "make" i got this: objs/ngx_modules.o \ -ldl -lpthread -lpthread -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl /install/zlib-1.2.11/libz.a \ -Wl,-E /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a(threads_pthread.o): In function `fork_once_func': threads_pthread.c:(.text+0x16): undefined reference to `pthread_atfork' collect2: error: ld returned 1 exit status objs/Makefile:223: recipe for target 'objs/nginx' failed make[1]: *** [objs/nginx] Error 1 make[1]: Leaving directory '/install/nginx-1.13.9' Makefile:8: recipe for target 'build' failed make: *** [build] Error 2 previous version 1.13.8 and all before were built successfully with the same configure parameters. also, i've found this post: https://www.coldawn.com/compile-nginx-on-centos-7-to-enable-tls13/ as suggested, after "configure" i've modified objs/Makefile: removed the first -lpthread and moved the second -lpthread to the end of the line. in my case it was the line #331: before: -ldl -lpthread -lpthread -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl /install/zlib-1.2.11/libz.a \ after: -ldl -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl /install/zlib-1.2.11/libz.a -lpthread \ and then it builds successfully. and also success when i'm using openssl-1.0.2n in configure parameters. so the problem only occurs in the combination nginx-1.13.9 + openssl-1.1.1-pre2 and my question is: will someone fix this bug in the next 1.13.10, or should we now always edit the makefile before compiling? or is this not a bug and i'm just missing something? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279002,279002#msg-279002 From mdounin at mdounin.ru Mon Mar 12 12:28:15 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 12 Mar 2018 15:28:15 +0300 Subject: proxy_pass and trailing / decode uri In-Reply-To: References: <20180311222647.GZ89840@mdounin.ru> Message-ID: <20180312122815.GD89840@mdounin.ru> Hello!
On Mon, Mar 12, 2018 at 09:55:15AM +0100, max wrote: > > When you want nginx to replace matching part of the URI with "/", > > it will do so on the decoded/normalized URI, and will re-encode > > special characters in what's left. > > > > If you want nginx to preserve original URI as sent by the client, > > consider using proxy_pass without the URI part. That is, > > instead of > > > > proxy_pass http://127.0.0.1:82/; > > > > use > > > > proxy_pass http://127.0.0.1:82; > > > > Note no trailing "/". This way the original URI as sent by the > > client will be preserved without any modifications. > > > > > Thank you for your answer but it is not correct for location different than > > '/'. With your proposal, targeting http://domain1.com/api/foo/bar, socket > > on port 82 receives: /api/foo/bar. I guess the only way to remove the /api > > part is "rewrite" and involves re-encoding... Whether this is correct or not depends on the particular setup - in particular, it depends on what your backend expects as an URI. If your backend is picky about specific forms of encoding, preserving the full original URI might be a much better option than trying to invent hacky workarounds like the one you've linked. Obviously enough, this might either involve re-configuring the backend to accept full original URIs, or hosting things on a dedicated domain. -- Maxim Dounin http://mdounin.ru/
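As an illustration of the "dedicated domain" option: the whole prefix-stripping problem disappears if the API gets its own server block, because the URI can then be forwarded untouched. A sketch with a hypothetical subdomain:

    server {
        server_name api.domain1.com;

        location / {
            # no URI part and no prefix to strip, so nothing is re-encoded
            proxy_pass http://127.0.0.1:82;
        }
    }

The trade-off, as the follow-up below notes, is needing another DNS name (and certificate) instead of a path prefix.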
From maxima078 at gmail.com Mon Mar 12 12:53:16 2018 From: maxima078 at gmail.com (max) Date: Mon, 12 Mar 2018 13:53:16 +0100 Subject: proxy_pass and trailing / decode uri In-Reply-To: <20180312122815.GD89840@mdounin.ru> References: <20180311222647.GZ89840@mdounin.ru> <20180312122815.GD89840@mdounin.ru> Message-ID: > Whether this is correct or not depends on the particular setup - in > particular, it depends on what your backend expects as an URI. If > your backend is picky about specific forms of encoding, preserving > the full original URI might be a much better option than trying to > invent hacky workarounds like the one you've linked. Obviously > enough, this might either involve re-configuring the backend to > accept full original URIs, or hosting things on a dedicated > domain. > Yes, sub-domains would be the best option, but I'm already listening on a sub-domain and do not want to make another sub-level. I cannot configure my backends (several backends) to be able to listen on /api; this is exactly why I needed to use Nginx as a reverse proxy. 2018-03-12 13:28 GMT+01:00 Maxim Dounin : > Hello! > > On Mon, Mar 12, 2018 at 09:55:15AM +0100, max wrote: > > > > When you want nginx to replace matching part of the URI with "/", > > > it will do so on the decoded/normalized URI, and will re-encode > > > special characters in what's left. > > > > > > If you want nginx to preserve original URI as sent by the client, > > > consider using proxy_pass without the URI part. That is, > > > instead of > > > > > > proxy_pass http://127.0.0.1:82/; > > > > > > use > > > > > > proxy_pass http://127.0.0.1:82; > > > > > > Note no trailing "/". This way the original URI as sent by the > > > client will be preserved without any modifications. > > > > > > > > > Thank you for your answer but it is not correct for location different > than > > '/'. With your proposal, targeting http://domain1.com/api/foo/bar, > socket > > on port 82 receives: /api/foo/bar. I guess the only way to remove the > /api > > part is "rewrite" and involves re-encoding... > > Whether this is correct or not depends on the particular setup - in > particular, it depends on what your backend expects as an URI. If > your backend is picky about specific forms of encoding, preserving > the full original URI might be a much better option than trying to > invent hacky workarounds like the one you've linked. Obviously > enough, this might either involve re-configuring the backend to > accept full original URIs, or hosting things on a dedicated > domain. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ianmcgraw3 at gmail.com Mon Mar 12 14:43:55 2018 From: ianmcgraw3 at gmail.com (Ian McGraw) Date: Mon, 12 Mar 2018 10:43:55 -0400 Subject: Planned Features for gRPC Proxy In-Reply-To: <20180311231057.GA89840@mdounin.ru> References: <20180311231057.GA89840@mdounin.ru> Message-ID: <07954F70-5208-47DE-B81F-72C854D04047@gmail.com> Thanks for the reply. My team and I are eagerly awaiting these features. Is there a ticket I can follow to track their status? I wasn't able to find one on the roadmap page. Thanks, -Ian > On Mar 11, 2018, at 7:10 PM, Maxim Dounin wrote: > > Hello! > >> On Fri, Mar 09, 2018 at 11:50:03AM -0500, Ian McGraw wrote: >> >> Hi all, >> >> I am new to the nginx community so my apologies if this is not >> the correct place for this kind of question. >> >> I see gRPC proxy is in progress for 1.13: >> https://trac.nginx.org/nginx/roadmap >> >> Does anyone know if the proxy will support host/path based >> routing for gRPC calls? I have a use case in Kubernetes where I >> am trying to expose many gRPC microservices through a single >> nginx ingress controller. I'm trying to find out if context >> based routing will be supported so I can set up rules to be able >> to proxy to different services. > > Yes, it will be possible to proxy to different backend servers > based on normal server and location matching, much like it is > possible with proxy_pass and fastcgi_pass. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Mar 13 05:51:28 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Mar 2018 08:51:28 +0300 Subject: 1.13.9 compile errors In-Reply-To: References: Message-ID: <20180313055128.GI89840@mdounin.ru> Hello! On Mon, Mar 12, 2018 at 06:32:23AM -0400, Evgenij Krupchenko wrote: [...]
> > also, i've found this post: > https://www.coldawn.com/compile-nginx-on-centos-7-to-enable-tls13/ > as suggested, after "configure" i've modified objs/Makefile: removed the > first -lpthread and the second -lpthread moved to the end of the line. in my > case it was the line #331: > > before: > -ldl -lpthread -lpthread -lcrypt -lpcre > /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a > /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl > /install/zlib-1.2.11/libz.a \ > > after: > -ldl -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a > /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl > /install/zlib-1.2.11/libz.a -lpthread \ > > and then it builds successfully. > and also success when i'm using openssl-1.0.2n in configure parameters. > > so the problem only occurs in combination nginx-1.13.9 + openssl-1.1.1-pre2 > > and my question is: someone would fix bug this in the next 1.13.10 or should > we now always edit the makefile before compiling? > or this is not a bug and i'm just missing something? The problem is that OpenSSL 1.1.1-pre2 requires -lpthread for static linking on Linux. This wasn't the case with previous OpenSSL versions, hence nginx doesn't try to provide -lpthread for it. The same problem will occur with any nginx version when trying to compile with OpenSSL 1.1.1-pre2. The following patch should fix this, please test if it works for you: # HG changeset patch # User Maxim Dounin # Date 1520919437 -10800 # Tue Mar 13 08:37:17 2018 +0300 # Node ID 649427794a74c74eca80c942477d893678fb6036 # Parent 0b1eb40de6da32196b21d1ed086f7030c10b40d2 Configure: fixed static compilation with OpenSSL 1.1.1-pre2. OpenSSL now uses pthread_atfork(), and this requires -lpthread on Linux. Introduced NGX_LIBPTHREAD to add it as appropriate, similar to existing NGX_LIBDL. diff -r 0b1eb40de6da -r 649427794a74 auto/lib/openssl/conf --- a/auto/lib/openssl/conf Wed Mar 07 18:28:12 2018 +0300 +++ b/auto/lib/openssl/conf Tue Mar 13 08:37:17 2018 +0300 @@ -41,6 +41,7 @@ CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libssl.a" CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libcrypto.a" CORE_LIBS="$CORE_LIBS $NGX_LIBDL" + CORE_LIBS="$CORE_LIBS $NGX_LIBPTHREAD" if [ "$NGX_PLATFORM" = win32 ]; then CORE_LIBS="$CORE_LIBS -lgdi32 -lcrypt32 -lws2_32" @@ -59,7 +60,7 @@ ngx_feature_run=no ngx_feature_incs="#include " ngx_feature_path= - ngx_feature_libs="-lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-lssl -lcrypto $NGX_LIBDL $NGX_LIBPTHREAD" ngx_feature_test="SSL_CTX_set_options(NULL, 0)" . auto/feature @@ -71,11 +72,13 @@ ngx_feature_path="/usr/local/include" if [ $NGX_RPATH = YES ]; then - ngx_feature_libs="-R/usr/local/lib -L/usr/local/lib -lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-R/usr/local/lib -L/usr/local/lib -lssl -lcrypto" else - ngx_feature_libs="-L/usr/local/lib -lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-L/usr/local/lib -lssl -lcrypto" fi + ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD" + . auto/feature fi @@ -87,11 +90,13 @@ ngx_feature_path="/usr/pkg/include" if [ $NGX_RPATH = YES ]; then - ngx_feature_libs="-R/usr/pkg/lib -L/usr/pkg/lib -lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-R/usr/pkg/lib -L/usr/pkg/lib -lssl -lcrypto" else - ngx_feature_libs="-L/usr/pkg/lib -lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-L/usr/pkg/lib -lssl -lcrypto" fi + ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD" + . 
auto/feature fi @@ -103,11 +108,13 @@ ngx_feature_path="/opt/local/include" if [ $NGX_RPATH = YES ]; then - ngx_feature_libs="-R/opt/local/lib -L/opt/local/lib -lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-R/opt/local/lib -L/opt/local/lib -lssl -lcrypto" else - ngx_feature_libs="-L/opt/local/lib -lssl -lcrypto $NGX_LIBDL" + ngx_feature_libs="-L/opt/local/lib -lssl -lcrypto" fi + ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD" + . auto/feature fi diff -r 0b1eb40de6da -r 649427794a74 auto/unix --- a/auto/unix Wed Mar 07 18:28:12 2018 +0300 +++ b/auto/unix Tue Mar 13 08:37:17 2018 +0300 @@ -901,6 +901,7 @@ if [ $ngx_found = yes ]; then CORE_LIBS="$CORE_LIBS -lpthread" + NGX_LIBPTHREAD="-lpthread" fi fi -- Maxim Dounin http://mdounin.ru/ From arut at nginx.com Tue Mar 13 12:07:59 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 13 Mar 2018 15:07:59 +0300 Subject: Routing based on ALPN In-Reply-To: <77f279db-374d-02dc-417e-fbb1fea29889@metacode.biz> References: <8febc163-7206-e530-7ea1-019d476a2271@metacode.biz> <20180306144347.GA704@vlpc> <77f279db-374d-02dc-417e-fbb1fea29889@metacode.biz> Message-ID: <20180313120759.GA832@Romans-MacBook-Air.local> Wiktor, On Wed, Mar 07, 2018 at 12:38:51PM +0100, Wiktor Kwapisiewicz via nginx wrote: > > below is the initial version of patch that creates the > > "$ssl_preread_alpn_protocols" variable; the content is a comma-separated > > list of protocols, sent by client in ALPN extension, if present. > > > > Any feedback is appreciated. > > > > I have just tested this patch and can confirm it's working perfectly fine. We have committed the patch. http://hg.nginx.org/nginx/rev/79eb4f7b6725 Thanks for cooperation. -- Roman Arutyunyan From lagged at gmail.com Tue Mar 13 12:14:39 2018 From: lagged at gmail.com (Andrei) Date: Tue, 13 Mar 2018 07:14:39 -0500 Subject: Upstream requests via proxies Message-ID: Hello everyone, I ran into a corner case with a project I'm fiddling with which requires making upstream requests via IP restricted 3rd party proxies (no auth). Would this, or anything similar even be possible? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lje at napatech.com Tue Mar 13 12:39:02 2018 From: lje at napatech.com (lje at napatech.com) Date: Tue, 13 Mar 2018 12:39:02 -0000 Subject: One upstream connection blocks another upstream connection Message-ID: Hi, I'm investigating a problem where I get delays in requests that are proxied to an upstream service. By looking at the debug log it seems that one request is blocking the worker so it can't complete another request. If you look at the log below, request *352 is handled by worker 18728. This worker then starts to process request *360 but for some reason it is blocked at 13:53:53 until 13:54:03. How can that happen? The upstream service for request *360 doesn't send any data in the time interval 13:53:53-13:54:03. But the upstream service for request *352 responds nearly immediately. (I have examined the communication between nginx and the upstream in wireshark) Another observation, in the time interval 13:53:53-13:54:03 the worker process seems to be in state D (uninterruptible sleep) So my question is: What can block worker 18728 so it doesn't complete request *352 OS: Redhat 7.4 Nginx: 1.12.2 Hopefully I have provided enough details.
-- 
Roman Arutyunyan

From lagged at gmail.com  Tue Mar 13 12:14:39 2018
From: lagged at gmail.com (Andrei)
Date: Tue, 13 Mar 2018 07:14:39 -0500
Subject: Upstream requests via proxies
Message-ID:

Hello everyone,

I ran into a corner case with a project I'm fiddling with which requires
making upstream requests via IP restricted 3rd party proxies (no auth).
Would this, or anything similar even be possible?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jeppesen.lars at gmail.com  Tue Mar 13 13:10:03 2018
From: jeppesen.lars at gmail.com (Lars Jeppesen)
Date: Tue, 13 Mar 2018 14:10:03 +0100
Subject: One upstream connection blocks another upstream connection
Message-ID:

Hi,

I'm investigating a problem where I get delays in requests that are proxied
to an upstream service. By looking at the debug log it seems that one
request is blocking the worker so it can't complete other requests.

If you look at the log below, request *352 is handled by worker 18728. This
worker then starts to process request *360 but for some reason it is
blocked from 13:53:53 until 13:54:03. How can that happen?

The upstream service for request *360 doesn't send any data in the time
interval 13:53:53-13:54:03. But the upstream service for request *352
responds nearly immediately. (I have examined the communication between
nginx and the upstream in Wireshark.)

Another observation: in the time interval 13:53:53-13:54:03 the worker
process seems to be in state D (uninterruptible sleep).

So my question is: what can block worker 18728 so it doesn't complete
request *352?

OS: Redhat 7.4
Nginx: 1.12.2

Hopefully I have provided enough details.
Thanks in advance Lars ---------------- nginx debug error log --------------------------------------------------------------------- 2018/03/09 13:53:40 [debug] 18726#0: *249 input buf #95517 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe temp offset: 588029952 2018/03/09 13:53:40 [debug] 18726#0: *249 input buf #95518 2018/03/09 13:53:32 [debug] 18728#0: *189 readv: eof:0, avail:1 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe offset: 368738304 2018/03/09 13:53:32 [debug] 18728#0: *189 readv: 2, last:4096 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe buf ls:1 0000556CF522BAF0, pos 0000556CF522BAF0, size: 4096 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe buf ls:1 0000556CF5233250, pos 0000556CF5233250, size: 4096 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe recv chain: 8192 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe buf ls:1 0000556CF522EB20, pos 0000556CF522EB20, size: 4096 2018/03/09 13:53:32 [debug] 18728#0: *189 input buf #144060 2018/03/09 13:53:40 [debug] 18726#0: *249 size: 8192 2018/03/09 13:53:32 [debug] 18728#0: *189 input buf #144061 2018/03/09 13:53:40 [debug] 18726#0: *249 writev: 22, 8192, 368738304 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe offset: 588029952 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe buf ls:1 0000556CF5220480, pos 0000556CF5220480, size: 4096 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe buf ls:1 0000556CF5233250, pos 0000556CF5233250, size: 4096 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe temp offset: 368746496 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe buf ls:1 0000556CF522CAF0, pos 0000556CF522CAF0, size: 4096 2018/03/09 13:53:53 [debug] 18729#0: *352 writev: 418 of 418 2018/03/09 13:53:40 [debug] 18726#0: *249 readv: eof:0, avail:1 2018/03/09 13:53:32 [debug] 18728#0: *189 size: 8192 2018/03/09 13:53:53 [debug] 18729#0: *352 chain writer out: 0000000000000000 2018/03/09 13:53:40 [debug] 18726#0: *249 readv: 2, last:4096 2018/03/09 13:53:53 [debug] 18729#0: *352 event timer del: 14: 1520603638051 2018/03/09 13:53:32 [debug] 18728#0: *189 writev: 18, 8192, 588029952 2018/03/09 13:53:53 [debug] 18729#0: *352 event timer add: 14: 86400000:1520690033051 ----------- lines removed ------------------------------------------ 2018/03/09 13:53:53 [debug] 18729#0: *360 pipe buf in s:1 t:1 f:0 0000556CF5232240, pos 0000556CF5232240, size: 4096 file: 0, size: 0 2018/03/09 13:53:53 [debug] 18729#0: *360 pipe buf free s:0 t:1 f:0 0000556CF5236280, pos 0000556CF5236280, size: 75 file: 0, size: 0 2018/03/09 13:53:53 [debug] 18729#0: *360 pipe length: -1 2018/03/09 13:53:53 [debug] 18729#0: *360 event timer: 18, old: 1520690033267, new: 1520690033348 2018/03/09 13:53:53 [debug] 18729#0: *360 event timer add: 16: 86400000:1520690033348 2018/03/09 13:53:53 [debug] 18729#0: *360 http upstream request: "/download_stream/http://127.0.0.1:46766/5ad8c4fe-c9a5-4d3e-ad57-18ad8 18bc29f.pcap?" 2018/03/09 13:53:53 [debug] 18729#0: *360 http upstream dummy handler 2018/03/09 13:54:03 [debug] 18729#0: *360 http upstream request: "/download_stream/http://127.0.0.1:46766/5ad8c4fe-c9a5-4d3e-ad57-18ad8 18bc29f.pcap?" 
2018/03/09 13:54:03 [debug] 18729#0: *360 http upstream process upstream 2018/03/09 13:54:03 [debug] 18729#0: *360 pipe read upstream: 1 2018/03/09 13:54:03 [debug] 18729#0: *360 readv: eof:1, avail:1 ----------- lines removed ------------------------------------------ 2018/03/09 13:54:03 [debug] 18729#0: *360 SSL_write: -1 2018/03/09 13:54:03 [debug] 18729#0: *360 SSL_get_error: 3 2018/03/09 13:54:03 [debug] 18729#0: *360 http write filter 0000556CF5235600 2018/03/09 13:54:03 [debug] 18729#0: *360 http copy filter: -2 "/download_stream/http://127.0.0.1:46766/5ad8c4fe-c9a5-4d3e-ad57-18ad818 bc29f.pcap?" 2018/03/09 13:54:03 [debug] 18729#0: *360 http writer output filter: -2, "/download_stream/http://127.0.0.1:46766/5ad8c4fe-c9a5-4d3e-ad 57-18ad818bc29f.pcap?" 2018/03/09 13:54:03 [debug] 18729#0: *360 event timer: 16, old: 1520690043209, new: 1520690043209 2018/03/09 13:54:03 [debug] 18729#0: *352 http upstream request: "/1/search/74dc9b3c-dd89-11e6-9b3b-0894ef39879c/99fbbec6-a6f5-4f42-82f 6-5cc2f9651f45?" 2018/03/09 13:54:03 [debug] 18729#0: *352 http upstream process header 2018/03/09 13:54:03 [debug] 18729#0: *352 malloc: 0000556CF524C2D0:4096 2018/03/09 13:54:03 [debug] 18729#0: *352 recv: eof:1, avail:1 2018/03/09 13:54:03 [debug] 18729#0: *352 recv: fd:14 278 of 4096 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy status 200 "200 OK" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header: "Syncuuid: 5304cf38-1cd8-11e6-bddc-0894ef1af70f" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header: "Content-Length: 78" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header: "Syncmagic: 3" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header: "Content-Type: application/json" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header: "Server: Pandion/7.3.3-3933" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header: "Transfer-Encoding: chunked" 2018/03/09 13:54:03 [debug] 18729#0: *352 http proxy header done 2018/03/09 13:54:03 [debug] 18729#0: *352 HTTP/1.1 200 OK Date: Fri, 09 Mar 2018 13:54:03 GMT Content-Type: application/json -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 13 15:13:30 2018 From: nginx-forum at forum.nginx.org (shiz) Date: Tue, 13 Mar 2018 11:13:30 -0400 Subject: 1.13.9 compile errors In-Reply-To: <20180313055128.GI89840@mdounin.ru> References: <20180313055128.GI89840@mdounin.ru> Message-ID: <6adc7c3348b06f2a91da42f7a07703ea.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > > The following patch should fix this, please test if it works for > you: > > # HG changeset patch > # User Maxim Dounin > # Date 1520919437 -10800 > # Tue Mar 13 08:37:17 2018 +0300 > # Node ID 649427794a74c74eca80c942477d893678fb6036 > # Parent 0b1eb40de6da32196b21d1ed086f7030c10b40d2 > Configure: fixed static compilation with OpenSSL 1.1.1-pre2. > > OpenSSL now uses pthread_atfork(), and this requires -lpthread on > Linux. > Introduced NGX_LIBPTHREAD to add it as appropriate, similar to > existing > NGX_LIBDL. > Patch works beautifully for me. Thanks Maxim! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279002,279019#msg-279019 From nginx-forum at forum.nginx.org Tue Mar 13 15:46:48 2018 From: nginx-forum at forum.nginx.org (avpdnepr) Date: Tue, 13 Mar 2018 11:46:48 -0400 Subject: I can not configure the python module through the official Nginx Unit documentation? 
Message-ID: <4dd344a03bea57edb942c7e223a1de32.NginxMailingListEnglish@forum.nginx.org> 2018/03/13 12:10:37 [info] 2220#2220 "example_python" application started 2018/03/13 12:10:37 [emerg] 2220#2220 Python failed to import module "wsgi" 2018/03/13 12:10:37 [notice] 1625#1625 process 2220 exited with code 1 2018/03/13 12:10:37 [warn] 1632#1632 failed to start application "example_python" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279020,279020#msg-279020 From vbart at nginx.com Tue Mar 13 15:58:26 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 Mar 2018 18:58:26 +0300 Subject: I can not configure the python module through the official Nginx Unit documentation? In-Reply-To: <4dd344a03bea57edb942c7e223a1de32.NginxMailingListEnglish@forum.nginx.org> References: <4dd344a03bea57edb942c7e223a1de32.NginxMailingListEnglish@forum.nginx.org> Message-ID: <11105294.fke2Rhk1mA@vbart-workstation> On Tuesday 13 March 2018 11:46:48 avpdnepr wrote: > 2018/03/13 12:10:37 [info] 2220#2220 "example_python" application started > 2018/03/13 12:10:37 [emerg] 2220#2220 Python failed to import module "wsgi" > 2018/03/13 12:10:37 [notice] 1625#1625 process 2220 exited with code 1 > 2018/03/13 12:10:37 [warn] 1632#1632 failed to start application > "example_python" > This means that Python interpreter is unable to load your application. You should check that the path is correct and Unit application process have enough rights. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Mar 13 16:27:29 2018 From: nginx-forum at forum.nginx.org (avpdnepr) Date: Tue, 13 Mar 2018 12:27:29 -0400 Subject: I can not configure the python module through the official Nginx Unit documentation? In-Reply-To: <11105294.fke2Rhk1mA@vbart-workstation> References: <11105294.fke2Rhk1mA@vbart-workstation> Message-ID: Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Tuesday 13 March 2018 11:46:48 avpdnepr wrote: > > 2018/03/13 12:10:37 [info] 2220#2220 "example_python" application > started > > 2018/03/13 12:10:37 [emerg] 2220#2220 Python failed to import module > "wsgi" > > 2018/03/13 12:10:37 [notice] 1625#1625 process 2220 exited with code > 1 > > 2018/03/13 12:10:37 [warn] 1632#1632 failed to start application > > "example_python" > > > > This means that Python interpreter is unable to load your application. > You should check that the path is correct and Unit application process > have enough rights. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx root 1625 0.0 0.0 22632 2212 ? Ss 11:48 0:00 unit: main [/usr/sbin/unitd --log /var/log/unit.log --pid /run/unit.pid] nobody 1631 0.0 0.0 32744 2072 ? S 11:48 0:00 unit: controller nobody 1632 0.0 0.0 106736 2176 ? Sl 11:48 0:00 unit: router www-php 1633 0.0 0.3 129272 15056 ? S 11:48 0:00 unit: "blogs" application www-php 1634 0.0 0.3 129276 15112 ? S 11:48 0:00 unit: "blogs" application www-php 1635 0.0 0.3 129276 15112 ? S 11:48 0:00 unit: "blogs" application www-php 1636 0.0 0.3 129276 15112 ? S 11:48 0:00 unit: "blogs" application www-php 1637 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1638 0.0 0.3 129276 15112 ? S 11:48 0:00 unit: "blogs" application www-php 1639 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1640 0.0 0.3 129276 15112 ? S 11:48 0:00 unit: "blogs" application www-php 1641 0.0 0.3 129276 15116 ? 
S 11:48 0:00 unit: "blogs" application www-php 1642 0.0 0.3 129276 15120 ? S 11:48 0:00 unit: "blogs" application www-php 1643 0.0 0.3 129276 15120 ? S 11:48 0:00 unit: "blogs" application www-php 1644 0.0 0.3 129276 15120 ? S 11:48 0:00 unit: "blogs" application www-php 1645 0.0 0.3 129276 15120 ? S 11:48 0:00 unit: "blogs" application www-php 1646 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1647 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1648 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1649 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1650 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1651 0.0 0.3 129276 15116 ? S 11:48 0:00 unit: "blogs" application www-php 1652 0.0 0.3 129280 15116 ? S 11:48 0:00 unit: "blogs" application root 3910 0.0 0.0 12944 932 pts/0 S+ 16:24 0:00 grep --color=auto unit Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279020,279024#msg-279024 From nginx-forum at forum.nginx.org Tue Mar 13 16:28:00 2018 From: nginx-forum at forum.nginx.org (avpdnepr) Date: Tue, 13 Mar 2018 12:28:00 -0400 Subject: I can not configure the python module through the official Nginx Unit documentation? In-Reply-To: <11105294.fke2Rhk1mA@vbart-workstation> References: <11105294.fke2Rhk1mA@vbart-workstation> Message-ID: <5ac51c37d63edc01d449dda3637f2cc1.NginxMailingListEnglish@forum.nginx.org> { "listeners": { "*:8300": { "application": "blogs" }, "*:8301": { "application": "cart" } }, "applications": { "blogs": { "type": "php", "processes": 20, "user": "www-php", "group": "www-php", "root": "/var/www", "index": "index.php" }, "cart": { "type": "python", "processes": 10, "user": "root", "group": "root", "path": "/var/www/app" } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279020,279025#msg-279025 From vbart at nginx.com Tue Mar 13 16:38:54 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 Mar 2018 19:38:54 +0300 Subject: I can not configure the python module through the official Nginx Unit documentation? In-Reply-To: <5ac51c37d63edc01d449dda3637f2cc1.NginxMailingListEnglish@forum.nginx.org> References: <11105294.fke2Rhk1mA@vbart-workstation> <5ac51c37d63edc01d449dda3637f2cc1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2220562.gcRFKCoHCV@vbart-workstation> On Tuesday 13 March 2018 12:28:00 avpdnepr wrote: > { > "listeners": { > "*:8300": { > "application": "blogs" > }, > "*:8301": { > "application": "cart" > } > }, > > "applications": { > "blogs": { > "type": "php", > "processes": 20, > "user": "www-php", > "group": "www-php", > "root": "/var/www", > "index": "index.php" > }, > "cart": { > "type": "python", > "processes": 10, > "user": "root", > "group": "root", > "path": "/var/www/app" > } > } > } > Could you provide ls -l /var/www/app ? Also in you Unit log the application was "example_python" but in your configuration the only Python application is called "cart". wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Mar 13 16:43:10 2018 From: nginx-forum at forum.nginx.org (avpdnepr) Date: Tue, 13 Mar 2018 12:43:10 -0400 Subject: I can not configure the python module through the official Nginx Unit documentation? 
In-Reply-To: <2220562.gcRFKCoHCV@vbart-workstation>
References: <2220562.gcRFKCoHCV@vbart-workstation>
Message-ID:

root at instance-1:~# ls -l /var/www/app
total 0
root at instance-1:~#

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279020,279027#msg-279027

From vbart at nginx.com  Tue Mar 13 16:47:37 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 13 Mar 2018 19:47:37 +0300
Subject: I can not configure the python module through the official Nginx Unit documentation?
In-Reply-To:
References: <2220562.gcRFKCoHCV@vbart-workstation>
Message-ID: <6793419.3jT3FCKhyi@vbart-workstation>

On Tuesday 13 March 2018 12:43:10 avpdnepr wrote:
> root at instance-1:~# ls -l /var/www/app
> total 0
> root at instance-1:~#
>

So, you have no Python application in /var/www/app.
What are you trying to run then, and where is it?

wbr, Valentin V. Bartenev

From mdounin at mdounin.ru  Tue Mar 13 17:01:23 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 13 Mar 2018 20:01:23 +0300
Subject: One upstream connection blocks another upstream connection
In-Reply-To: <20180313123905.ADBB22C50D90@mail.nginx.com>
References: <20180313123905.ADBB22C50D90@mail.nginx.com>
Message-ID: <20180313170122.GJ89840@mdounin.ru>

Hello!

On Tue, Mar 13, 2018 at 12:39:05PM +0000, lje at napatech.com wrote:

> I'm investigating a problem where I get delays in requests that
> are proxied to an upstream service.
> By looking at the debug log it seems that one request is
> blocking the worker so it can't complete other requests.
>
> If you look at the log below, request *352 is handled by worker
> 18728. This worker then starts to process request *360 but for
> some reason it is blocked from 13:53:53 until 13:54:03. How can
> that happen?
>
> The upstream service for request *360 doesn't send any data in
> the time interval 13:53:53-13:54:03.
> But the upstream service for request *352 responds nearly
> immediately.
> (I have examined the communication between nginx and the
> upstream in Wireshark.)
>
> Another observation: in the time interval 13:53:53-13:54:03 the
> worker process seems to be in state D (uninterruptible sleep).
>
> So my question is: what can block worker 18728 so it doesn't
> complete request *352?
>
> OS: Redhat 7.4
> Nginx: 1.12.2
>
> Hopefully I have provided enough details.
> > 2018/03/09 13:53:40 [debug] 18726#0: *249 input buf #95517 > 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe temp offset: 588029952 > 2018/03/09 13:53:40 [debug] 18726#0: *249 input buf #95518 > 2018/03/09 13:53:32 [debug] 18728#0: *189 readv: eof:0, avail:1 > 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe offset: 368738304 > 2018/03/09 13:53:32 [debug] 18728#0: *189 readv: 2, last:4096 > 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe buf ls:1 0000556CF522BAF0, pos 0000556CF522BAF0, size: 4096 > 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe buf ls:1 0000556CF5233250, pos 0000556CF5233250, size: 4096 > 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe recv chain: 8192 > 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe buf ls:1 0000556CF522EB20, pos 0000556CF522EB20, size: 4096 > 2018/03/09 13:53:32 [debug] 18728#0: *189 input buf #144060 > 2018/03/09 13:53:40 [debug] 18726#0: *249 size: 8192 > 2018/03/09 13:53:32 [debug] 18728#0: *189 input buf #144061 > 2018/03/09 13:53:40 [debug] 18726#0: *249 writev: 22, 8192, 368738304 > 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe offset: 588029952 > 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe buf ls:1 0000556CF5220480, pos 0000556CF5220480, size: 4096 > 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe buf ls:1 0000556CF5233250, pos 0000556CF5233250, size: 4096 > 2018/03/09 13:53:40 [debug] 18726#0: *249 pipe temp offset: 368746496 > 2018/03/09 13:53:32 [debug] 18728#0: *189 pipe buf ls:1 0000556CF522CAF0, pos 0000556CF522CAF0, size: 4096 > 2018/03/09 13:53:53 [debug] 18729#0: *352 writev: 418 of 418 > 2018/03/09 13:53:40 [debug] 18726#0: *249 readv: eof:0, avail:1 > 2018/03/09 13:53:32 [debug] 18728#0: *189 size: 8192 > 2018/03/09 13:53:53 [debug] 18729#0: *352 chain writer out: 0000000000000000 > 2018/03/09 13:53:40 [debug] 18726#0: *249 readv: 2, last:4096 > 2018/03/09 13:53:53 [debug] 18729#0: *352 event timer del: 14: 1520603638051 > 2018/03/09 13:53:32 [debug] 18728#0: *189 writev: 18, 8192, 588029952 > 2018/03/09 13:53:53 [debug] 18729#0: *352 event timer add: 14: 86400000:1520690033051 Mix of different times in logs suggests that workers are blocked for a long time doing something (and hence the time in some worker process are not updated for a long time). Reasons can be different, and more information/logs are needed to say anything for sure. In this particular case my best guess is that your backend server is much faster than the disk you use for proxy_temp_path, and so nginx loops buffering a response to disk for a long time. For example, the response in *189 already buffered about 600M, and there is no indication in the log lines quoted that it stopped reading from the upstream somewhere. At the same time the process thinks current time is 13:53:32, which is 21 seconds behind 13:53:53 as logged by pid 18729 at the same time. An obvious workaround would be to disable or limit disk buffering, "proxy_max_temp_file_size 0;". Additionally, using larger memory buffers (proxy_buffer_size, proxy_buffers) might help to avoid such monopolization of a worker process. See also https://trac.nginx.org/nginx/ticket/1431 for a detailed explanation of a similar problem as observed with websocket proxying. [...] -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Mar 13 17:25:07 2018 From: nginx-forum at forum.nginx.org (avpdnepr) Date: Tue, 13 Mar 2018 13:25:07 -0400 Subject: I can not configure the python module through the official Nginx Unit documentation? 
In-Reply-To: <6793419.3jT3FCKhyi@vbart-workstation> References: <6793419.3jT3FCKhyi@vbart-workstation> Message-ID: <5f8c5c10b881adf0014cf64d47f766ed.NginxMailingListEnglish@forum.nginx.org> root at instance-1:~# cat /var/www/app/index.py import sys def application(environ, start_response): body = sys.version.encode("utf-8") status = "200 OK" headers = [('Content-type','text/plain')] start_response(status, headers) return body root at instance-1:~# curl -X PUT -d @/root/unit_json/start.json --unix-socket /var/run/control.unit.sock http://localhost/ { "error": "Failed to apply new configuration." } root at instance-1:~# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279020,279031#msg-279031 From nginx-forum at forum.nginx.org Tue Mar 13 17:25:47 2018 From: nginx-forum at forum.nginx.org (avpdnepr) Date: Tue, 13 Mar 2018 13:25:47 -0400 Subject: I can not configure the python module through the official Nginx Unit documentation? In-Reply-To: <6793419.3jT3FCKhyi@vbart-workstation> References: <6793419.3jT3FCKhyi@vbart-workstation> Message-ID: <7d3bbd1c9a0fe6d3ce7aa0e6a6024649.NginxMailingListEnglish@forum.nginx.org> root at instance-1:~# cat /root/unit_json/start.json { "listeners": { "*:8300": { "application": "blogs" }, "*:8301": { "application": "cart" } }, "applications": { "blogs": { "type": "php", "processes": 20, "user": "www-php", "group": "www-php", "root": "/var/www", "index": "index.php" }, "cart": { "type": "python", "processes": 10, "module": "wsgi", "user": "root", "group": "root", "path": "/var/www/app" } } } root at instance-1:~# Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279020,279032#msg-279032 From anoopalias01 at gmail.com Tue Mar 13 17:33:56 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 13 Mar 2018 23:03:56 +0530 Subject: Handling URL with the percentage character Message-ID: Hi, Is there a way URL like ++++++++++++++++++++++++++++++++++++ http://domain.com/%product_cat%/myproduct +++++++++++++++++++++++++++++++++++++ to be passed as is to an Apache proxy backend. Currently, Nginx is throwing a 400 bad request error (which is correct), but the Apache httpd using a php script can handle this . so is there a way I can do like ..hey this will be handled someplace else so i just need to pass on whatever i get to upstream? Also if I encode the URL with http://domain.com/%25product_cat%25/myproduct That works too. So if the first is not possible is there a way to rewrite all % to %25 ? -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Mar 13 17:48:54 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 13 Mar 2018 20:48:54 +0300 Subject: I can not configure the python module through the official Nginx Unit documentation? 
In-Reply-To: <7d3bbd1c9a0fe6d3ce7aa0e6a6024649.NginxMailingListEnglish@forum.nginx.org>
References: <6793419.3jT3FCKhyi@vbart-workstation>
	<7d3bbd1c9a0fe6d3ce7aa0e6a6024649.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1991608.qN2fRJvp2V@vbart-workstation>

On Tuesday 13 March 2018 13:25:47 avpdnepr wrote:
> root at instance-1:~# cat /root/unit_json/start.json
> {
>     "listeners": {
>         "*:8300": {
>             "application": "blogs"
>         },
>         "*:8301": {
>             "application": "cart"
>         }
>     },
>
>     "applications": {
>         "blogs": {
>             "type": "php",
>             "processes": 20,
>             "user": "www-php",
>             "group": "www-php",
>             "root": "/var/www",
>             "index": "index.php"
>         },
>         "cart": {
>             "type": "python",
>             "processes": 10,
>             "module": "wsgi",
>             "user": "root",
>             "group": "root",
>             "path": "/var/www/app"
>         }
>     }
> }

Since your Python application() callable is inside the /var/www/app/index.py
file, your module should be called "index", not "wsgi".

See the Python documentation about how it works:
https://docs.python.org/3/tutorial/modules.html#the-module-search-path
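That is, only the "module" line of your "cart" section needs to change,
roughly like this (everything else left as you posted it):

    "cart": {
        "type": "python",
        "processes": 10,
        "module": "index",
        "user": "root",
        "group": "root",
        "path": "/var/www/app"
    }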
wbr, Valentin V. Bartenev

From knny.myer at gmail.com  Tue Mar 13 20:37:52 2018
From: knny.myer at gmail.com (Kenny Meyer)
Date: Tue, 13 Mar 2018 17:37:52 -0300
Subject: Using the mirror module
Message-ID:

Hi,

I'm having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers: 1) a production server 2) a staging server

This is my config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        mirror /mirror;
        proxy_pass http://www.example.com;
    }

    location /mirror {
        internal;
        proxy_pass http://staging.example.com$request_uri;
    }
}

So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don't see any requests hitting staging.example.com.

What could be the error?

From arut at nginx.com  Tue Mar 13 21:34:56 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 14 Mar 2018 00:34:56 +0300
Subject: Using the mirror module
In-Reply-To:
References:
Message-ID: <20180313213456.GB832@Romans-MacBook-Air.local>

Hi Kenny,

On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote:
> Hi,
>
> I'm having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers: 1) a production server 2) a staging server
>
> This is my config:
>
> server {
>     listen 80 default_server;
>     listen [::]:80 default_server;
>
>     location / {
>         mirror /mirror;
>         proxy_pass http://www.example.com;
>     }
>
>     location /mirror {
>         internal;
>         proxy_pass http://staging.example.com$request_uri;
>     }
> }
>
> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don't see any requests hitting staging.example.com.
>
> What could be the error?

The configuration looks fine.
Are there any errors in error.log?
And what happens if you switch www.example.com and staging.example.com?

-- 
Roman Arutyunyan

From knny.myer at gmail.com  Tue Mar 13 21:58:25 2018
From: knny.myer at gmail.com (Kenny Meyer)
Date: Tue, 13 Mar 2018 18:58:25 -0300
Subject: Using the mirror module
In-Reply-To: <20180313213456.GB832@Romans-MacBook-Air.local>
References: <20180313213456.GB832@Romans-MacBook-Air.local>
Message-ID: <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com>

Hi Roman,

> Are there any errors in error.log?
No errors?

> And what happens if you switch www.example.com and staging.example.com?
Then I get redirected to staging.example.com and I don't see any requests being logged on example.com

> On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote:
>
> Hi Kenny,
>
> On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote:
>> Hi,
>>
>> I'm having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers: 1) a production server 2) a staging server
>>
>> This is my config:
>>
>> server {
>>     listen 80 default_server;
>>     listen [::]:80 default_server;
>>
>>     location / {
>>         mirror /mirror;
>>         proxy_pass http://www.example.com;
>>     }
>>
>>     location /mirror {
>>         internal;
>>         proxy_pass http://staging.example.com$request_uri;
>>     }
>> }
>>
>> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don't see any requests hitting staging.example.com.
>>
>> What could be the error?
>
> The configuration looks fine.
> Are there any errors in error.log?
> And what happens if you switch www.example.com and staging.example.com?
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Kenny Meyer
www.kennymeyer.net

From peter_booth at me.com  Tue Mar 13 22:29:59 2018
From: peter_booth at me.com (Peter Booth)
Date: Tue, 13 Mar 2018 18:29:59 -0400
Subject: Using the mirror module
In-Reply-To: <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com>
References: <20180313213456.GB832@Romans-MacBook-Air.local>
	<04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com>
Message-ID: <7A85E943-35D0-4A42-95A2-E2FB8677D223@me.com>

This is the point where I would jump to using the debug log. You need to
build your nginx binary with the --with-debug switch and change the log
level to debug in nginx.conf. Debug generates a *huge* amount of logs but
it really is invaluable.

I would also want to double check what is actually happening and use ss or
tcpdump to confirm that no request is sent to your staging destination.

I'm assuming that both www.example.com and staging.example.com are hosted
on different hosts, different IPs and are both functional.

Peter

> On Mar 13, 2018, at 5:58 PM, Kenny Meyer wrote:
>
> Hi Roman,
>
>> Are there any errors in error.log?
> No errors?
>
>> And what happens if you switch www.example.com and staging.example.com?
> Then I get redirected to staging.example.com and I don't see any requests being logged on example.com
>
>
>
>> On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote:
>>
>> Hi Kenny,
>>
>> On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote:
>>> Hi,
>>>
>>> I'm having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers: 1) a production server 2) a staging server
>>>
>>> This is my config:
>>>
>>> server {
>>>     listen 80 default_server;
>>>     listen [::]:80 default_server;
>>>
>>>     location / {
>>>         mirror /mirror;
>>>         proxy_pass http://www.example.com;
>>>     }
>>>
>>>     location /mirror {
>>>         internal;
>>>         proxy_pass http://staging.example.com$request_uri;
>>>     }
>>> }
>>>
>>> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don't see any requests hitting staging.example.com.
>>>
>>> What could be the error?
>>
>> The configuration looks fine.
>> Are there any errors in error.log?
>> And what happens if you switch www.example.com and staging.example.com?
>> >> -- >> Roman Arutyunyan >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > Kenny Meyer > www.kennymeyer.net > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From arut at nginx.com Tue Mar 13 22:36:11 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 14 Mar 2018 01:36:11 +0300 Subject: Using the mirror module In-Reply-To: <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com> References: <20180313213456.GB832@Romans-MacBook-Air.local> <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com> Message-ID: <20180313223611.GC832@Romans-MacBook-Air.local> On Tue, Mar 13, 2018 at 06:58:25PM -0300, Kenny Meyer wrote: > Hi Roman, > > > Are there any errors in error.log? > No errors? > > > And what happens if you switch www.example.com and staging.example.com? > Then I get redirected to staging.example.com and I don?t see any requests being logged on example.com Do you have a resolver defined in the http{} block? Proxying to "http://staging.example.com$request_uri" requires a resolver. Normally, you would have "no resolver defined to resolve ..." error logged if resolver is missing, but you may not see it if, for example, your log level is too high. > > On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote: > > > > Hi Kenny, > > > > On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote: > >> Hi, > >> > >> I?m having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers. 1) a production server 2) a staging server > >> > >> This is my config: > >> > >> server { > >> listen 80 default_server; > >> listen [::]:80 default_server; > >> > >> location / { > >> mirror /mirror; > >> proxy_pass http://www.example.com; > >> } > >> > >> location /mirror { > >> internal; > >> proxy_pass http://staging.example.com$request_uri; > >> } > >> } > >> > >> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don?t see any requests hitting staging.example.com. > >> > >> What could be the error? > > > > The configuration looks fine. > > Are there any errors in error.log? > > And what happens if you switch www.example.com and staging.example.com? > > > > -- > > Roman Arutyunyan > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Kenny Meyer > www.kennymeyer.net > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From knny.myer at gmail.com Wed Mar 14 11:55:39 2018 From: knny.myer at gmail.com (Kenny Meyer) Date: Wed, 14 Mar 2018 08:55:39 -0300 Subject: Using the mirror module In-Reply-To: <20180313223611.GC832@Romans-MacBook-Air.local> References: <20180313213456.GB832@Romans-MacBook-Air.local> <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com> <20180313223611.GC832@Romans-MacBook-Air.local> Message-ID: <315D4D02-EDEF-41E7-91FC-4DF59ADE4D5E@gmail.com> How do you define a resolver? > On 13 Mar, 2018, at 19:36, Roman Arutyunyan wrote: > > On Tue, Mar 13, 2018 at 06:58:25PM -0300, Kenny Meyer wrote: >> Hi Roman, >> >>> Are there any errors in error.log? >> No errors? >> >>> And what happens if you switch www.example.com and staging.example.com? 
>> Then I get redirected to staging.example.com and I don?t see any requests being logged on example.com > > Do you have a resolver defined in the http{} block? > Proxying to "http://staging.example.com$request_uri" requires a resolver. > Normally, you would have "no resolver defined to resolve ..." > error logged if resolver is missing, but you may not see it if, for example, > your log level is too high. > >>> On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote: >>> >>> Hi Kenny, >>> >>> On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote: >>>> Hi, >>>> >>>> I?m having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers. 1) a production server 2) a staging server >>>> >>>> This is my config: >>>> >>>> server { >>>> listen 80 default_server; >>>> listen [::]:80 default_server; >>>> >>>> location / { >>>> mirror /mirror; >>>> proxy_pass http://www.example.com; >>>> } >>>> >>>> location /mirror { >>>> internal; >>>> proxy_pass http://staging.example.com$request_uri; >>>> } >>>> } >>>> >>>> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don?t see any requests hitting staging.example.com. >>>> >>>> What could be the error? >>> >>> The configuration looks fine. >>> Are there any errors in error.log? >>> And what happens if you switch www.example.com and staging.example.com? >>> >>> -- >>> Roman Arutyunyan >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> Kenny Meyer >> www.kennymeyer.net >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Kenny Meyer www.kennymeyer.net From peter_booth at me.com Wed Mar 14 12:30:47 2018 From: peter_booth at me.com (Peter Booth) Date: Wed, 14 Mar 2018 08:30:47 -0400 Subject: Using the mirror module In-Reply-To: <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com> References: <20180313213456.GB832@Romans-MacBook-Air.local> <04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com> Message-ID: <36E4409E-CF95-405B-82EA-7AAD286DCC7E@me.com> Suggestion: Define two more locations - one that proxies www.example.com and another that proxies staging.example.com. If both locations work then your problem is probably mirroring. If one doesn?t work then the issue is your configuration and not mirroring. Either way you have reduced the size of your problem space. Peter Sent from my iPhone > On Mar 13, 2018, at 5:58 PM, Kenny Meyer wrote: > > Hi Roman, > >> Are there any errors in error.log? > No errors? > >> And what happens if you switch www.example.com and staging.example.com? > Then I get redirected to staging.example.com and I don?t see any requests being logged on example.com > > > >> On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote: >> >> Hi Kenny, >> >>> On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote: >>> Hi, >>> >>> I?m having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers. 
1) a production server 2) a staging server
>>>
>>> This is my config:
>>>
>>> server {
>>>     listen 80 default_server;
>>>     listen [::]:80 default_server;
>>>
>>>     location / {
>>>         mirror /mirror;
>>>         proxy_pass http://www.example.com;
>>>     }
>>>
>>>     location /mirror {
>>>         internal;
>>>         proxy_pass http://staging.example.com$request_uri;
>>>     }
>>> }
>>>
>>> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don't see any requests hitting staging.example.com.
>>>
>>> What could be the error?
>>
>> The configuration looks fine.
>> Are there any errors in error.log?
>> And what happens if you switch www.example.com and staging.example.com?
>>
>> --
>> Roman Arutyunyan
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> Kenny Meyer
> www.kennymeyer.net
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From arut at nginx.com  Wed Mar 14 12:33:25 2018
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 14 Mar 2018 15:33:25 +0300
Subject: Using the mirror module
In-Reply-To: <315D4D02-EDEF-41E7-91FC-4DF59ADE4D5E@gmail.com>
References: <20180313213456.GB832@Romans-MacBook-Air.local>
	<04F9BAAB-4F40-4E82-8F4A-891F656BD759@gmail.com>
	<20180313223611.GC832@Romans-MacBook-Air.local>
	<315D4D02-EDEF-41E7-91FC-4DF59ADE4D5E@gmail.com>
Message-ID: <20180314123325.GD832@Romans-MacBook-Air.local>

On Wed, Mar 14, 2018 at 08:55:39AM -0300, Kenny Meyer wrote:
> How do you define a resolver?

Use the "resolver" directive:
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver

This only makes sense if your nginx resolves domain names at runtime.
In your configuration "staging.example.com" is resolved at runtime because
the entire URL contains a variable ($request_uri). However, if you put an
IP address instead, you will not need a resolver.
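For example, something like this in the http{} block (the address below is
just a placeholder; point it at whatever DNS server works for you):

    http {
        resolver 127.0.0.1;
        ...
    }

Or, to avoid runtime resolution entirely, mirror to a fixed address
(192.0.2.10 here stands in for your staging server's IP):

    location /mirror {
        internal;
        proxy_pass http://192.0.2.10$request_uri;
    }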
> > On 13 Mar, 2018, at 19:36, Roman Arutyunyan wrote:
> >
> > On Tue, Mar 13, 2018 at 06:58:25PM -0300, Kenny Meyer wrote:
> >> Hi Roman,
> >>
> >>> Are there any errors in error.log?
> >> No errors?
> >>
> >>> And what happens if you switch www.example.com and staging.example.com?
> >> Then I get redirected to staging.example.com and I don't see any requests being logged on example.com
> >
> > Do you have a resolver defined in the http{} block?
> > Proxying to "http://staging.example.com$request_uri" requires a resolver.
> > Normally, you would have "no resolver defined to resolve ..."
> > error logged if resolver is missing, but you may not see it if, for example,
> > your log level is too high.
> >
> >>> On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote:
> >>>
> >>> Hi Kenny,
> >>>
> >>> On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote:
> >>>> Hi,
> >>>>
> >>>> I'm having trouble using the new mirror module. I want to mirror incoming requests from Nginx to other upstream servers: 1) a production server 2) a staging server
> >>>>
> >>>> This is my config:
> >>>>
> >>>> server {
> >>>>     listen 80 default_server;
> >>>>     listen [::]:80 default_server;
> >>>>
> >>>>     location / {
> >>>>         mirror /mirror;
> >>>>         proxy_pass http://www.example.com;
> >>>>     }
> >>>>
> >>>>     location /mirror {
> >>>>         internal;
> >>>>         proxy_pass http://staging.example.com$request_uri;
> >>>>     }
> >>>> }
> >>>>
> >>>> So, I request http://myserver.com (where Nginx is hosted) and it successfully redirects me to www.example.com, however I don't see any requests hitting staging.example.com.
> >>>>
> >>>> What could be the error?
> >>>
> >>> The configuration looks fine.
> >>> Are there any errors in error.log?
> >>> And what happens if you switch www.example.com and staging.example.com?
> >>>
> >>> --
> >>> Roman Arutyunyan
> >>> _______________________________________________
> >>> nginx mailing list
> >>> nginx at nginx.org
> >>> http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > Kenny Meyer
> > www.kennymeyer.net
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- 
Roman Arutyunyan

From michael.friscia at yale.edu  Wed Mar 14 16:07:22 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Wed, 14 Mar 2018 16:07:22 +0000
Subject: $upstream_cache_status output definitions
Message-ID: <96B9053E-5A5A-4067-A1C8-514B22B2B327@yale.edu>

I read this in the documentation

$upstream_cache_status
keeps the status of accessing a response cache (0.8.3). The status can be either "MISS", "BYPASS", "EXPIRED", "STALE", "UPDATING", "REVALIDATED", or "HIT".

But I'm sort of at a loss as to what the meanings are, specifically what is Hit and Miss? The only two I really understand are Stale and Bypass. Does Hit mean it hit the upstream server or that it hit the cached copy?

I'm basically trying to map these to determine whether a cache copy was served or if a fresh copy was pulled from the upstream server because I am getting some mixed results compared to my settings and realized I might be using incorrect definitions for each status.

Thanks,
-mike

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org  Wed Mar 14 16:34:03 2018
From: francis at daoine.org (Francis Daly)
Date: Wed, 14 Mar 2018 16:34:03 +0000
Subject: $upstream_cache_status output definitions
In-Reply-To: <96B9053E-5A5A-4067-A1C8-514B22B2B327@yale.edu>
References: <96B9053E-5A5A-4067-A1C8-514B22B2B327@yale.edu>
Message-ID: <20180314163403.GP3280@daoine.org>

On Wed, Mar 14, 2018 at 04:07:22PM +0000, Friscia, Michael wrote:

Hi there,

> $upstream_cache_status
> keeps the status of accessing a response cache (0.8.3). The status can be either "MISS", "BYPASS", "EXPIRED", "STALE", "UPDATING", "REVALIDATED", or "HIT".
It may not be as easily findable as it should be, but if you read
https://www.nginx.com/blog/nginx-caching-guide/
and search down to the FAQ, you'll find what you want.

(In general, the words are the status of the response from the perspective
of the cache, so "HIT" means the cached version was used.)

	f
-- 
Francis Daly        francis at daoine.org

From michael.friscia at yale.edu  Wed Mar 14 16:51:58 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Wed, 14 Mar 2018 16:51:58 +0000
Subject: $upstream_cache_status output definitions
In-Reply-To: <20180314163403.GP3280@daoine.org>
References: <96B9053E-5A5A-4067-A1C8-514B22B2B327@yale.edu>
	<20180314163403.GP3280@daoine.org>
Message-ID: <4F383401-1FC7-4844-A079-FEDA02A7A8BD@yale.edu>

Excellent, thank you so much! I had a feeling it was there and I was just missing it.

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

On 3/14/18, 12:34 PM, "nginx on behalf of Francis Daly" wrote:

On Wed, Mar 14, 2018 at 04:07:22PM +0000, Friscia, Michael wrote:

Hi there,

> $upstream_cache_status
> keeps the status of accessing a response cache (0.8.3). The status can be either "MISS", "BYPASS", "EXPIRED", "STALE", "UPDATING", "REVALIDATED", or "HIT".

It may not be as easily findable as it should be, but if you read
https://www.nginx.com/blog/nginx-caching-guide/
and search down to the FAQ, you'll find what you want.

(In general, the words are the status of the response from the perspective
of the cache, so "HIT" means the cached version was used.)

	f
-- 
Francis Daly        francis at daoine.org
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From matthew.smith at acquia.com  Wed Mar 14 17:05:42 2018
From: matthew.smith at acquia.com (Matthew Smith)
Date: Wed, 14 Mar 2018 17:05:42 +0000
Subject: Nginx 1.12.1 Memory Consumption
Message-ID:

Hello,

I have encountered what I consider to be an interesting behavior. We have
Nginx 1.12.1 configured to do SSL termination as well as reverse proxy.
Whenever there is a traffic spike (300 req/s > 1000 req/s, 3k active
connections > 20k active connections), there is a corresponding spike in
Nginx memory consumption. In this case 500M > 8G across 10 worker
processes. What is interesting is that Nginx never seems to release this
memory after the traffic returns to normal. Is this expected? What is Nginx
using this memory for? Is there a configuration that will rotate the
workers based on some metric in order to return memory to the system?

Requests per second:
https://www.dropbox.com/s/cl2yqdxgqk2fn89/Screenshot%202018-03-14%2012.38.10.png?dl=0

Active connections:
https://www.dropbox.com/s/s3j4oux77op3svo/Screenshot%202018-03-14%2012.44.14.png?dl=0

Total Nginx memory usage:
https://www.dropbox.com/s/ihp5zxky2mgd2hr/Screenshot%202018-03-14%2012.44.43.png?dl=0

Thanks,

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From smntov at gmail.com Wed Mar 14 20:32:22 2018 From: smntov at gmail.com (ST) Date: Wed, 14 Mar 2018 22:32:22 +0200 Subject: redirect to a .php file with try_files if required .php file not found Message-ID: <1521059542.1930.129.camel@gmail.com> Hello, I would like to redirect to /virtual_new.php with try_files if required .php file not found, is it the right way to do so: location ~ \.php$ { if ($args ~ "netcat_files/") { expires 7d; add_header Cache-Control "public"; } fastcgi_split_path_info ^(.+\.php)(/.+)$; try_files $uri /virtual_new.php =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root $fastcgi_script_name; include fastcgi_params; } Thank you! From mdounin at mdounin.ru Wed Mar 14 20:54:34 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 14 Mar 2018 23:54:34 +0300 Subject: Nginx 1.12.1 Memory Consumption In-Reply-To: References: Message-ID: <20180314205434.GW89840@mdounin.ru> Hello! On Wed, Mar 14, 2018 at 05:05:42PM +0000, Matthew Smith wrote: > I have encountered what I consider to be an interesting behavior. We have > Nginx 1.12.1 configured to do SSL termination as well as reverse proxy. > Whenever there is a traffic spike (300 req/s > 1000 req/s, 3k active > connections > 20k active connections), there is a corresponding spike in > Nginx memory consumption. In this case 500M > 8G across 10 worker > processes. What is interesting is that Nginx never seems to release this > memory after the traffic returns to normal. Is this expected? What is Nginx > using this memory for? Is there a configuration that will rotate the > workers based on some metric in order to return memory to the system? All memory allocated for request handling nginx returns to the allocator once a request is closed. If you see memory not actually released to the system, this may be one of the following: - System allocator fails to return memory to the system. Tuning system allocator might help here. - Given that you are using nginx for SSL termination, this may be an OpenSSL [mis]feature called "buf-freelists". It implies caching of up to 2 megabytes of allocated buffers per SSL context, and this may be a problem if there are multiple server{} blocks with SSL enabled. Fix is to recompile OpenSSL with the "no-buf-freelists" option, or upgrade to OpenSSL 1.1.x where this feature is disabled. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Mar 15 06:52:24 2018 From: nginx-forum at forum.nginx.org (Manali) Date: Thu, 15 Mar 2018 02:52:24 -0400 Subject: nginx Connections Message-ID: <9e9f89756f6c9a862cfb42a19f96c866.NginxMailingListEnglish@forum.nginx.org> I want to limit the connections used by nginx using CLI. I know that we can set worker connections to different values in nginx conf file. But no of worker connections will include not only the connections to the host. It also includes proxy connections too. If I want to give user flexibility to limit the connections, user will not know about proxy connections. Is there any flexibility in nginx source code to know whether the connection established by nginx is to the proxy server or host connections ? Can you please help me with this ? Let me know if more information is needed. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279052,279052#msg-279052

From arozyev at nginx.com  Thu Mar 15 07:21:26 2018
From: arozyev at nginx.com (Aziz Rozyev)
Date: Thu, 15 Mar 2018 10:21:26 +0300
Subject: nginx Connections
In-Reply-To: <9e9f89756f6c9a862cfb42a19f96c866.NginxMailingListEnglish@forum.nginx.org>
References: <9e9f89756f6c9a862cfb42a19f96c866.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <08C3BE9B-11B1-4DC7-B469-B2DA54E7EB09@nginx.com>

check the limit_req_module,
http://nginx.org/ru/docs/http/ngx_http_limit_req_module.html

our beloved and hugely useful search engines gave this:
https://www.nginx.com/blog/rate-limiting-nginx/

it's not possible to manipulate limits with cli though.

br,
Aziz.

> On 15 Mar 2018, at 09:52, Manali wrote:
>
> I want to limit the connections used by nginx using CLI.
>
> I know that we can set worker connections to different values in nginx conf
> file. But no of worker connections will include not only the connections to
> the host. It also includes proxy connections too.
>
> If I want to give user flexibility to limit the connections, user will not
> know about proxy connections.
>
> Is there any flexibility in nginx source code to know whether the connection
> established by nginx is to the proxy server or host connections ?
>
> Can you please help me with this ?
>
> Let me know if more information is needed.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279052,279052#msg-279052
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From jeppesen.lars at gmail.com  Thu Mar 15 08:41:49 2018
From: jeppesen.lars at gmail.com (Lars Jeppesen)
Date: Thu, 15 Mar 2018 09:41:49 +0100
Subject: One upstream connection blocks another upstream connection
In-Reply-To: <20180313170122.GJ89840@mdounin.ru>
References: <20180313123905.ADBB22C50D90@mail.nginx.com>
	<20180313170122.GJ89840@mdounin.ru>
Message-ID:

> Mix of different times in logs suggests that workers are blocked for a long time doing something
> (and hence the time in some worker process are not updated for a long time).
> Reasons can be different, and more information/logs are needed to say anything for sure. In this particular case my best guess is that your backend server is much faster than the disk you use for proxy_temp_path,
> and so nginx loops buffering a response to disk for a long time. For example, the response in *189 already buffered about 600M, and there is no indication in the log lines quoted that it stopped reading from the upstream somewhere.
> At the same time the process thinks current time is 13:53:32, which is 21 seconds behind 13:53:53 as logged by pid 18729 at the same time.
>
> An obvious workaround would be to disable or limit disk buffering, "proxy_max_temp_file_size 0;". Additionally, using larger memory buffers (proxy_buffer_size, proxy_buffers) might help to avoid such monopolization of a worker process.
>
> See also https://trac.nginx.org/nginx/ticket/1431 for a detailed explanation of a similar problem as observed with websocket proxying.
>
> Maxim Dounin

This seemed to be the problem. The upstream was delivering data too fast
compared to writing the temp file. I tried to increase the buffers, but that
didn't help. I disabled the temp file completely and the problem disappeared.
I no longer see this monopolization of the worker process.
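For anyone hitting the same thing, the change amounts to something like the
following in the proxy location (the upstream name is a made-up placeholder,
and the buffer sizes are just illustrative):

    location /download_stream/ {
        proxy_pass http://upstream_service;
        proxy_buffer_size 64k;
        proxy_buffers 8 64k;
        # never buffer responses to a temp file on disk
        proxy_max_temp_file_size 0;
    }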
Best regards,
Lars
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smntov at gmail.com  Thu Mar 15 11:32:34 2018
From: smntov at gmail.com (ST)
Date: Thu, 15 Mar 2018 13:32:34 +0200
Subject: redirect to a .php file with try_files if required .php file not found
Message-ID: <1521113554.1930.132.camel@gmail.com>

PS: maybe I pasted too much of my config; basically the important line is:

try_files $uri /virtual_new.php =404;

Does it look legitimate to you? Is it the proper way to redirect in such a
case, or would I be better off using rewrite/redirect?

Thank you!

-------------------------------------------
Hello,

I would like to redirect to /virtual_new.php with try_files if the
required .php file is not found. Is this the right way to do so:

location ~ \.php$ {
    if ($args ~ "netcat_files/") {
        expires 7d;
        add_header Cache-Control "public";
    }
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    try_files $uri /virtual_new.php =404;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Thank you!

From peter_booth at me.com  Thu Mar 15 17:06:10 2018
From: peter_booth at me.com (Peter Booth)
Date: Thu, 15 Mar 2018 13:06:10 -0400
Subject: Nginx 1.12.1 Memory Consumption
In-Reply-To: 
References: 
Message-ID: <72AEF1D0-21EF-40A7-AF7C-C3E73EA3807B@me.com>

Two questions:

1. How are you measuring memory consumption?
2. How much physical memory do you have on your host?

Assuming that you are running on Linux, can you use

    pidstat -r -t -u -v -w -C "nginx"

to confirm the process's memory consumption, and

    cat /proc/meminfo

to view a detailed description of how memory is being used on the entire
host.

> On Mar 14, 2018, at 1:05 PM, Matthew Smith wrote:
> 
> Hello,
> 
> I have encountered what I consider to be an interesting behavior. We have
> Nginx 1.12.1 configured to do SSL termination as well as reverse proxy.
> Whenever there is a traffic spike (300 req/s > 1000 req/s, 3k active
> connections > 20k active connections), there is a corresponding spike in
> Nginx memory consumption. In this case 500M > 8G across 10 worker
> processes. What is interesting is that Nginx never seems to release this
> memory after the traffic returns to normal. Is this expected? What is
> Nginx using this memory for? Is there a configuration that will rotate
> the workers based on some metric in order to return memory to the system?
> 
> Requests per second:
> https://www.dropbox.com/s/cl2yqdxgqk2fn89/Screenshot%202018-03-14%2012.38.10.png?dl=0
> 
> Active connections:
> https://www.dropbox.com/s/s3j4oux77op3svo/Screenshot%202018-03-14%2012.44.14.png?dl=0
> 
> Total Nginx memory usage:
> https://www.dropbox.com/s/ihp5zxky2mgd2hr/Screenshot%202018-03-14%2012.44.43.png?dl=0
> 
> Thanks,
> 
> Matt
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.friscia at yale.edu  Thu Mar 15 20:04:13 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Thu, 15 Mar 2018 20:04:13 +0000
Subject: Proxy requests that return a 403 error - issue with sending headers
Message-ID: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu>

I hope I can explain this well enough to understand what I'm doing wrong.
The problem I am trying to solve is that I am making proxy requests to a
site that has IP restrictions.
Nginx is making a request to another Proxy URL rewrite server we use which then makes the request to the web application. So what happens without any work is that the second proxy server is making the request with the Nginx server IP address. So we made some changes to headers in Nginx to pass the client IP and then it would forward through the second proxy, make it to the web app and process the IP restriction. I have a block in my global settings that offers these header additions. add_header X-Origin-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Server $hostname; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Origin-Forwarded-For $remote_addr; proxy_set_header Accept-Encoding identity; It?s really the X-Origin? that I care about. But what seems to be happening is that for any normal request, the client IP address is being passed to the web app but when I make the request for a page that returns the 403 error because of the IP restriction, none of the headers above are being applied to the request. So the web app is never getting passed my custom headers. My question is if there is some sort of setting I am missing and I ask that making an assumption that the problem is that Nginx is making a request without sending headers, getting the 403 error and then all processing stops and I just get an access denied page. Any thoughts on how to handle this problem would be appreciated. I?ve tried numerous things and the root of the problem seems to be that Nginx is not making the full request. My next assumption is that this global configuration is to blame by having ?error? in the list proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504; Thanks, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jason.Whittington at equifax.com Thu Mar 15 20:40:20 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Thu, 15 Mar 2018 20:40:20 +0000 Subject: Proxy requests that return a 403 error - issue with sending headers In-Reply-To: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> References: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432B187EA@STLEISEXCMBX3.eis.equifax.com> add_header is used to add a header to a response. It?s not entirely clear to me that that?s what you want to do. But if so, add_header won?t run for non-200 return values by default. If you want to propagate the header for error conditions add the ?always? option: add_header X-Origin-Forwarded-For $remote_addr always; But this still feels weird to me so maybe I am missing something. Why would you want to add that header to the response (other than for debugging)? The equivalent proxy_set_header (line 4 in your example) seems like all you should need to me. Jason From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Friscia, Michael Sent: Thursday, March 15, 2018 3:04 PM To: nginx at nginx.org Subject: [IE] Proxy requests that return a 403 error - issue with sending headers I hope I can explain this well enough to understand what I?m doing wrong. The problem I am trying to solve is that I am making proxy requests to a site that has IP restrictions. Nginx is making a request to another Proxy URL rewrite server we use which then makes the request to the web application. 
So what happens without any work is that the second proxy server is making the request with the Nginx server IP address. So we made some changes to headers in Nginx to pass the client IP and then it would forward through the second proxy, make it to the web app and process the IP restriction. I have a block in my global settings that offers these header additions. add_header X-Origin-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Server $hostname; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Origin-Forwarded-For $remote_addr; proxy_set_header Accept-Encoding identity; It?s really the X-Origin? that I care about. But what seems to be happening is that for any normal request, the client IP address is being passed to the web app but when I make the request for a page that returns the 403 error because of the IP restriction, none of the headers above are being applied to the request. So the web app is never getting passed my custom headers. My question is if there is some sort of setting I am missing and I ask that making an assumption that the problem is that Nginx is making a request without sending headers, getting the 403 error and then all processing stops and I just get an access denied page. Any thoughts on how to handle this problem would be appreciated. I?ve tried numerous things and the root of the problem seems to be that Nginx is not making the full request. My next assumption is that this global configuration is to blame by having ?error? in the list proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504; Thanks, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.friscia at yale.edu Thu Mar 15 21:33:26 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Thu, 15 Mar 2018 21:33:26 +0000 Subject: Proxy requests that return a 403 error - issue with sending headers In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A432B187EA@STLEISEXCMBX3.eis.equifax.com> References: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> <995C5C9AD54A3C419AF1C20A8B6AB9A432B187EA@STLEISEXCMBX3.eis.equifax.com> Message-ID: <74213DBA-9D83-486C-919B-B2B18959E913@yale.edu> Even though it seems wrong, I?m still going to try adding ?always? to that just to test. But I agree that it is not likely going to make a difference since my goal is to send a value upstream and not apply it to the return from upstream. To answer the other, if I inspect the page that comes back with the 403 error, none of the headers I listed below appear. But if I inspect a page that comes back as 200, then the headers are present. The order of operation is this User make request to Nginx -> Nginx makes proxy request to our URL rewrite proxy -> the url rewrite proxy makes a request to the web application The goal is for the client IP to make it to the web application. 
It is clear that nginx needs to pass the header to the second proxy which will then pass it along to the web app. It seems that the proxy_set_header is what I need to add but it does not seem to be happening. Is there any explanation why the proxy_set_header may not actually get set? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 ? office (203) 931-5381 ? mobile http://web.yale.edu From: nginx on behalf of Jason Whittington Reply-To: "nginx at nginx.org" Date: Thursday, March 15, 2018 at 4:40 PM To: "nginx at nginx.org" Subject: RE: Proxy requests that return a 403 error - issue with sending headers add_header is used to add a header to a response. It?s not entirely clear to me that that?s what you want to do. But if so, add_header won?t run for non-200 return values by default. If you want to propagate the header for error conditions add the ?always? option: add_header X-Origin-Forwarded-For $remote_addr always; But this still feels weird to me so maybe I am missing something. Why would you want to add that header to the response (other than for debugging)? The equivalent proxy_set_header (line 4 in your example) seems like all you should need to me. Jason From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Friscia, Michael Sent: Thursday, March 15, 2018 3:04 PM To: nginx at nginx.org Subject: [IE] Proxy requests that return a 403 error - issue with sending headers I hope I can explain this well enough to understand what I?m doing wrong. The problem I am trying to solve is that I am making proxy requests to a site that has IP restrictions. Nginx is making a request to another Proxy URL rewrite server we use which then makes the request to the web application. So what happens without any work is that the second proxy server is making the request with the Nginx server IP address. So we made some changes to headers in Nginx to pass the client IP and then it would forward through the second proxy, make it to the web app and process the IP restriction. I have a block in my global settings that offers these header additions. add_header X-Origin-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Server $hostname; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Origin-Forwarded-For $remote_addr; proxy_set_header Accept-Encoding identity; It?s really the X-Origin? that I care about. But what seems to be happening is that for any normal request, the client IP address is being passed to the web app but when I make the request for a page that returns the 403 error because of the IP restriction, none of the headers above are being applied to the request. So the web app is never getting passed my custom headers. My question is if there is some sort of setting I am missing and I ask that making an assumption that the problem is that Nginx is making a request without sending headers, getting the 403 error and then all processing stops and I just get an access denied page. Any thoughts on how to handle this problem would be appreciated. I?ve tried numerous things and the root of the problem seems to be that Nginx is not making the full request. My next assumption is that this global configuration is to blame by having ?error? 
in the list proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504; Thanks, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Mar 16 08:24:05 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Mar 2018 08:24:05 +0000 Subject: Proxy requests that return a 403 error - issue with sending headers In-Reply-To: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> References: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> Message-ID: <20180316082405.GQ3280@daoine.org> On Thu, Mar 15, 2018 at 08:04:13PM +0000, Friscia, Michael wrote: Hi there, > I have a block in my global settings that offers these header additions. > proxy_set_header X-Origin-Forwarded-For $remote_addr; > But what seems to be happening is that for any normal request, the client IP address is being passed to the web app but when I make the request for a page that returns the 403 error because of the IP restriction, none of the headers above are being applied to the request. So the web app is never getting passed my custom headers. If I'm reading this right, you report that one request from the client to nginx is proxy_pass'ed to another server with the "proxy_set_header X-Origin-Forwarded-For" having the desired value; while another request from the client to nginx is proxy_pass'ed to the other server without the "proxy_set_header X-Origin-Forwarded-For" having the desired value. If that is the case: are the two requests to nginx handled in the same location{} in nginx? Are you aware of directive inheritance in nginx? Briefly, it is either "none" or "replacement", and never "addition". > My question is if there is some sort of setting I am missing and I ask that making an assumption that the problem is that Nginx is making a request without sending headers, getting the 403 error and then all processing stops and I just get an access denied page. tcpdump or the equivalent, (or nginx logs, or next-server logs), can show what actual headers and values are sent from nginx to the next server. The server returning the 403 should have logs saying why it is returning 403. With that information, you should be able to remove most guesswork. > Any thoughts on how to handle this problem would be appreciated. If it's not clear from the above steps, show your nginx config, and give one example request that does do what you want it to and one that does not do what you want it to. 
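For example, something like the following run on the nginx host will print
the request headers actually sent to an upstream listening on port 8080
(the port and interface are illustrative, and this only helps for
plain-text upstream connections):

tcpdump -nA -s0 -i any 'tcp dst port 8080'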
f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 16 09:26:45 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Mar 2018 09:26:45 +0000 Subject: redirect to a .php file with try_files if required .php file not found In-Reply-To: <1521113554.1930.132.camel@gmail.com> References: <1521113554.1930.132.camel@gmail.com> Message-ID: <20180316092645.GR3280@daoine.org> On Thu, Mar 15, 2018 at 01:32:34PM +0200, ST wrote: Hi there, > maybe I pasted too much of my config, basically the important line is: > > try_files $uri /virtual_new.php =404; The try_files documentation is at http://nginx.org/r/try_files It is not clear to me what specific response you want, to a particular request. I would expect that something like try_files $uri /virtual_new.php; or try_files $uri @php_missing; would be better; but it does depend on what you want. So: if I request /not-there.php should I get a http 301 redirect to /virtual_new.php, or should I get the http 200 response as if I had requested /virtual_new.php, or should I get a http 404 response but with content from /virtual_new.php, or should I get something else? And: if I request /not-there.php/extra, what response should I get? f -- Francis Daly francis at daoine.org From michael.friscia at yale.edu Fri Mar 16 11:20:31 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 16 Mar 2018 11:20:31 +0000 Subject: Proxy requests that return a 403 error - issue with sending headers In-Reply-To: <20180316082405.GQ3280@daoine.org> References: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> <20180316082405.GQ3280@daoine.org> Message-ID: First of all, your response caused me to review everything and I was able to solve it with you kicking me in the right direction. Yes, both the 403 and 200 requests are from the same location block. Yes, I?m well aware of the header inheritance but made a fatal mistake. I thought this only applied to add_header and not proxy_set_header so an old test configuration was getting in the way. That said, it still does not explain why I was seeing the correct headers on 200 requests and not 403. My configuration design is that I have a conf file with global settings which is where these headers are set. But inside the location block I had proxy_set_header Host $host; The reason this is not in the global config has to do with some rewrite rules used so I don?t want that in every server{} block. My fix took all the headers from the global config and pasted them just below that line and then everything works. So regardless that it is working now, why is it that on a 200 response the inheritance rule did not apply and when the response was 403 it did? I?m just going to change the way my configurations are setup but it seems like there?s a potential bug unless I?m just missing something really obvious when it comes to 4xx responses. Thanks again for pushing me into the right direction, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 3/16/18, 4:24 AM, "nginx on behalf of Francis Daly" wrote: On Thu, Mar 15, 2018 at 08:04:13PM +0000, Friscia, Michael wrote: Hi there, > I have a block in my global settings that offers these header additions. 
> proxy_set_header X-Origin-Forwarded-For $remote_addr; > But what seems to be happening is that for any normal request, the client IP address is being passed to the web app but when I make the request for a page that returns the 403 error because of the IP restriction, none of the headers above are being applied to the request. So the web app is never getting passed my custom headers. If I'm reading this right, you report that one request from the client to nginx is proxy_pass'ed to another server with the "proxy_set_header X-Origin-Forwarded-For" having the desired value; while another request from the client to nginx is proxy_pass'ed to the other server without the "proxy_set_header X-Origin-Forwarded-For" having the desired value. If that is the case: are the two requests to nginx handled in the same location{} in nginx? Are you aware of directive inheritance in nginx? Briefly, it is either "none" or "replacement", and never "addition". > My question is if there is some sort of setting I am missing and I ask that making an assumption that the problem is that Nginx is making a request without sending headers, getting the 403 error and then all processing stops and I just get an access denied page. tcpdump or the equivalent, (or nginx logs, or next-server logs), can show what actual headers and values are sent from nginx to the next server. The server returning the 403 should have logs saying why it is returning 403. With that information, you should be able to remove most guesswork. > Any thoughts on how to handle this problem would be appreciated. If it's not clear from the above steps, show your nginx config, and give one example request that does do what you want it to and one that does not do what you want it to. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=uNhxRAXKtcHgaF6JCJGEe8vpEdqxA7Cfh4cadBz_AP0&s=pESnOYNbk_E7ebVdyD0F714EEyjSd92-0YLVTvAFuM8&e= From francis at daoine.org Fri Mar 16 12:23:12 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 16 Mar 2018 12:23:12 +0000 Subject: Proxy requests that return a 403 error - issue with sending headers In-Reply-To: References: <1F21DF77-63C6-4BA7-BA78-D41C27736032@yale.edu> <20180316082405.GQ3280@daoine.org> Message-ID: <20180316122312.GS3280@daoine.org> On Fri, Mar 16, 2018 at 11:20:31AM +0000, Friscia, Michael wrote: Hi there, > First of all, your response caused me to review everything and I was able to solve it with you kicking me in the right direction. Good that you have it solved now. > Yes, I?m well aware of the header inheritance but made a fatal mistake. I thought this only applied to add_header and not proxy_set_header It applies to (almost) every directive -- either "inherited fully", or "not inherited at all". > So regardless that it is working now, why is it that on a 200 response the inheritance rule did not apply and when the response was 403 it did? I?m just going to change the way my configurations are setup but it seems like there?s a potential bug unless I?m just missing something really obvious when it comes to 4xx responses. > At the time nginx makes the proxy_pass request to upstream, it does not know what the response will be. So it cannot request differently based on the response. 
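To make the inheritance point concrete, a minimal sketch (the upstream
name is hypothetical):

server {
    proxy_set_header X-Origin-Forwarded-For $remote_addr;

    location / {
        # The presence of ANY proxy_set_header at this level means the
        # server-level proxy_set_header directives are not inherited at
        # all, so the header above must be repeated:
        proxy_set_header Host $host;
        proxy_set_header X-Origin-Forwarded-For $remote_addr;
        proxy_pass http://upstream_app;
    }
}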
Show a config that does not act the way you want, if you want help with that config. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Mar 16 21:53:32 2018 From: nginx-forum at forum.nginx.org (eustas) Date: Fri, 16 Mar 2018 17:53:32 -0400 Subject: What is canonical filter workflow Message-ID: Hello. I'm working on a zero-copy brotli compression filter. With zero-copy I wrap compressor output into a buffer and send it to next filter in a chain. The problem is - it is not clear how to properly wait until this buffer is released. If I just continue asking the next filter to do its work, until buffer is released, it is possible to get into infinite loop (see https://github.com/eustas/ngx_brotli/issues/9#issuecomment-373737792). If I return NGX_AGAIN in a case the next filter is not able to use more of the buffer data, the previous filter never gives a chance to continue compression (https://github.com/eustas/ngx_brotli/issues/9#issuecomment-371513645). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279070,279070#msg-279070 From smntov at gmail.com Sun Mar 18 09:39:47 2018 From: smntov at gmail.com (ST) Date: Sun, 18 Mar 2018 11:39:47 +0200 Subject: redirect to a .php file with try_files if required .php file not found In-Reply-To: <20180316092645.GR3280@daoine.org> References: <1521113554.1930.132.camel@gmail.com> <20180316092645.GR3280@daoine.org> Message-ID: <1521365987.17859.15.camel@gmail.com> Thank you for the detailed clarification! On Fri, 2018-03-16 at 09:26 +0000, Francis Daly wrote: > On Thu, Mar 15, 2018 at 01:32:34PM +0200, ST wrote: > > Hi there, > > > maybe I pasted too much of my config, basically the important line is: > > > > try_files $uri /virtual_new.php =404; > > The try_files documentation is at http://nginx.org/r/try_files > > It is not clear to me what specific response you want, to a particular request. > > I would expect that something like > > try_files $uri /virtual_new.php; > > or > > try_files $uri @php_missing; > > would be better; but it does depend on what you want. > > So: if I request /not-there.php should I get a http 301 redirect to > /virtual_new.php, or should I get the http 200 response as if I had > requested /virtual_new.php, or should I get a http 404 response but with > content from /virtual_new.php, or should I get something else? > > And: if I request /not-there.php/extra, what response should I get? > > f From maxim at nginx.com Sun Mar 18 10:03:10 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Sun, 18 Mar 2018 13:03:10 +0300 Subject: Fwd: [nginx] The gRPC proxy module. In-Reply-To: References: Message-ID: <876c1343-5f02-37b5-a186-cc4c041924f5@nginx.com> Hello, for those who don't follow nginx-devel at . We also published a blog post on this topic https://www.nginx.com/blog/nginx-1-13-10-grpc/ -------- Forwarded Message -------- Subject: [nginx] The gRPC proxy module. Date: Sat, 17 Mar 2018 20:08:27 +0000 From: Maxim Dounin Reply-To: nginx-devel at nginx.org To: nginx-devel at nginx.org details: http://hg.nginx.org/nginx/rev/2713b2dbf5bb branches: changeset: 7233:2713b2dbf5bb user: Maxim Dounin date: Sat Mar 17 23:04:24 2018 +0300 description: The gRPC proxy module. The module allows passing requests to upstream gRPC servers. The module is built by default as long as HTTP/2 support is compiled in. 
Example configuration: grpc_pass 127.0.0.1:9000; Alternatively, the "grpc://" scheme can be used: grpc_pass grpc://127.0.0.1:9000; Keepalive support is available via the upstream keepalive module. Note that keepalive connections won't currently work with grpc-go as it fails to handle SETTINGS_HEADER_TABLE_SIZE. To use with SSL: grpc_pass grpcs://127.0.0.1:9000; SSL connections use ALPN "h2" when available. At least grpc-go works fine without ALPN, so if ALPN is not available we just establish a connection without it. Tested with grpc-c++ and grpc-go. -- Maxim Konovalov From lagged at gmail.com Sun Mar 18 14:58:58 2018 From: lagged at gmail.com (Andrei) Date: Sun, 18 Mar 2018 16:58:58 +0200 Subject: Upstream connections using rotating local IPs In-Reply-To: References: Message-ID: Hello everyone, I have a server with 100+ IP addresses, and source IPs for outbound connections to remote upstreams are rotated in iptables using the method described at https://serverfault.com/questions/490854/rotating-outgoing-ips-using-iptables Is there a way to round robin through local IPs for remote upstream connections directly in nginx instead? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Mon Mar 19 05:04:31 2018 From: lagged at gmail.com (Andrei) Date: Mon, 19 Mar 2018 07:04:31 +0200 Subject: Upstream connections using rotating local IPs In-Reply-To: References: Message-ID: Got it working using map, set_rotate and proxy_bind. Thanks though. On Mar 18, 2018 16:58, "Andrei" wrote: > Hello everyone, > > I have a server with 100+ IP addresses, and source IPs for outbound > connections to remote upstreams are rotated in iptables using the method > described at https://serverfault.com/questions/490854/rotating- > outgoing-ips-using-iptables > > Is there a way to round robin through local IPs for remote upstream > connections directly in nginx instead? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfrankliu at gmail.com Mon Mar 19 08:51:55 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 19 Mar 2018 01:51:55 -0700 Subject: [nginx] The gRPC proxy module. In-Reply-To: <876c1343-5f02-37b5-a186-cc4c041924f5@nginx.com> References: <876c1343-5f02-37b5-a186-cc4c041924f5@nginx.com> Message-ID: <89D0FC0D-6FD5-43FA-9CB5-436A82896CC3@gmail.com> Congratulations on the grpc support! Since h2/h2c are used to talk to upstream grpc servers , does that mean we will also see proxy_pass support http/2? > On Mar 18, 2018, at 3:03 AM, Maxim Konovalov wrote: > > Hello, > > for those who don't follow nginx-devel at . > > We also published a blog post on this topic > > https://www.nginx.com/blog/nginx-1-13-10-grpc/ > > -------- Forwarded Message -------- > Subject: [nginx] The gRPC proxy module. > Date: Sat, 17 Mar 2018 20:08:27 +0000 > From: Maxim Dounin > Reply-To: nginx-devel at nginx.org > To: nginx-devel at nginx.org > > details: http://hg.nginx.org/nginx/rev/2713b2dbf5bb > branches: changeset: 7233:2713b2dbf5bb > user: Maxim Dounin > date: Sat Mar 17 23:04:24 2018 +0300 > description: > The gRPC proxy module. > > The module allows passing requests to upstream gRPC servers. The > module is built by default as long as HTTP/2 support is compiled in. > Example configuration: > > grpc_pass 127.0.0.1:9000; > > Alternatively, the "grpc://" scheme can be used: > > grpc_pass grpc://127.0.0.1:9000; > > Keepalive support is available via the upstream keepalive module. 
> Note that keepalive connections won't currently work with grpc-go as
> it fails to handle SETTINGS_HEADER_TABLE_SIZE.
> 
> To use with SSL:
> 
>     grpc_pass grpcs://127.0.0.1:9000;
> 
> SSL connections use ALPN "h2" when available. At least grpc-go
> works fine without ALPN, so if ALPN is not available we just
> establish a connection without it.
> 
> Tested with grpc-c++ and grpc-go.
> 
> --
> Maxim Konovalov
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From michael.friscia at yale.edu  Mon Mar 19 12:31:20 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Mon, 19 Mar 2018 12:31:20 +0000
Subject: Aborting malicious requests
Message-ID: 

Just a thought before I start crafting one. I am creating a location{}
block with the intention of populating it with a ton of requests I want to
terminate immediately with a 444 response. Before I start, I thought I'd
ask to see if anyone has a really good one I can use as a base.

For example, we don't serve PHP, so I'm starting with

location ~* \.php {
    return 444;
}

Then I can just include this into all my server blocks so I can manage the
aborts all in one place. This alone reduces errors in the logs
significantly. But now I will have to start adding in all the wordpress
stuff, then onto php myadmin, etc. I will end up with something like

location ~* (\.php|wp-admin|my-admin) {
    return 444;
}

I can imagine the chunk inside the parentheses is going to be pretty huge,
which is why I thought I'd reach out to see if anyone has one already.

Thanks,
-mike
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Mon Mar 19 12:47:33 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 19 Mar 2018 15:47:33 +0300
Subject: [nginx] The gRPC proxy module.
In-Reply-To: <89D0FC0D-6FD5-43FA-9CB5-436A82896CC3@gmail.com>
References: <876c1343-5f02-37b5-a186-cc4c041924f5@nginx.com> <89D0FC0D-6FD5-43FA-9CB5-436A82896CC3@gmail.com>
Message-ID: <20180319124732.GI77253@mdounin.ru>

Hello!

On Mon, Mar 19, 2018 at 01:51:55AM -0700, Frank Liu wrote:

> Congratulations on the grpc support! Since h2/h2c are used to
> talk to upstream grpc servers, does that mean we will also see
> proxy_pass support http/2?

There are no such plans. If you really want to use HTTP/2 to non-gRPC
upstream servers for some reason, you may try to use grpc_pass for this.
Note though that its primary target is gRPC, and hence there are various
differences from normal proxying, including:

- it doesn't use request buffering;

- it doesn't use response buffering and hence doesn't support caching /
  storing responses;

- it passes trailers returned in the response to the client, which might
  not be a good idea from a security point of view.

--
Maxim Dounin
http://mdounin.ru/

From lists at lazygranch.com  Mon Mar 19 13:36:41 2018
From: lists at lazygranch.com (Gary)
Date: Mon, 19 Mar 2018 06:36:41 -0700
Subject: Aborting malicious requests
In-Reply-To: 
Message-ID: 

An HTML attachment was scrubbed...
URL: From abiliojr at gmail.com Mon Mar 19 14:04:14 2018 From: abiliojr at gmail.com (Abilio Marques) Date: Mon, 19 Mar 2018 15:04:14 +0100 Subject: ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session In-Reply-To: References: Message-ID: Hi, After working a bit more on the issue, I also found that: - Using a new pair of key/certificate makes the problem not to show anymore. So, some files will make it fail, some files make it work. The files are of different length, so it seems to be correlated to that. - Using LD_PRELOAD with an "empty" (as in no C code) so file makes the problem disappear. I discover this while trying to hook the calls to OpenSSL, just to discover that even if I removed all my code, the problem will go away. As there are at least 3 different ways to make it disappear, looks to me that is not directly related to SSL session, but to something completely different. I cannot run valgrind on the MIPS hardware (no enough RAM), and I've been trying to reproduce it on QEMU, to no avail. Any ideas on how to proceed? Do you think Valgrind will help at all? Any other insights? On Thu, Mar 8, 2018 at 12:16 PM, Abilio Marques wrote: > Using NGINX 1.12.2 on MIPS (haven't tested on x86), if I set: > > ssl_session_cache shared:SSL:1m; # it also fails with 10m > > > And the client reestablishes the connection, it > gets: net::ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session. > > Has anyone seen anything like this? > > > More detail: > > This was tested on 1.12.2, on a MIPS CPU, using OpenSSL 1.0.2j, and built > by gcc 4.8.3 (OpenWrt/Linaro GCC 4.8-2014.04 r47070). > > Interesting portion of my configuration file: > > server { > listen 443 ssl; > > ssl_certificate /etc/ssl/certs/bridge.cert.pem; > ssl_certificate_key /etc/ssl/private/bridge.key.pem; > > ssl_protocols TLSv1.2; > ssl_prefer_server_ciphers on; > ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256; > ssl_ecdh_curve prime256v1; > > ssl_session_timeout 24h; > ssl_session_tickets on; > ssl_session_cache shared:SSL:1m; # set to 10m, still fails, remove, > the problem seems to disappear > > keepalive_timeout 1s; # reduced during troubleshooting to make it > trigger easily > keepalive_requests 1; # reduced during troubleshooting to make it > trigger easily > > include apiv1.conf; # where all the location rules are > } > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.friscia at yale.edu Mon Mar 19 14:16:48 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Mon, 19 Mar 2018 14:16:48 +0000 Subject: Aborting malicious requests In-Reply-To: References: , Message-ID: <427C420A-20C0-4A87-9183-9A1FD1A54CC1@yale.edu> Thank you Gary, I really appreciate you moving me in the right direction. Sent from my iPhone with all its odd spell checks On Mar 19, 2018, at 9:36 AM, Gary > wrote: Your basic idea is right, but what you want to do is use a "map." I will follow up with more details when I can pull the code off my server. I 444 a number of services that I don't use. I have a script to find the IP addresses of those that trigger a 444 from access.log. If they come from a data center, hosting service, etc., they get on a blocking list for my firewall. I block the entire IP space. From: michael.friscia at yale.edu Sent: March 19, 2018 5:31 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Aborting malicious requests Just a thought before I start crafting one. 
I am creating a location{} block with the intention of populating it with a ton of requests I want to terminate immediately with a 444 response. Before I start, I thought I?d ask to see if anyone has a really good one I can use as a base. For example, we don?t serve PHP so I?m starting with Location ~* .php { Return 444; } Then I can just include this into all my server blocks so I can manage the aborts all in one place. This alone reduces errors in the logs significantly. But now I will have to start adding in all the wordpress stuff, then onto php myadmin, etc. I will end up with something like Location ~* (.php|wp-admin|my-admin) { Return 444; } I can imagine the chunk inside the parenthesis is going to be pretty huge which is why I thought I?d reach out to see if anyone has one already. Thanks, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=MMFd1g-YpouXJolEFUG9wADYPEA1sPlvQ_GvUe4zJHk&s=JRurMbCby9FTsTmkiXgHZcPzDsixrqBHKRyZb2qSny4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.smith at acquia.com Mon Mar 19 14:38:30 2018 From: matthew.smith at acquia.com (Matthew Smith) Date: Mon, 19 Mar 2018 14:38:30 +0000 Subject: Nginx 1.12.1 Memory Consumption In-Reply-To: <72AEF1D0-21EF-40A7-AF7C-C3E73EA3807B@me.com> References: <72AEF1D0-21EF-40A7-AF7C-C3E73EA3807B@me.com> Message-ID: Hello, The host has 30G total memory. Nginx usage is being measured by summing the Pss values from /proc/$pid/smaps for all worker processes. Do you have any suggestions for differentiating between the two issues that might prevent memory from being returned to the system? Thanks! On Thu, Mar 15, 2018 at 1:06 PM Peter Booth wrote: > Two questions: > > 1. how are you measuring memory consumption? > 2. How much physical memory do you have on your host? > > > Assuming that you are running on Linux, can you use pidstat -r -t -u -v > -w -C ?nginx? > to confirm the process?s memory consumption, > > and cat /var/meminfo to view a detailed description of how memory is > being used onto entire host. > > > On Mar 14, 2018, at 1:05 PM, Matthew Smith > wrote: > > Hello, > > I have encountered what I consider to be an interesting behavior. We have > Nginx 1.12.1 configured to do SSL termination as well as reverse proxy. > Whenever there is a traffic spike (300 req/s > 1000 req/s, 3k active > connections > 20k active connections), there is a corresponding spike in > Nginx memory consumption. In this case 500M > 8G across 10 worker > processes. What is interesting is that Nginx never seems to release this > memory after the traffic returns to normal. Is this expected? What is Nginx > using this memory for? Is there a configuration that will rotate the > workers based on some metric in order to return memory to the system? 
> > Requests per second: > > https://www.dropbox.com/s/cl2yqdxgqk2fn89/Screenshot%202018-03-14%2012.38.10.png?dl=0 > > Active connections: > > https://www.dropbox.com/s/s3j4oux77op3svo/Screenshot%202018-03-14%2012.44.14.png?dl=0 > > Total Nginx memory usage: > > https://www.dropbox.com/s/ihp5zxky2mgd2hr/Screenshot%202018-03-14%2012.44.43.png?dl=0 > > Thanks, > > Matt > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jason.Whittington at equifax.com Mon Mar 19 14:44:58 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Mon, 19 Mar 2018 14:44:58 +0000 Subject: Aborting malicious requests In-Reply-To: <427C420A-20C0-4A87-9183-9A1FD1A54CC1@yale.edu> References: , <427C420A-20C0-4A87-9183-9A1FD1A54CC1@yale.edu> Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432B1A4E3@STLEISEXCMBX3.eis.equifax.com> Have you considered using something like mod_security to manage this sort of thing? From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Friscia, Michael Sent: Monday, March 19, 2018 9:17 AM To: nginx at nginx.org Subject: [IE] Re: Aborting malicious requests Thank you Gary, I really appreciate you moving me in the right direction. Sent from my iPhone with all its odd spell checks On Mar 19, 2018, at 9:36 AM, Gary > wrote: Your basic idea is right, but what you want to do is use a "map." I will follow up with more details when I can pull the code off my server. I 444 a number of services that I don't use. I have a script to find the IP addresses of those that trigger a 444 from access.log. If they come from a data center, hosting service, etc., they get on a blocking list for my firewall. I block the entire IP space. From: michael.friscia at yale.edu Sent: March 19, 2018 5:31 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Aborting malicious requests Just a thought before I start crafting one. I am creating a location{} block with the intention of populating it with a ton of requests I want to terminate immediately with a 444 response. Before I start, I thought I?d ask to see if anyone has a really good one I can use as a base. For example, we don?t serve PHP so I?m starting with Location ~* .php { Return 444; } Then I can just include this into all my server blocks so I can manage the aborts all in one place. This alone reduces errors in the logs significantly. But now I will have to start adding in all the wordpress stuff, then onto php myadmin, etc. I will end up with something like Location ~* (.php|wp-admin|my-admin) { Return 444; } I can imagine the chunk inside the parenthesis is going to be pretty huge which is why I thought I?d reach out to see if anyone has one already. 
Thanks, -mike ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=DwICAg&c=cjytLXgP8ixuoHflwc-poQ&r=wvXEDjvtDPcv7AlldT5UvDx32KXBEM6um_lS023SJrs&m=MMFd1g-YpouXJolEFUG9wADYPEA1sPlvQ_GvUe4zJHk&s=JRurMbCby9FTsTmkiXgHZcPzDsixrqBHKRyZb2qSny4&e= This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Mar 19 14:58:04 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 19 Mar 2018 17:58:04 +0300 Subject: Nginx 1.12.1 Memory Consumption In-Reply-To: References: <72AEF1D0-21EF-40A7-AF7C-C3E73EA3807B@me.com> Message-ID: <0492222c-ef89-26db-f614-f7be54b4b835@nginx.com> Hi Matthew, On 19/03/2018 17:38, Matthew Smith wrote: > Hello, > > The host has 30G total memory. Nginx usage is being measured by > summing the Pss values from /proc/$pid/smaps for all worker processes. > > Do you have any suggestions for differentiating between the two > issues that might prevent memory from being returned to the system? > Did you read Maxim Dounin's comment about OpenSSL freelists issue? http://mailman.nginx.org/pipermail/nginx/2018-March/055869.html -- Maxim Konovalov From mdounin at mdounin.ru Mon Mar 19 15:35:12 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Mar 2018 18:35:12 +0300 Subject: ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session In-Reply-To: References: Message-ID: <20180319153512.GP77253@mdounin.ru> Hello! On Mon, Mar 19, 2018 at 03:04:14PM +0100, Abilio Marques wrote: > After working a bit more on the issue, I also found that: > > - Using a new pair of key/certificate makes the problem not to show > anymore. So, some files will make it fail, some files make it work. The > files are of different length, so it seems to be correlated to that. > - Using LD_PRELOAD with an "empty" (as in no C code) so file makes the > problem disappear. I discover this while trying to hook the calls to > OpenSSL, just to discover that even if I removed all my code, the problem > will go away. > > > As there are at least 3 different ways to make it disappear, looks to me > that is not directly related to SSL session, but to something completely > different. I cannot run valgrind on the MIPS hardware (no enough RAM), and > I've been trying to reproduce it on QEMU, to no avail. > > Any ideas on how to proceed? Do you think Valgrind will help at all? Any > other insights? As previously suggested, first of all you may want to check your build, see here: http://mailman.nginx.org/pipermail/nginx/2018-March/055829.html Check "nginx -V" output. If it contains something like "crossbuild", then recompile nginx yourself, without any 3rd party patches, ideally - on the host itself (a virtual machine with the same OS will be ok too), and check if the problem persists. 
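A rough sketch of that check and of a clean from-source rebuild (the
version numbers and paths are illustrative):

$ nginx -V                  # look for "crossbuild" or third-party
                            # patches in the configure arguments
$ cd nginx-1.12.2
$ ./configure --with-http_ssl_module --with-openssl=../openssl-1.0.2j
$ make && make install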
Also, it might be a good idea to play with different OpenSSL versions
(including compiling them statically into nginx using the "--with-openssl"
configure option) and different compilers.

--
Maxim Dounin
http://mdounin.ru/

From peter_booth at me.com  Mon Mar 19 17:11:21 2018
From: peter_booth at me.com (Peter Booth)
Date: Mon, 19 Mar 2018 13:11:21 -0400
Subject: Nginx 1.12.1 Memory Consumption
In-Reply-To: 
References: <72AEF1D0-21EF-40A7-AF7C-C3E73EA3807B@me.com>
Message-ID: 

I'd use wrk2 or httperf to recreate a spike that hits an HTTP endpoint. If
you don't see a spike, but do see one with HTTPS, then you know SSL is one
factor.

It's also interesting that this happens at around 23000 connections. If
you reduce the worker count to one or two and still see max connections
around 23000, then it looks like another factor is TCP resources.

Sent from my iPhone

> On Mar 19, 2018, at 10:38 AM, Matthew Smith wrote:
> 
> Hello,
> 
> The host has 30G total memory. Nginx usage is being measured by summing
> the Pss values from /proc/$pid/smaps for all worker processes.
> 
> Do you have any suggestions for differentiating between the two issues
> that might prevent memory from being returned to the system?
> 
> Thanks!
> 
>> On Thu, Mar 15, 2018 at 1:06 PM Peter Booth wrote:
>> Two questions:
>> 
>> 1. How are you measuring memory consumption?
>> 2. How much physical memory do you have on your host?
>> 
>> Assuming that you are running on Linux, can you use
>> 
>>     pidstat -r -t -u -v -w -C "nginx"
>> 
>> to confirm the process's memory consumption, and
>> 
>>     cat /proc/meminfo
>> 
>> to view a detailed description of how memory is being used on the
>> entire host.
>> 
>>> On Mar 14, 2018, at 1:05 PM, Matthew Smith wrote:
>>> 
>>> Hello,
>>> 
>>> I have encountered what I consider to be an interesting behavior. We
>>> have Nginx 1.12.1 configured to do SSL termination as well as reverse
>>> proxy. Whenever there is a traffic spike (300 req/s > 1000 req/s, 3k
>>> active connections > 20k active connections), there is a corresponding
>>> spike in Nginx memory consumption. In this case 500M > 8G across 10
>>> worker processes. What is interesting is that Nginx never seems to
>>> release this memory after the traffic returns to normal. Is this
>>> expected? What is Nginx using this memory for? Is there a configuration
>>> that will rotate the workers based on some metric in order to return
>>> memory to the system?
>>> 
>>> Requests per second:
>>> https://www.dropbox.com/s/cl2yqdxgqk2fn89/Screenshot%202018-03-14%2012.38.10.png?dl=0
>>> 
>>> Active connections:
>>> https://www.dropbox.com/s/s3j4oux77op3svo/Screenshot%202018-03-14%2012.44.14.png?dl=0
>>> 
>>> Total Nginx memory usage:
>>> https://www.dropbox.com/s/ihp5zxky2mgd2hr/Screenshot%202018-03-14%2012.44.43.png?dl=0
>>> 
>>> Thanks,
>>> 
>>> Matt
>>> 
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at lazygranch.com  Mon Mar 19 17:43:28 2018
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Mon, 19 Mar 2018 10:43:28 -0700
Subject: Aborting malicious requests
In-Reply-To: 
References: 
Message-ID: <20180319104328.4dfd2d13.lists@lazygranch.com>

On Mon, 19 Mar 2018 12:31:20 +0000
"Friscia, Michael" wrote:

> Just a thought before I start crafting one. I am creating a
> location{} block with the intention of populating it with a ton of
> requests I want to terminate immediately with a 444 response. Before
> I start, I thought I'd ask to see if anyone has a really good one I
> can use as a base.
> 
> For example, we don't serve PHP, so I'm starting with
> 
> location ~* \.php {
>     return 444;
> }
> 
> Then I can just include this into all my server blocks so I can
> manage the aborts all in one place. This alone reduces errors in the
> logs significantly. But now I will have to start adding in all the
> wordpress stuff, then onto php myadmin, etc. I will end up with
> something like
> 
> location ~* (\.php|wp-admin|my-admin) {
>     return 444;
> }
> 
> I can imagine the chunk inside the parentheses is going to be pretty
> huge, which is why I thought I'd reach out to see if anyone has one
> already.
> 
> Thanks,
> -mike
> 

What follows is how I block requests that shouldn't be made in normal
operation. I use a similar scheme for user agents and referrers. You
should block referrers from spam/porn sites, since they can trigger some
browser blocking plugins (AKA give you a bad reputation). The procedure
for those is similar to the 444-returning procedure I am about to outline,
but you should 403 them or use something other than 444. Remember that 444
is a no-reply method, which is technically not kosher on the internet
(though it makes sense in this application).

Here is the procedure. In nginx.conf, in the http section, add this line:

include /etc/nginx/mapbaduri;

In the nginx.conf server section, add this line:

if ($bad_uri) { return 444; }

This is the contents of the file mapbaduri that you need to create. It
creates $bad_uri, used in the conditional statement in nginx.conf. If you
actually use any of these resources, then obviously don't put them in the
list. You can also accidentally match patterns in intended requests, so
use caution. Most I created from actual requests, though a few I found
suggested on the interwebs.
map $request_uri $bad_uri {
    default 0;
    /cms 1;
    /mscms 1;
    ~*\.asp 1;
    ~*\.cfg 1;
    ~*\.cgi 1;
    ~*\.json 1;
    ~*\.php 1;
    ~*\.ssh 1;
    ~*\.xml 1;
    ~*\.git 1;
    ~*\.svn 1;
    ~*\.hg 1;
    ~*docs 1;
    ~*id_dsa 1;
    ~*issmall 1;
    ~*moodletreinar 1;
    ~*new_gb 1;
    ~*tiny_mce 1;
    ~*vendor 1;
    ~*web 1;
    ~*_backup 1;
    ~*_core 1;
    ~*_sub 1;
    ~*authority 1;
    ~*/jmx 1;
    ~*/struts 1;
    ~*/action 1;
    ~*/lib 1;
    ~*/career 1;
    ~*/market 1;
    ~*elfinder1 1;
    ~*/assets 1;
    ~*place1 1;
    ~*/backup 1;
    ~*zecmd 1;
    ~*/mysql 1;
    ~*/sql 1;
    ~*/shop 1;
    ~*/plus 1;
    ~*/forum 1;
    /engine 1;
    ~*license.txt 1;
    ~*/includes 1;
    ~*/sites 1;
    ~*/plugins 1;
    ~*/jeecms 1;
    ~*gluten 1;
    ~*/admin 1;
    ~*/invoker 1;
    ~*/blog 1;
    ~*xmlrpc 1;
    ~*/wordpress 1;
    ~*/hndUnblock.cgi 1;
    ~*/test/ 1;
    ~*/cgi 1;
    ~*/plus 1;
    ~/wp/ 1;
    ~/wp-admin/ 1;
    ~*/proxy 1;
    ~*/wp-login.php 1;
    ~*/js 1;
    ~*/usr 1;
    ~*/user 1;
    ~*/var 1;
    ~*/bin/ 1;
    ~*/template 1;
    ~*/components 1;
    ~*/editor 1;
    ~*/common 1;
    ~*/include 1;
    ~*/manage 1;
    ~*/script 1;
    ~*/system 1;
    ~*/upload 1;
    ~*/utility 1;
    ~*/bei 1;
    ~*/ebak 1;
    ~*piwik 1;
    ~*muieblackcat 1;
    ~*pma 1;
    ~*apache 1;
    ~*cpanel 1;
    ~*/phpmyadmin 1;
    ~*clientapi\.ipip\.net 1;
    ~*freeapi\.ipip\.net 1;
    ~*/api.ipip.net 1;
    ~*/joomla 1;
    ~^/www 1;
    ~*/flashfxp 1;
    ~*w00tw00t 1;
    ~*/downloader 1;
    ~*/category 1;
    ~*netcat 1;
}
> 

From nginx-forum at forum.nginx.org  Mon Mar 19 23:05:43 2018
From: nginx-forum at forum.nginx.org (mblancett)
Date: Mon, 19 Mar 2018 19:05:43 -0400
Subject: nginx erroneously reports period character as illegal in request headers
Message-ID: <40202b667d63500d6aebc56e7971318c.NginxMailingListEnglish@forum.nginx.org>

Hello -

Nginx is reporting invalid incoming headers for RFC-compliant headers that
use a '.' (meaning, a period) within the name. As an example, I am curling
to a very basic proxy setup while tailing the error log.

The following is valid:

# curl -vvvH "a-b-c: 999" localhost:81/test/v01
* About to connect() to localhost port 81 (#0)
*   Trying ::1... connected
* Connected to localhost (::1) port 81 (#0)
> GET /test/v01 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:81
> Accept: */*
> a-b-c: 999
> 
< HTTP/1.1 204 No Content
< Server: nginx
< Date: Mon, 19 Mar 2018 22:58:35 GMT
< Content-Length: 0
< Connection: keep-alive
< Cache-Control: max-age=0, no-store
< 
* Connection #0 to host localhost left intact
* Closing connection #0
2018/03/19 22:58:35 [info] 432544#432544: *526 client ::1 closed keepalive connection

However, a very similar request using a period within the header name:

[root at dtord01stg02p ~]# curl -vvvH "a.b.c: 999" localhost:81/test/v01
* About to connect() to localhost port 81 (#0)
*   Trying ::1...
connected
* Connected to localhost (::1) port 81 (#0)
> GET /test/v01 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:81
> Accept: */*
> a.b.c: 999
> 
2018/03/19 22:58:38 [info] 432544#432544: *528 client sent invalid header line: "a.b.c: 999" while reading client request headers, client: ::1, server: , request: "GET /test/v01 HTTP/1.1", host: "localhost:81"
< HTTP/1.1 204 No Content
< Server: nginx
< Date: Mon, 19 Mar 2018 22:58:38 GMT
< Content-Length: 0
< Connection: keep-alive
< Cache-Control: max-age=0, no-store
< 
* Connection #0 to host localhost left intact
* Closing connection #0
2018/03/19 22:58:38 [info] 432544#432544: *528 client ::1 closed keepalive connection

I am aware that I can allow illegal requests, but standards compliance is
a strict requirement in our enterprise.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279116,279116#msg-279116

From mdounin at mdounin.ru  Tue Mar 20 13:00:30 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 20 Mar 2018 16:00:30 +0300
Subject: nginx erroneously reports period character as illegal in request headers
In-Reply-To: <40202b667d63500d6aebc56e7971318c.NginxMailingListEnglish@forum.nginx.org>
References: <40202b667d63500d6aebc56e7971318c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180320130030.GX77253@mdounin.ru>

Hello!

On Mon, Mar 19, 2018 at 07:05:43PM -0400, mblancett wrote:

> Nginx is reporting invalid incoming headers with RFC-compliant headers that
> use a '.' (meaning, a period) within the name.

Yes. Because, while being RFC-compliant, these headers cause various
problems, some of which are listed here:

http://mailman.nginx.org/pipermail/nginx/2010-January/018271.html

As such, nginx reports these headers as invalid and ignores them. Details
on which headers are considered valid can be found here:

http://nginx.org/r/ignore_invalid_headers

[...]

> I am aware that I can allow illegal requests, but standards compliance is a
> strict requirement in our enterprise.

No, you can't allow illegal requests. You can, however, switch off
"ignore_invalid_headers", so nginx will accept and use headers with any
characters.

--
Maxim Dounin
http://mdounin.ru/

From michael.friscia at yale.edu  Tue Mar 20 13:03:09 2018
From: michael.friscia at yale.edu (Friscia, Michael)
Date: Tue, 20 Mar 2018 13:03:09 +0000
Subject: Aborting malicious requests
In-Reply-To: <20180319104328.4dfd2d13.lists@lazygranch.com>
References: <20180319104328.4dfd2d13.lists@lazygranch.com>
Message-ID: 

This is great, thank you again, this is a huge jumpstart!
___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu

On 3/19/18, 1:43 PM, "nginx on behalf of lists at lazygranch.com" wrote:

On Mon, 19 Mar 2018 12:31:20 +0000
"Friscia, Michael" wrote:

> Just a thought before I start crafting one. I am creating a
> location{} block with the intention of populating it with a ton of
> requests I want to terminate immediately with a 444 response. Before
> I start, I thought I'd ask to see if anyone has a really good one I
> can use as a base.
> 
> For example, we don't serve PHP, so I'm starting with
> 
> location ~* \.php {
>     return 444;
> }
> 
> Then I can just include this into all my server blocks so I can
> manage the aborts all in one place. This alone reduces errors in the
> logs significantly.
But now I will have to start adding in all the > wordpress stuff, then onto phpMyAdmin, etc. I will end up with > something like > > location ~* (\.php|wp-admin|my-admin) { > return 444; > } > > I can imagine the chunk inside the parentheses is going to be pretty > huge, which is why I thought I'd reach out to see if anyone has one > already. > > Thanks, > -mike > What follows is how I block requests that shouldn't be made with normal operation. I use a similar scheme for user agents and referrals. You should block referrals from spam/porn sites since they can trigger some browser blocking plugins. (AKA give you a bad reputation.) The procedure is similar to the returning-444 procedure I am about to outline, but you should 403 them or something other than 444. Remember 444 is a no-reply response, which is technically not kosher on the internet (though it makes sense in this application). Here is the procedure: In nginx.conf in the http section, add this line: include /etc/nginx/mapbaduri; In the nginx.conf server section, add this line: if ($bad_uri) { return 444; } This is the contents of the file mapbaduri that you need to create. It creates $bad_uri, used in the conditional statement in nginx.conf. If you actually use any of these resources, then obviously don't put them in the list. You can also accidentally match patterns in intended requests, so use caution. Most I created by actual request, though a few I found suggested on the interwebs. map $request_uri $bad_uri { [... same map file as shown above ...] } > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Mar 20 16:12:12 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Mar 2018 19:12:12 +0300 Subject: nginx-1.13.10 Message-ID: <20180320161212.GE77253@mdounin.ru> Changes with nginx 1.13.10 20 Mar 2018 *) Feature: the "set" parameter of the "include" SSI directive now allows writing arbitrary responses to a variable; the "subrequest_output_buffer_size" directive defines maximum response size.
*) Feature: now nginx uses clock_gettime(CLOCK_MONOTONIC) if available, to avoid timeouts being incorrectly triggered on system time changes. *) Feature: the "escape=none" parameter of the "log_format" directive. Thanks to Johannes Baiter and Calin Don. *) Feature: the $ssl_preread_alpn_protocols variable in the ngx_stream_ssl_preread_module. *) Feature: the ngx_http_grpc_module. *) Bugfix: in memory allocation error handling in the "geo" directive. *) Bugfix: when using variables in the "auth_basic_user_file" directive a null character might appear in logs. Thanks to Vadim Filimonov. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Mar 20 17:31:36 2018 From: nginx-forum at forum.nginx.org (mblancett) Date: Tue, 20 Mar 2018 13:31:36 -0400 Subject: nginx erroneously reports period character as illegal in request headers In-Reply-To: <20180320130030.GX77253@mdounin.ru> References: <20180320130030.GX77253@mdounin.ru> Message-ID: <6d4f0b5658062998464084e1990509ee.NginxMailingListEnglish@forum.nginx.org> To clarify, by 'illegal' I meant non-compliant. These headers _are_ used: we have run into them in production, coming from clients, and some time on Stack Overflow shows they are becoming more and more common. They are also RFC-compliant, and competing products support them. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279116,279132#msg-279132 From kworthington at gmail.com Tue Mar 20 18:31:25 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 20 Mar 2018 14:31:25 -0400 Subject: [nginx-announce] nginx-1.13.10 In-Reply-To: <20180320161219.GF77253@mdounin.ru> References: <20180320161219.GF77253@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.10 for Windows https://kevinworthington.com/nginxwin11310 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 20, 2018 at 12:12 PM, Maxim Dounin wrote: > [... full 1.13.10 changelog as above ...] > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pablo at pablo.com.mx Tue Mar 20 19:25:35 2018 From: pablo at pablo.com.mx (Pablo Fischer) Date: Tue, 20 Mar 2018 12:25:35 -0700 Subject: [PATCH] Send Connection: close for draining. Message-ID: Howdy, First off, sorry if the C code is a bit ugly; it's been a while since I did some C with nginx. So I have some very long running keep-alive connections, and although keepalive_timeout helps, it is just "not enough" or "fast enough" when I need to drain connections. I looked at the headers-more plugin and then noted that nginx was still adding the Connection header via ngx_http_header_filter. So I came up with a patch, feel free to use it. The patch does two things: - Adds a new variable ($disable_keep_alive_now). - Disables keep-alive when that variable tells nginx to (i.e., nginx starts sending Connection: close). The flow is a little bit like this: 1. You set $disable_keep_alive_now in your nginx location or wherever you want. 2. If the value is "yes" (set $disable_keep_alive_now "yes") then nginx will start sending the Connection: close header rather than Connection: keep-alive. For example: location / { if (!-f "healthcheck/path") { set $disable_keep_alive_now "yes"; } } That way, if you take your host out of service by removing a specific file, you just set the variable and nginx will start sending close, and thus connections will start draining faster. I opted for a variable rather than a setting so that I still let nginx manage the keepalive timeouts, but at the same time I have a "lever" I can pull dynamically to change the Connection header. When I googled for it I noted that there were a few people asking how to do something similar, so that's why I opted to publish the patch. Ah, the url is: https://github.com/pfischermx/nginx-keepalive-disable-patch Thanks -- Pablo From lists at lazygranch.com Wed Mar 21 02:49:35 2018 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 20 Mar 2018 19:49:35 -0700 Subject: Aborting malicious requests In-Reply-To: References: <20180319104328.4dfd2d13.lists@lazygranch.com> Message-ID: <20180320194935.3a10caf7.lists@lazygranch.com> On Tue, 20 Mar 2018 13:03:09 +0000 "Friscia, Michael" wrote: > This is great, thank you again, this is a huge jumpstart! Per NIST best practices, you should limit the HTTP verbs that you allow. A very simple website can run on just GET and HEAD. Here is how you 444 clients trying, for example, to POST to your website. In this case, only GET and HEAD are allowed. if ($request_method !~ ^(GET|HEAD)$ ) { return 444; } You might as well trap bad agents - basically whatever isn't a browser. I found a list on github and have been adding new ones as I get pestered. https://paste.fedoraproject.org/paste/FI-IRICSJy1SR5mwBZxVDQ/ I called this file mapbadagentslarge. Use the same basic scheme. This list is overkill, but it doesn't seem to slow down nginx. What you want to avoid are the scrapers like nutch. if ($badagent) { return 444; } I also block bad referrals - porn sites, for instance. If a bad site links to your site, at least you can return a 403 (not 444) and google won't consider the link in its algorithm. You can look at them in an incognito browser window, preferably in private; I've clicked on the occasional odd referral only to have porn pop up my screen while at a coffee shop. Left unblocked, these referrals can lower your google rank.
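Both $badagent and $bad_referer below follow the same map scheme as $bad_uri above; a minimal sketch, where the patterns are placeholders only and not the real lists (those are at the paste links):

map $http_user_agent $badagent {
    default 0;
    ~*nutch 1;     # scraper mentioned above
    ~*masscan 1;   # placeholder pattern
}

map $http_referer $bad_referer {
    default 0;
    ~*porn 1;      # placeholder pattern
}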
https://paste.fedoraproject.org/paste/6ZLa10-4L9KocFNJiNG~pw/ if ($bad_referer) { return 403; } If you are using encryption AND you are mapping http requests to https, you should do these maps in both the http and https server blocks. It doesn't make sense to go through the encryption process just to tell the IP to take a hike. What you do with the 444 entries in the access.log is up to you. You can do nothing and probably be fine. I have scripts to get the bad IPs, and if they have no "eyes", I block them in the firewall. Determining if they have no eyes is time consuming. You can feed the IP to ip2location.com. A few of the IPs assigned to data centers really go to ISPs. ISPs have eyes, so you don't want to block them. You can get the IP space assigned to the entity with bgp.he.net. From nginx-forum at forum.nginx.org Wed Mar 21 09:52:20 2018 From: nginx-forum at forum.nginx.org (Grzegorz Ćwikliński) Date: Wed, 21 Mar 2018 05:52:20 -0400 Subject: Files still on disc after inactive time In-Reply-To: <20180228134147.GN89840@mdounin.ru> References: <20180228134147.GN89840@mdounin.ru> Message-ID: <071ebe1767dbbc8667df790bee5e05ca.NginxMailingListEnglish@forum.nginx.org> Hello, As you said before, the killing of the nginx worker is connected with something external to nginx - the Linux kernel Out Of Memory (OOM) killer. Because nginx is using too much memory, its OOM killer score is about 800-900 points (where 1000 is the maximum), which is why the kernel kills the nginx worker; after that, files in the tmp directory are not deleted and the disk fills up. About configuration: we store a 40GB cache, because the files are about 700MB to 1GB each and are stored for 1d. Our machines have 24GB, and at some point nginx uses 22GB-23GB; after that the OOM killer starts and kills our worker. My question is why nginx is using so much RAM, and how to prevent nginx from using so much RAM. Or other possibilities to stop killing nginx worker. Best Regards, Grzegorz Ćwikliński Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278589,279138#msg-279138 From mdounin at mdounin.ru Wed Mar 21 15:41:00 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Mar 2018 18:41:00 +0300 Subject: Files still on disc after inactive time In-Reply-To: <071ebe1767dbbc8667df790bee5e05ca.NginxMailingListEnglish@forum.nginx.org> References: <20180228134147.GN89840@mdounin.ru> <071ebe1767dbbc8667df790bee5e05ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180321154100.GP77253@mdounin.ru> Hello! On Wed, Mar 21, 2018 at 05:52:20AM -0400, Grzegorz Ćwikliński wrote: [...] Ok, so the behaviour observed with cache files not being removed is an expected result of the OOM killer activity. No need to investigate it any further. Instead, let's focus on what causes the problem - RAM usage. > My question is why nginx is using so much RAM, and how to prevent nginx from
Or other possibilities to stop killing nginx worker. Amount of RAM nginx can use depends on the configuration and a particular workload. As per the only configuration snippet given in this thread, nginx is expected to use at least 4G of shared memory for cache keys zone. In typical configurations, most of RAM is spent on various per-connection buffers - including client_header_buffer, large_client_header_buffers, proxy_buffers, proxy_buffer_size, output_buffers, and so on. Even with relatively small defaults these can be a lot if there are thousands of connections. What exactly happens in your case requires detailed investigation, and unlikely it is something that can be done at least without such basic things as "nginx -V" output and full nginx configuration. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Mar 22 10:15:56 2018 From: nginx-forum at forum.nginx.org (manjusv) Date: Thu, 22 Mar 2018 06:15:56 -0400 Subject: no live upstreams and NO previous error In-Reply-To: <52766bba1d689c04b709c4bf6d23cdfc.NginxMailingListEnglish@forum.nginx.org> References: <99d18d74a058c669b0ac0e4b75a9822f.NginxMailingListEnglish@forum.nginx.org> <8e29d2dcf208c5d726fd2be50a10e699.NginxMailingListEnglish@forum.nginx.org> <52766bba1d689c04b709c4bf6d23cdfc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1388e6a1586839dd259beea5deb58972.NginxMailingListEnglish@forum.nginx.org> Hey Drookie, Can you let me know how you solved this issue? We are facing a similar issue which results in 502 gateway error and I'm not able to find a solution for it. Below is the error log from /var/log/nginx/error.log 2018/03/22 01:34:49 [error] 8779#0: *13133 no live upstreams while connecting to upstream, client: 207.189.113.68, server: 192.168.10.4, request: "POST /upload_app HTTP/1.1", upstream: "uwsgi://upload_clusters", host: "173.255.9.88" 2018/03/22 01:34:50 [error] 8779#0: *13136 no live upstreams while connecting to upstream, client: 207.189.113.68, server: 192.168.10.4, request: "POST /upload_app HTTP/1.1", upstream: "uwsgi://upload_clusters", host: "173.255.9.88" 2018/03/22 01:34:50 [error] 8779#0: *13143 no live upstreams while connecting to upstream, client: 207.189.113.68, server: 192.168.10.4, request: "POST /upload_app HTTP/1.1", upstream: "uwsgi://upload_clusters", host: "173.255.9.88" 2018/03/22 01:34:50 [error] 8779#0: *13144 no live upstreams while connecting to upstream, client: 207.189.113.68, server: 192.168.10.4, request: "POST /upload_app HTTP/1.1", upstream: "uwsgi://upload_clusters", host: "173.255.9.88" We dont get this issue for all requests but for some random requests in between. 
Below is the access log from /var/log/nginx/access.log 207.189.113.68 - - [22/Mar/2018:02:15:29 -0700] "POST /upload_app HTTP/1.1" 502 198 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:29 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:29 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:29 -0700] "POST /upload_app HTTP/1.1" 502 198 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:29 -0700] "POST /upload_app HTTP/1.1" 502 198 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:29 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:34 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:34 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:36 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:37 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:39 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:42 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:42 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:45 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:46 -0700] "POST /upload_app HTTP/1.1" 502 198 "-" "Apache-HttpClient/4.0.3 (java 1.5)" 207.189.113.68 - - [22/Mar/2018:02:15:46 -0700] "POST /upload_app HTTP/1.1" 200 162 "-" "Apache-HttpClient/4.0.3 (java 1.5)" As you can see, some requests got a 200 (success) response and some got a 502 (gateway error). I have tried parameter tuning by increasing timeouts, but it did not help. It would be really helpful if you could let me know what worked for you. Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269577,279148#msg-279148 From hemantbist at gmail.com Fri Mar 23 00:00:20 2018 From: hemantbist at gmail.com (Hemant Bist) Date: Thu, 22 Mar 2018 17:00:20 -0700 Subject: Only compressed version of file on server , and supporting clients that don't send Accept-Encoding. Message-ID: Hi, We have only gzipped files stored on nginx and need to serve clients that: A) Support gzip transfer encoding (> 99% of the clients). They send the Accept-Encoding: gzip header... B) < 1% of the clients that don't support transfer encoding. They don't send the Accept-Encoding header. There is ample CPU in the nginx servers to support clients of type B), but I am unable to figure out a config/reasonable script to help us serve these clients. Clients of type A) are served with the following config. --- Working config that appends .gz in the try_files ---- location /compressed_files/ { add_header Content-Encoding "gzip"; expires 48h; add_header Cache-Control private; try_files $uri.gz @lua_script_for_missing_file; } ----- Not working config with gunzip on; likely because gunzip filter runs before add_header?
location /compressed_files/ { add_header Content-Encoding "gzip"; expires 48h; add_header Cache-Control private; # gunzip on fails to uncompress, likely because it does not notice the add_header directive. gunzip on; gzip_proxied any; try_files $uri.gz @lua_script_for_missing_file; } I would appreciate any pointers on how to do this. I may be missing some obvious configuration for such a case. We did discuss keeping both unzipped and zipped versions on the server, but unfortunately that is unlikely to happen. Thanks, Hemant -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Mar 23 00:38:13 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 23 Mar 2018 03:38:13 +0300 Subject: Only compressed version of file on server , and supporting clients that don't send Accept-Encoding. In-Reply-To: References: Message-ID: <20180323003813.GX77253@mdounin.ru> Hello! On Thu, Mar 22, 2018 at 05:00:20PM -0700, Hemant Bist wrote: [...] Try this instead: location /compressed_files/ { gzip_static always; gunzip on; } See documentation here for additional details: http://nginx.org/r/gzip_static http://nginx.org/r/gunzip Note that you won't be able to combine this with "try_files $uri.gz ...", as this will change the URI as seen by gzip_static and will break it. If you want to fall back to a different location when there is no file, use "error_page 404 ..." instead. -- Maxim Dounin http://mdounin.ru/ From michael.friscia at yale.edu Fri Mar 23 11:28:34 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 23 Mar 2018 11:28:34 +0000 Subject: proxy_cache_key case sensitivity question Message-ID: The question is whether these are cached as different files: http://myurl.html http://MyUrl.html I'm assuming that both would be different cache locations, since the md5 would be different for each, but ideally these would be the same cached file to prevent dupes. My question is about the proxy_cache_key: when it is generated, is it case sensitive? We ran a quick test and it seemed to be true that changing the case in the URL created a new/different version of the page.
If our test was accurate and this is how it works, then is there a way to make the key used to generate the MD5 always use a lower-case string? One possible solution is to install the module that changes strings to lower/upper case and then wrap that around the string used for the key. But before I go down that path, I wanted to find out if I would be wasting my time. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.friscia at yale.edu Fri Mar 23 13:14:00 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 23 Mar 2018 13:14:00 +0000 Subject: Redirect question Message-ID: I'm wondering how to achieve this in the config. I have a url like this http://example.com/people/mike and I want to redirect to https://www.othersite.com/users/mike The problem at hand is switching "/people/" to "/users/" but keep everything else, so if I was to have http://example.com/people/mike/education?page=1 I would still get redirected to https://www.othersite.com/users/mike/education?page=1 I currently have redirects where I just append $request_uri to the new domain name, but in this case I need to alter the $request_uri before I use it. So the question is how should I approach making this sort of change? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Mar 23 13:36:26 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 23 Mar 2018 16:36:26 +0300 Subject: Unit 0.7 beta release Message-ID: <4127558.rKuOZfYYtY@vbart-workstation> Hello, I'm glad to announce a new beta of NGINX Unit with a number of bugfixes and Ruby/Rack support. Now you can easily run applications like Redmine with Unit. The full list of supported languages today is PHP, Python, Go, Perl, and Ruby. More languages are coming. Changes with Unit 0.7 22 Mar 2018 *) Feature: Ruby application module. *) Bugfix: in discovering modules. *) Bugfix: various race conditions on reconfiguration and during shutting down. *) Bugfix: tabs and trailing spaces were not allowed in header field values. *) Bugfix: a segmentation fault occurred in the Python module if start_response() was called outside of the WSGI callable. *) Bugfix: a segmentation fault might occur in the PHP module if there was an error while initializing. Binary Linux packages and Docker images are available here: - Packages: https://unit.nginx.org/installation/#precompiled-packages - Docker: https://hub.docker.com/r/nginx/unit/tags/ Packages and images for the new Ruby module will be built next week. wbr, Valentin V. Bartenev From igor at sysoev.ru Fri Mar 23 15:04:34 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 23 Mar 2018 18:04:34 +0300 Subject: Redirect question In-Reply-To: References: Message-ID: <7CB37613-0A83-462F-84C2-741593832662@sysoev.ru> > On 23 Mar 2018, at 16:14, Friscia, Michael wrote: > > I'm wondering how to achieve this in the config > > I have a url like this > http://example.com/people/mike > > and I want to redirect to > https://www.othersite.com/users/mike > > the problem at hand is switching "/people/" to "/users/"
but keep everything else so if I was to have > http://example.com/people/mike/education?page=1 > I would still get redirected to > https://www.othersite.com/users/mike/education?page=1 > > I currently have redirects where I just append $request_uri to the new domain name but in this case I need to alter the $request_uri before I use it. So the question is how should I approach making this sort of change? Something like this: location ~ ^/people/(?<REST>.+) { return 301 http://example.com/users/$REST$is_args$args; } However, if you do not want to care about location order in future, this is better: location /people/ { location ~ ^/people/(?<REST>.+) { return 301 http://example.com/users/$REST$is_args$args; } } -- Igor Sysoev http://nginx.com From michael.friscia at yale.edu Fri Mar 23 15:22:13 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 23 Mar 2018 15:22:13 +0000 Subject: Redirect question In-Reply-To: <7CB37613-0A83-462F-84C2-741593832662@sysoev.ru> References: <7CB37613-0A83-462F-84C2-741593832662@sysoev.ru> Message-ID: <056DFCFA-3708-4D03-B4C2-0350745F1A70@yale.edu> Ok, that worked out really well. For anyone following, I had to go here https://forum.nginx.org/read.php?2,279172,279176#msg-279176 because our exchange server destroyed the sample URLs. But I'm not sure how the location order is mitigated. Is this because the first location match is a regex instead of just a string match? ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 3/23/18, 11:04 AM, "nginx on behalf of Igor Sysoev" wrote: > [...] From igor at sysoev.ru Fri Mar 23 15:37:39 2018 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 23 Mar 2018 18:37:39 +0300 Subject: Redirect question In-Reply-To: <056DFCFA-3708-4D03-B4C2-0350745F1A70@yale.edu> References: <7CB37613-0A83-462F-84C2-741593832662@sysoev.ru> <056DFCFA-3708-4D03-B4C2-0350745F1A70@yale.edu> Message-ID: > On 23 Mar 2018, at 18:22, Friscia, Michael wrote: > > Ok, that worked out really well. > > For anyone following I had to go here > https://forum.nginx.org/read.php?2,279172,279176#msg-279176 > because our exchange server destroyed the sample URLs.
> > But I'm not sure how the location order is mitigated. Is this because the first location match is a regex instead of just a string match? The regex locations are checked in order of appearance. Prefix and exact locations are checked to find the longest matching prefix, so their order has no meaning. When you have hundreds of locations, the order becomes an important factor during configuration maintenance. -- Igor Sysoev http://nginx.com From michael.friscia at yale.edu Fri Mar 23 15:38:58 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Fri, 23 Mar 2018 15:38:58 +0000 Subject: Redirect question In-Reply-To: References: <7CB37613-0A83-462F-84C2-741593832662@sysoev.ru> <056DFCFA-3708-4D03-B4C2-0350745F1A70@yale.edu> Message-ID: <59F02F94-18E7-429B-B8B2-1097EB1941A5@yale.edu> Great, thank you for that explanation, I do happen to have hundreds. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu On 3/23/18, 11:37 AM, "nginx on behalf of Igor Sysoev" wrote: > [...] From hemantbist at gmail.com Fri Mar 23 17:58:34 2018 From: hemantbist at gmail.com (Hemant Bist) Date: Fri, 23 Mar 2018 10:58:34 -0700 Subject: Only compressed version of file on server , and supporting clients that don't send Accept-Encoding. In-Reply-To: <20180323003813.GX77253@mdounin.ru> References: <20180323003813.GX77253@mdounin.ru> Message-ID: Thanks, it worked like a charm! The lua script that handled the 404 logic worked without any changes with error_page. There were some other add_header configurations; they also worked fine with it... HB On Thu, Mar 22, 2018 at 5:38 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 22, 2018 at 05:00:20PM -0700, Hemant Bist wrote: > [...] > > Try this instead: > > location /compressed_files/ { > gzip_static always; > gunzip on; > } > > See documentation here for additional details: > > http://nginx.org/r/gzip_static > http://nginx.org/r/gunzip > > Note that you won't be able to combine this with "try_files > $uri.gz ...", as this will change the URI as seen by gzip_static and > will break it. If you want to fall back to a different location > when there is no file, use "error_page 404 ..." instead. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 23 19:06:34 2018 From: nginx-forum at forum.nginx.org (lichandro) Date: Fri, 23 Mar 2018 15:06:34 -0400 Subject: 301 Redirect from www version to non www. Message-ID: <9ddb973844bd9c39d8deb129ff38b9b4.NginxMailingListEnglish@forum.nginx.org> Hello, I have to redirect all my traffic to the non-www version of my website. I have the free http2 SSL from Cloudflare. I did this in my .conf file and it doesn't work. server { listen 80; server_name www.kasacja-aut.pl kasacja-aut.pl https://www.kasacja-aut.pl; return 301 https://kasacja-aut.pl$request_uri; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279183,279183#msg-279183 From jeff.dyke at gmail.com Fri Mar 23 19:22:32 2018 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Fri, 23 Mar 2018 15:22:32 -0400 Subject: 301 Redirect from www version to non www. In-Reply-To: <9ddb973844bd9c39d8deb129ff38b9b4.NginxMailingListEnglish@forum.nginx.org> References: <9ddb973844bd9c39d8deb129ff38b9b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: A couple of things here, I guess. Is 80 even open in the firewall? Could Cloudflare be picking up 80 and redirecting to https? Also, this won't solve your problem, but having a server name prefixed with https is not valid; it may pass a configtest, but I am not sure it would ever match. On Fri, Mar 23, 2018 at 3:06 PM, lichandro wrote: > Hello, I have to redirect all my traffic to the non-www version of my > website. I have the free http2 SSL from Cloudflare. I did this in my .conf > file and it doesn't work. > > server { > listen 80; > server_name www.kasacja-aut.pl kasacja-aut.pl > https://www.kasacja-aut.pl; > return 301 https://kasacja-aut.pl$request_uri; > } > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,279183,279183#msg-279183 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 23 20:49:19 2018 From: nginx-forum at forum.nginx.org (linnading) Date: Fri, 23 Mar 2018 16:49:19 -0400 Subject: Cache size exceeds max_size? Message-ID: <49aff1790dfb50e17509ba327269d268.NginxMailingListEnglish@forum.nginx.org> Hi there, I've noticed that our cache directory grew over the max_size (5G) defined with proxy_cache_path. We're running on Nginx 1.13.5. Is this a known issue? What should I do to fix this? Thanks! Linna Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279185,279185#msg-279185 From ph.gras at worldonline.fr Fri Mar 23 23:04:56 2018 From: ph.gras at worldonline.fr (Ph.
Gras) Date: Sat, 24 Mar 2018 00:04:56 +0100 Subject: Trouble with SSL connection and let's encrypt certificates Message-ID: Hello there, I'm running several websites with different domain names on a Debian 9 server, and for some days I have had problems getting a connection on port 443. Certificates are generated by Let's Encrypt and do the job on other services, but not nginx. For example: # openssl s_client -connect mailbox.fredlutaud.com:443 -showcerts CONNECTED(00000003) write:errno=0 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 176 bytes Verification: OK --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1521844523 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no --- # openssl s_client -connect mailbox.fredlutaud.com:993 -showcerts CONNECTED(00000003) depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 verify return:1 depth=0 CN = ns365710.ip-176-31-120.eu verify return:1 --- Certificate chain 0 s:/CN=ns365710.ip-176-31-120.eu i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgISAz2exXicPgWK2nWjFrHdoj7UMA0GCSqGSIb3DQEBCwUA [Blah...] # netstat -antp | grep nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 16773/nginx: master tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 16773/nginx: master tcp6 0 0 :::80 :::* LISTEN 16773/nginx: master tcp6 0 0 :::443 :::* LISTEN 16773/nginx: mas # nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Do you have an idea to solve my problem? Thanks in advance, Ph. Gras From nginx-forum at forum.nginx.org Sat Mar 24 00:17:09 2018 From: nginx-forum at forum.nginx.org (vi54) Date: Fri, 23 Mar 2018 20:17:09 -0400 Subject: Nginx as forward proxy and maintaining persistent connections Message-ID: I am trying to configure nginx as a forward proxy and establish persistent connections between nginx and the upstream server. When I set server { location / { proxy_pass http://$http_host$request_uri; proxy_http_version 1.1; proxy_set_header Connection ""; ... } } in the location context, it has no effect unless I have an upstream block with the keepalive directive in it. Is it mandatory to specify an upstream block with the keepalive directive? If I do it that way, http { upstream up { server $http_host; keepalive 20; } server { proxy_pass http://up$request_uri; } } I get the error host not found in upstream "$http_host". It looks like the server directive is not using it as a variable but merely looking for a server named $http_host? I tried finding hints in online resources but couldn't get help. Can someone point out where this goes wrong? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279187,279187#msg-279187 From mdounin at mdounin.ru Sat Mar 24 14:53:11 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 24 Mar 2018 17:53:11 +0300 Subject: Cache size exceeds max_size? In-Reply-To: <49aff1790dfb50e17509ba327269d268.NginxMailingListEnglish@forum.nginx.org> References: <49aff1790dfb50e17509ba327269d268.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180324145311.GY77253@mdounin.ru> Hello!
On Fri, Mar 23, 2018 at 04:49:19PM -0400, linnading wrote: > I've noticed that our cache directory grew over the max_size (5G) defined > with proxy_cache_path. We're running on Nginx 1.13.5. Is this a known issue? > What should I do to fix this? First of all, you have to look into your logs to find out what's going on. In most cases, this is a symptom of another problem, usually nginx workers being killed or crashing. For example, last time similar symptoms were traced to OOM killer activity, see here: http://mailman.nginx.org/pipermail/nginx/2018-March/055908.html Also, it might be a good idea to upgrade to a newer nginx version, as 1.13.5 is somewhat old. The most recent 1.13.x version is 1.13.10, see http://nginx.org/en/download.html. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sat Mar 24 14:57:30 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 24 Mar 2018 17:57:30 +0300 Subject: Trouble with SSL connection and let's encrypt certificates In-Reply-To: References: Message-ID: <20180324145730.GZ77253@mdounin.ru> Hello! On Sat, Mar 24, 2018 at 12:04:56AM +0100, Ph. Gras wrote: > I'm running several websites with different domain names on a Debian 9 server, and > for some days I have had problems getting a connection on port 443. > > Certificates are generated by Let's Encrypt and do the job on other services, > but not nginx. For example: > # openssl s_client -connect mailbox.fredlutaud.com:443 -showcerts > CONNECTED(00000003) > write:errno=0 > --- > no peer certificate available > --- > No client certificate CA names sent > --- > SSL handshake has read 0 bytes and written 176 bytes > Verification: OK As per the openssl output, there was no server response to the SSL handshake. [...] > Do you have an idea to solve my problem? First of all, you have to look into your nginx logs and configuration to find out what's going on here. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sat Mar 24 15:06:46 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 24 Mar 2018 18:06:46 +0300 Subject: Nginx as forward proxy and maintaining persistent connections In-Reply-To: References: Message-ID: <20180324150646.GA77253@mdounin.ru> Hello! On Fri, Mar 23, 2018 at 08:17:09PM -0400, vi54 wrote: > I am trying to configure nginx as a forward proxy and establish persistent Note that nginx is not a forward proxy. You may have better luck with other programs which are designed to be used as a forward proxy. > connections between nginx and the upstream server. When I set > > server { > location / { > proxy_pass http://$http_host$request_uri; > proxy_http_version 1.1; > proxy_set_header Connection ""; > ... > } > } > in the location context, it has no effect unless I have an upstream block > with the keepalive directive in it. Is it mandatory to specify an upstream > block with the keepalive directive? Yes, to maintain keepalive connections with an upstream server you have to define an upstream{} block with the keepalive cache configured. > If I do it that way, > > http > { > upstream up { > server $http_host; > keepalive 20; > } > server { > proxy_pass http://up$request_uri; > } > } > I get the error host not found in upstream "$http_host". It looks like the > server directive is not using it as a variable but merely looking for a server > named $http_host? I tried finding hints in online resources but couldn't get > help. Can someone point out where this goes wrong? That's expected, you cannot use variables in the "server" directive in the upstream block.
That is, what you are trying to do won't work. You can only maintain persistent connections with upstream servers you know about and have configured in advance. -- Maxim Dounin http://mdounin.ru/ From ph.gras at worldonline.fr Sat Mar 24 18:30:41 2018 From: ph.gras at worldonline.fr (Ph. Gras) Date: Sat, 24 Mar 2018 19:30:41 +0100 Subject: [SOLVED] Trouble with SSL connection and let's encrypt certificates In-Reply-To: <20180324145730.GZ77253@mdounin.ru> References: <20180324145730.GZ77253@mdounin.ru> Message-ID: <8D3348A2-1D37-4F23-B28A-137912BFD432@worldonline.fr> Thank you Maxim! > First of all, you have to look into your nginx logs and > configuration to find out what's going on here. :/var/log/nginx# cat error.log 2018/03/24 05:18:10 [error] 24058#24058: *573 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 162.158.78.50, server: 0.0.0.0:443 2018/03/24 05:18:14 [error] 24058#24058: *574 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 162.158.79.27, server: 0.0.0.0:443 2018/03/24 05:23:40 [error] 24058#24058: *576 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 172.69.70.147, server: 0.0.0.0:443 2018/03/24 05:29:46 [error] 24058#24058: *579 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 162.158.146.8, server: 0.0.0.0:443 2018/03/24 05:32:45 [error] 24058#24058: *581 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 172.68.174.50, server: 0.0.0.0:443 2018/03/24 05:33:09 [error] 24058#24058: *582 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 173.245.54.47, server: 0.0.0.0:443 2018/03/24 05:34:41 [error] 24058#24058: *587 no "ssl_certificate" is defined in server listening on SSL port while SSL handshaking, client: 141.101.69.85, server: 0.0.0.0:443 So, I have uncommented the include snippets/snakeoil.conf line in the default server and it works ;-) Is a default server necessary to make nginx work properly? Best regards, Ph. Gras From John.Melom at spok.com Mon Mar 26 20:21:27 2018 From: John.Melom at spok.com (John Melom) Date: Mon, 26 Mar 2018 20:21:27 +0000 Subject: Nginx throttling issue? Message-ID: Hi, I am load testing our system using Jmeter as a load generator. We execute a script consisting of an https request executing in a loop. The loop does not contain a think time, since at this point I am not trying to emulate a "real user". I want to get a quick look at our system capacity. Load on our system is increased by increasing the number of Jmeter threads executing our script. Each Jmeter thread references different data. Our system is in AWS with an ELB fronting Nginx, which serves as a reverse proxy for our Docker Swarm application cluster. At moderate loads, a subset of our https requests start experiencing a 1 second delay in addition to their normal response time. The delay is not due to resource contention; system utilizations remain low. The response times cluster around 4 values: 0 milliseconds, 50 milliseconds, 1 second, and 1.050 seconds. Right now, I am most interested in understanding and eliminating the 1 second delay that gives the clusters at 1 second and 1.050 seconds. The attachment shows a response time scatterplot from one of our runs. The x-axis is the number of seconds into the run, the y-axis is the response time in milliseconds.
The plotted data shows the response time of requests at the time they occurred in the run. If I run the test bypassing the ELB and Nginx, this delay does not occur. If I bypass the ELB, but include Nginx in the request path, the delay returns. This leads me to believe the 1 second delay is coming from Nginx. One possible candidate is Nginx DDoS throttling. Since all requests are coming from the same Jmeter system, I expect they share the same originating IP address. I attempted to control DDoS throttling by setting limit_req as shown in the nginx.conf fragment below: http { ... limit_req_zone $binary_remote_addr zone=perf:20m rate=10000r/s; ... server { ... location /myReq { limit_req zone=perf burst=600; proxy_pass xxx.xxx.xxx.xxx; } ... } The thinking behind the values set in this conf file is that my aggregate demand would not exceed 10000 requests per second, so throttling of requests should not occur. If there were short bursts more intense than that, the burst value would buffer these requests. This tuning did not change my results. I still get the 1 second delay. Am I implementing this correctly? Is there something else I should be trying? The responses are not large, so I don't believe limit_req is the answer. I have a small number of intense users, so limit_conn does not seem likely to be the answer either. Thanks, John Melom Performance Test Engineer Spok, Inc. +1 (952) 230 5311 Office John.Melom at spok.com ________________________________ NOTE: This email message and any attachments are for the sole use of the intended recipient(s) and may contain confidential and/or privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you have received this e-mail in error, please contact the sender by replying to this email, and destroy all copies of the original message and any material included with this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3300 bytes Desc: image003.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rawRespScatterplot.png Type: image/png Size: 43349 bytes Desc: rawRespScatterplot.png URL: From peter_booth at me.com Mon Mar 26 20:57:01 2018 From: peter_booth at me.com (Peter Booth) Date: Mon, 26 Mar 2018 16:57:01 -0400 Subject: Nginx throttling issue? In-Reply-To: References: Message-ID: <7DCEF786-4278-4EC1-87A5-38D2FA47E192@me.com> You're correct that this is the DDoS throttling. The real question is what do you want to do? JMeter with zero think time is an imperfect load generator - this is only one complication. The bigger one is the open/closed model issue. With your design you have back pressure from your system under test to your load generator. A jmeter virtual user will only ever issue a request when the prior one completes. Real users are not so well behaved, which is why your test results will always be over-optimistic with this design. A better approach is to use a load generator that replicates the desired request distribution without triggering the DDoS protection. Wrk2, Tsung, httperf are candidates, as well as the cloud based load generator services. Also see Neil Gunther's paper on how to combine multiple jmeter instances to replicate real world traffic patterns.
Peter Sent from my iPhone > On Mar 26, 2018, at 4:21 PM, John Melom wrote: > [...] > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
From mdounin at mdounin.ru  Tue Mar 27 11:55:06 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 27 Mar 2018 14:55:06 +0300
Subject: Nginx throttling issue?
In-Reply-To:
References:
Message-ID: <20180327115506.GF77253@mdounin.ru>

Hello!

On Mon, Mar 26, 2018 at 08:21:27PM +0000, John Melom wrote:

> I am load testing our system using Jmeter as a load generator.
> We execute a script consisting of an https request executing in
> a loop. The loop does not contain a think time, since at this
> point I am not trying to emulate a "real user". I want to get a
> quick look at our system capacity. Load on our system is
> increased by increasing the number of Jmeter threads executing
> our script. Each Jmeter thread references different data.
>
> Our system is in AWS with an ELB fronting Nginx, which serves as
> a reverse proxy for our Docker Swarm application cluster.
>
> At moderate loads, a subset of our https requests start
> experiencing a 1 second delay in addition to their normal
> response time. The delay is not due to resource contention.
> System utilizations remain low. The response times cluster
> around 4 values: 0 milliseconds, 50 milliseconds, 1 second,
> and 1.050 seconds. Right now, I am most interested in
> understanding and eliminating the 1 second delay that gives the
> clusters at 1 second and 1.050 seconds.
>
> The attachment shows a response time scatterplot from one of our
> runs. The x-axis is the number of seconds into the run, the
> y-axis is the response time in milliseconds. The plotted data
> shows the response time of requests at the time they occurred in
> the run.
>
> If I run the test bypassing the ELB and Nginx, this delay does
> not occur.
> If I bypass the ELB, but include Nginx in the request path, the
> delay returns.
>
> This leads me to believe the 1 second delay is coming from
> Nginx.

There are no magic 1 second delays in nginx - unless you've configured something explicitly.

Most likely, the 1 second delay is coming from TCP retransmission timeout during connection establishment due to listen queue overflows. Check "netstat -s" to see if there are any listen queue overflows on your hosts.
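(On Linux, "netstat -s | grep -i listen" typically reports cumulative counters along the lines of "N times the listen queue of a socket overflowed". If those counters grow during a run, the accept queue can be enlarged on the nginx side; a sketch, with the value picked arbitrarily:

    server {
        # backlog= sets the accept queue length for this listening socket
        listen 443 ssl backlog=4096;
        ...
    }

The kernel caps the effective backlog at net.core.somaxconn, so that sysctl may need to be raised to match.)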
[...]

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Tue Mar 27 12:37:11 2018
From: nginx-forum at forum.nginx.org (dhallam)
Date: Tue, 27 Mar 2018 08:37:11 -0400
Subject: SSL Client Certificate Validation
Message-ID: <29d1c8be35f221c01446b3e33eeaf507.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm running nginx version: nginx/1.11.5 (nginx-plus-r11). I am trying to connect a tcp client (with client cert) over SSL to nginx, where the SSL will be validated and terminated, and then onto the upstream server in the clear. I have the following configuration:

stream {
    upstream upstream_servers {
        server localhost:80;
    }

    server {
        listen 192.168.0.30:443 ssl;

        ssl_certificate        /etc/nginx/ssl/server.crt;
        ssl_certificate_key    /etc/nginx/ssl/server.key;
        ssl_client_certificate /etc/nginx/ssl/client.crt;
        ssl_verify_client      on;
        ssl_ciphers            ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_protocols          SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_verify_depth       2;

        proxy_pass upstream_servers;
    }
}

However, I get a '2018/03/27 12:14:35 [emerg] 18325#18325: "ssl_client_certificate" directive is not allowed here in /etc/nginx/conf.d/my-listener.conf:11' error when I try to start the server.

According to http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_client_certificate, this seems to be a valid configuration directive.

Would anyone be able to help identify what it is that I am missing?

Many thanks,
Dave

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279204,279204#msg-279204

From mdounin at mdounin.ru  Tue Mar 27 12:45:17 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 27 Mar 2018 15:45:17 +0300
Subject: SSL Client Certificate Validation
In-Reply-To: <29d1c8be35f221c01446b3e33eeaf507.NginxMailingListEnglish@forum.nginx.org>
References: <29d1c8be35f221c01446b3e33eeaf507.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180327124516.GI77253@mdounin.ru>

Hello!

On Tue, Mar 27, 2018 at 08:37:11AM -0400, dhallam wrote:

> I'm running nginx version: nginx/1.11.5 (nginx-plus-r11). I am trying to
> connect a tcp client (with client cert) over SSL to nginx, where the SSL
> will be validated and terminated, and then onto the upstream server in
> the clear.

[...]

> However, I get a '2018/03/27 12:14:35 [emerg] 18325#18325:
> "ssl_client_certificate" directive is not allowed here in
> /etc/nginx/conf.d/my-listener.conf:11' error when I try to start the
> server.
>
> According to
> http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_client_certificate,
> this seems to be a valid configuration directive.
>
> Would anyone be able to help identify what it is that I am missing?

Quoting the link above:

: This directive appeared in version 1.11.8.

And you are using nginx 1.11.5, which is older than 1.11.8. You have to upgrade to a newer version to get it working.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Tue Mar 27 12:51:31 2018
From: nginx-forum at forum.nginx.org (dhallam)
Date: Tue, 27 Mar 2018 08:51:31 -0400
Subject: SSL Client Certificate Validation
In-Reply-To: <20180327124516.GI77253@mdounin.ru>
References: <20180327124516.GI77253@mdounin.ru>
Message-ID: <1f5d23bfa6148126ad54262267828f75.NginxMailingListEnglish@forum.nginx.org>

Thank you. Please accept my apologies for not spotting that in the documentation.

Many thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279204,279206#msg-279206

From John.Melom at spok.com  Tue Mar 27 13:47:32 2018
From: John.Melom at spok.com (John Melom)
Date: Tue, 27 Mar 2018 13:47:32 +0000
Subject: Nginx throttling issue?
In-Reply-To: <7DCEF786-4278-4EC1-87A5-38D2FA47E192@me.com>
References: <7DCEF786-4278-4EC1-87A5-38D2FA47E192@me.com>
Message-ID:

Peter,

Thanks for your reply. What I'd really like is to understand how to tune nginx to avoid the delays when I run my tests. I am comfortable with the overly optimistic results from my current "closed model" test design. Once I determine my system's throughput limits I will introduce significant think times into my scripts, so that much larger user populations are required to produce the same work demand. This will more closely approximate an "open model" test design.

Could you provide more explanation as to why a different load generation tool would avoid triggering a DDOS response from nginx? My first guess would have been that they would also generate requests from a single IP address, and thus look the same as a JMeter load.

I did try my test with JMeter driving workload from 2 different machines at the same time. I ran each machine's workload at a low enough level that individually they did not trigger the 1 second delay. The combined workload did trigger the delay for each of the JMeter workload generators. I'm not sure how many machines would be required to avoid the collective response from nginx.

Thanks,

John
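(For reference, with "limit_req zone=perf burst=600" and no "nodelay", requests above the configured rate are not rejected but delayed so that they conform to the rate, and only rejected once the burst is exhausted. At rate=10000r/s a burst of 600 adds at most about 60 milliseconds, so this alone would not account for a full second. A variant of the posted fragment that forwards burst requests immediately instead of pacing them:

    location /myReq {
        limit_req zone=perf burst=600 nodelay;
        proxy_pass xxx.xxx.xxx.xxx;
    }
)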
From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Peter Booth
Sent: Monday, March 26, 2018 3:57 PM
To: nginx at nginx.org
Subject: Re: Nginx throttling issue?

[...]
From John.Melom at spok.com  Tue Mar 27 13:52:11 2018
From: John.Melom at spok.com (John Melom)
Date: Tue, 27 Mar 2018 13:52:11 +0000
Subject: Nginx throttling issue?
In-Reply-To: <20180327115506.GF77253@mdounin.ru>
References: <20180327115506.GF77253@mdounin.ru>
Message-ID:

Maxim,

Thank you for your reply. I will look to see if "netstat -s" detects any listen queue overflows.

John

-----Original Message-----
From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin
Sent: Tuesday, March 27, 2018 6:55 AM
To: nginx at nginx.org
Subject: Re: Nginx throttling issue?

[...]
From hal469 at xsmail.com  Tue Mar 27 16:50:14 2018
From: hal469 at xsmail.com (hal469 at xsmail.com)
Date: Tue, 27 Mar 2018 09:50:14 -0700
Subject: How to set a conditional Content-Security-Policy?
Message-ID: <1522169414.1887151.1317918408.744B88DA@webmail.messagingengine.com>

For my nginx server, I set a CSP header

    set $CSP '';
    set $CSP "${CSP}default-src 'self';";
    set $CSP "${CSP}script-src 'self';";
    add_header Content-Security-Policy $CSP;

For a webapp using Symfony, the developer UI injects inline script for display of a "Debug Toolbar". It's access-blocked by that^ server policy.

Changing

    - set $CSP "${CSP}script-src 'self';";
    + set $CSP "${CSP}script-src 'self' 'unsafe-inline';";

fixes the problem -- access to the debug toolbar is allowed, and it's rendered. But adding the 'unsafe-inline' is certainly not ideal!

Apache has the option to create/return a CSP policy depending on Request IP:

https://blog.paranoidpenguin.net/2017/12/deploy-different-content-security-policies-csps-using-the-apache-if-directive/

How would the equivalent be done in nginx config? Iiuc, there's no if/then/else construct. Something with maps maybe?

Hal

From mdounin at mdounin.ru  Tue Mar 27 17:27:05 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 27 Mar 2018 20:27:05 +0300
Subject: How to set a conditional Content-Security-Policy?
In-Reply-To: <1522169414.1887151.1317918408.744B88DA@webmail.messagingengine.com>
References: <1522169414.1887151.1317918408.744B88DA@webmail.messagingengine.com>
Message-ID: <20180327172705.GJ77253@mdounin.ru>

Hello!

On Tue, Mar 27, 2018 at 09:50:14AM -0700, hal469 at xsmail.com wrote:

> [...]

There are "if" constructs in nginx, see http://nginx.org/r/if.
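(A minimal sketch, assuming the Symfony developer box is at 127.0.0.1, which is a placeholder; "if" at server level combined only with "set" is one of its safe uses:

    set $csp "default-src 'self'; script-src 'self';";

    if ($remote_addr = 127.0.0.1) {
        set $csp "default-src 'self'; script-src 'self' 'unsafe-inline';";
    }

    add_header Content-Security-Policy $csp;
)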
On the other hand, if you want to set CSP depending on the client IP address, it might be a better idea to use "geo" instead, e.g.:

    geo $csp {
        default     "default-src 'self'; script-src 'self';";
        10.0.0.0/8  "default-src 'self'; script-src 'self' 'unsafe-inline'";
    }

    add_header Content-Security-Policy $csp;

See http://nginx.org/en/docs/http/ngx_http_geo_module.html for details.

-- 
Maxim Dounin
http://mdounin.ru/

From hal469 at xsmail.com  Tue Mar 27 17:56:45 2018
From: hal469 at xsmail.com (hal469 at xsmail.com)
Date: Tue, 27 Mar 2018 10:56:45 -0700
Subject: How to set a conditional Content-Security-Policy?
In-Reply-To: <20180327172705.GJ77253@mdounin.ru>
References: <1522169414.1887151.1317918408.744B88DA@webmail.messagingengine.com> <20180327172705.GJ77253@mdounin.ru>
Message-ID: <1522173405.1918310.1318002000.2A390621@webmail.messagingengine.com>

> There are "if" constructs in nginx, see http://nginx.org/r/if.

Well I'll be darned. I'd thought "if was evil". Thx.

> On the other hand, if you want to set CSP depending on the client
> IP address, it might be a better idea to use "geo" instead, e.g.:
>
>     geo $csp {
>         default     "default-src 'self'; script-src 'self';";
>         10.0.0.0/8  "default-src 'self'; script-src 'self' 'unsafe-inline'";
>     }
>
>     add_header Content-Security-Policy $csp;

Works perfectly! Thx!

From pablo at pablo.com.mx  Wed Mar 28 03:28:42 2018
From: pablo at pablo.com.mx (Pablo Fischer)
Date: Tue, 27 Mar 2018 20:28:42 -0700
Subject: Bytes sent to upstream
Message-ID:

Hello,

Seems like the upstream_bytes_sent variable only exists for the stream module and not for the http_upstream. Is there a way (via a variable would be better) to know the bytes that nginx has sent to upstream?

Thanks!
-- 
Pablo

From mdounin at mdounin.ru  Wed Mar 28 13:05:20 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 28 Mar 2018 16:05:20 +0300
Subject: Bytes sent to upstream
In-Reply-To:
References:
Message-ID: <20180328130520.GL77253@mdounin.ru>

Hello!

On Tue, Mar 27, 2018 at 08:28:42PM -0700, Pablo Fischer wrote:

> Seems like the upstream_bytes_sent variable only exists for the
> stream module and not for the http_upstream. Is there a way (via a
> variable would be better) to know the bytes that nginx has sent to
> upstream?

There is no such variable in nginx for http upstream connections. Depending on your configuration, $request_length + $content_length might be a good estimate.
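(Those two variables can be written per request and summed offline; a sketch, with the format name and log path chosen arbitrarily:

    log_format upstream_est '$remote_addr "$request" '
                            'req_len=$request_length body_len=$content_length';

    access_log /var/log/nginx/upstream_est.log upstream_est;

Both variables describe the client-side request, so headers that nginx adds or rewrites when talking to the upstream are not counted; hence "estimate".)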
-- 
Maxim Dounin
http://mdounin.ru/

From gfrankliu at gmail.com  Thu Mar 29 07:23:37 2018
From: gfrankliu at gmail.com (Frank Liu)
Date: Thu, 29 Mar 2018 00:23:37 -0700
Subject: GeoIP2
Message-ID:

The nginx geoip module http://nginx.org/en/docs/http/ngx_http_geoip_module.html is using the legacy MaxMind db. I just read that March 2018 will be the last publicly available download of the legacy db, and on Jan 2, 2019 it will be removed.

https://dev.maxmind.com/geoip/geoip2/geolite2/

Is there a plan to switch to GeoIP2?

Thanks!
Frank

From defan at nginx.com  Thu Mar 29 10:14:10 2018
From: defan at nginx.com (Andrei Belov)
Date: Thu, 29 Mar 2018 13:14:10 +0300
Subject: GeoIP2
In-Reply-To:
References:
Message-ID:

Hi Frank,

> On 29 Mar 2018, at 10:23, Frank Liu wrote:
>
> The nginx geoip module http://nginx.org/en/docs/http/ngx_http_geoip_module.html is using the legacy MaxMind db. I just read that March 2018 will be the last publicly available download of the legacy db, and on Jan 2, 2019 it will be removed.
> https://dev.maxmind.com/geoip/geoip2/geolite2/
>
> Is there a plan to switch to GeoIP2?

There is the 3rd-party module that provides GeoIP v2 support for nginx:

https://github.com/leev/ngx_http_geoip2_module

Hope this helps.

From nginx-forum at forum.nginx.org  Fri Mar 30 02:40:28 2018
From: nginx-forum at forum.nginx.org (hostcanada2020)
Date: Thu, 29 Mar 2018 22:40:28 -0400
Subject: GeoIP2
In-Reply-To:
References:
Message-ID: <66a85e439f7d5d6e76c5290c88d9b2ca.NginxMailingListEnglish@forum.nginx.org>

There is another 3rd-party geolocation module that is based on IP2Location LITE, which is also free.

https://www.ip2location.com/developers/nginx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279227,279247#msg-279247

From nginx-forum at forum.nginx.org  Fri Mar 30 20:35:00 2018
From: nginx-forum at forum.nginx.org (linnading)
Date: Fri, 30 Mar 2018 16:35:00 -0400
Subject: Cache size exceeds max_size?
In-Reply-To: <20180324145311.GY77253@mdounin.ru>
References: <20180324145311.GY77253@mdounin.ru>
Message-ID:

Turns out it's not configured correctly :) Thanks for responding Maxim!

Best,
Linna

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279185,279259#msg-279259

From nginx-forum at forum.nginx.org  Sat Mar 31 10:36:35 2018
From: nginx-forum at forum.nginx.org (bcoz123)
Date: Sat, 31 Mar 2018 06:36:35 -0400
Subject: Is grpc keepalive supported ?
Message-ID: <9078533581e9145c40316632c5e02458.NginxMailingListEnglish@forum.nginx.org>

Hello everyone,

In the latest version (1.13.10), does "grpc_pass" support the "keepalive" option? My configuration is:

http {
    ...
    upstream backend {
        server localhost:50051;
        keepalive 300;
    }

    server {
        listen 80 http2;

        location / {
            grpc_pass grpc://backend;
        }
        ...
    }
    ...
}

It seems not to work: for every request from the front, I can still see nginx create a new tcp connection between nginx and the backend. Is there something wrong in my configuration file, or is it just not supported in this version?

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279261,279261#msg-279261

From mdounin at mdounin.ru  Sat Mar 31 22:25:21 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 1 Apr 2018 01:25:21 +0300
Subject: Is grpc keepalive supported ?
In-Reply-To: <9078533581e9145c40316632c5e02458.NginxMailingListEnglish@forum.nginx.org>
References: <9078533581e9145c40316632c5e02458.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20180331222521.GT77253@mdounin.ru>

Hello!

On Sat, Mar 31, 2018 at 06:36:35AM -0400, bcoz123 wrote:

> In the latest version (1.13.10), does "grpc_pass" support the
> "keepalive" option?

[...]

> It seems not to work: for every request from the front, I can still
> see nginx create a new tcp connection between nginx and the backend.
> Is there something wrong in my configuration file, or is it just not
> supported in this version?

It is supported, though connections may still be closed in some specific cases (unlikely to occur with valid gRPC requests though).

Note well that keepalive connection caches are per-worker, so if you are using multiple worker processes you may need to do multiple tests to see it working.
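(One way to make the reuse easy to observe while testing; a throwaway sketch, not a production setting:

    # temporary, for testing only: a single worker means a single
    # keepalive cache, so every request can hit the same cached connection
    worker_processes 1;

Connection reuse can then be confirmed by watching connections to port 50051, e.g. with "netstat -tn".)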
-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Sat Mar 31 23:07:57 2018
From: nginx-forum at forum.nginx.org (masonicboom)
Date: Sat, 31 Mar 2018 19:07:57 -0400
Subject: Anonymize IP logging
In-Reply-To:
References:
Message-ID:

I made a module called ipscrub that does this: http://www.ipscrub.org. It hashes the IP address with an ephemeral salt, so that you can match up requests from the same IP (using the hash), but each time the salt cycles, it becomes impossible to match an IP address with a hash in the logs.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,226003,279264#msg-279264
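(Per the ipscrub project's README, usage is roughly along these lines; the directive and variable names are as published there, the salt period and format name are arbitrary examples, and this is an untested sketch:

    http {
        # rotate the salt every hour; hashes are unlinkable across periods
        ipscrub_period_seconds 3600;

        log_format anonymized '[$remote_addr_ipscrub] "$request" $status';
        access_log /var/log/nginx/access.log anonymized;
    }
)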