From nginx-forum at forum.nginx.org Tue Nov 1 01:52:22 2016 From: nginx-forum at forum.nginx.org (ngineo) Date: Mon, 31 Oct 2016 21:52:22 -0400 Subject: setting up client ip and hostname in nginx? Message-ID: I am working on an AWS Elastic Beanstalk instance, which runs a Java application served through Nginx (no load balancer in front, just a standalone instance). I need to set a cookie to capture the client IP and client hostname. Is it possible to do this in nginx, and if yes, how? Below is my nginx configuration file: location / { proxy_pass http://127.0.0.1:5000; proxy_http_version 1.1; proxy_set_header Connection $connection_upgrade; proxy_set_header Upgrade $http_upgrade; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270675,270675#msg-270675 From nginx-forum at forum.nginx.org Tue Nov 1 13:44:12 2016 From: nginx-forum at forum.nginx.org (olat) Date: Tue, 01 Nov 2016 09:44:12 -0400 Subject: location regex Message-ID: <7231903ed546348b8dc7a6a091f58fff.NginxMailingListEnglish@forum.nginx.org> Hello, I would like to ask about regex. Why don't these two behave the same? location ~ /(apple/|pear/(small|big)/|test(ing|er)/(fruit|vegis)_)* location ~ /apple/*|/pear/(small|big)/*|/test(ing|er)/(fruit|vegis)_* Could you point me to good practice? Ola Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270678,270678#msg-270678 From igor at sysoev.ru Tue Nov 1 14:04:35 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 1 Nov 2016 17:04:35 +0300 Subject: location regex In-Reply-To: <7231903ed546348b8dc7a6a091f58fff.NginxMailingListEnglish@forum.nginx.org> References: <7231903ed546348b8dc7a6a091f58fff.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 01 Nov 2016, at 16:44, olat wrote: > Hello, > > I would like to ask about regex. Why don't these two behave the same? > > location ~ /(apple/|pear/(small|big)/|test(ing|er)/(fruit|vegis)_)* > > location ~ /apple/*|/pear/(small|big)/*|/test(ing|er)/(fruit|vegis)_* > > > Could you point me to good practice? The good practice is not to use regex at all. location /apple/ { } location /pear/ { } etc. -- Igor Sysoev http://nginx.com From eric.cox at kroger.com Tue Nov 1 15:15:45 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Tue, 1 Nov 2016 15:15:45 +0000 Subject: Blocking tens of thousands of IP's Message-ID: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Is anyone aware of a performance difference between using return 403; vs deny all; when mapping against a list of tens of thousands of IPs? Thanks ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zxcvbn4038 at gmail.com Tue Nov 1 21:37:59 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 1 Nov 2016 17:37:59 -0400 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: I don't think managing large lists of IPs is nginx's strength - as far as I can tell all of its ACLs are arrays that have the be iterated through on each request. When I do have to manage IP lists in Nginx I try to compress the lists into the most compact CIDR representation so there is less to search. Here is a perl snippet I use to do that (handles ipv4 and ipv6): #!/usr/bin/perl use NetAddr::IP; my @addresses; foreach my $subnet (split(/\s+/, $list_of_ips)) { push(@addresses, NetAddr::IP->new($subnet)); } foreach my $cidr (NetAddr::IP::compact(@addresses)) { if ($cidr->version == 4) { print $cidr . "\n"; } else { print $cidr->short() . "/" . $cidr->masklen() . "\n"; } On Tue, Nov 1, 2016 at 11:15 AM, Cox, Eric S wrote: > Is anyone aware of a difference performance wise between using > > > > return 403; > > > > vs > > > > deny all; > > > > When mapping against a list of tens of thousands of ip? > > > > Thanks > > ------------------------------ > > This e-mail message, including any attachments, is for the sole use of the > intended recipient(s) and may contain information that is confidential and > protected by law from unauthorized disclosure. Any unauthorized review, > use, disclosure or distribution is prohibited. If you are not the intended > recipient, please contact the sender by reply e-mail and destroy all copies > of the original message. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Tue Nov 1 21:42:24 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 1 Nov 2016 17:42:24 -0400 Subject: nginx and FIX server In-Reply-To: References: Message-ID: Probably a better solution then most! On Fri, Oct 28, 2016 at 10:42 PM, Alex Samad wrote: > Hi > > Not really an option in current setup. The rate limit is to stop > clients with bad fix servers that spam our fix server. > > Right now we have a custom bit of java code that that bit rate limits > tcp streams.. > > just bought into nginx so looking at stream proxing it through it instead > A > > On 29 October 2016 at 02:48, CJ Ess wrote: > > Cool. Probably off topic, but why rate limit FIX? My solution for heavy > > traders was always to put them on their own hardware and pass the costs > back > > to them. They are usually invested in whatever strategy they are using > and > > happy to pay up. > > > > > > > > On Fri, Oct 28, 2016 at 1:29 AM, Alex Samad wrote: > >> > >> Hi > >> > >> yeah I have had a very quick look, just wondering if any one on the > >> list had set one up. > >> > >> Alex > >> > >> On 28 October 2016 at 16:15, CJ Ess wrote: > >> > Maybe this is what you want: > >> > https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html > >> > > >> > See the parts about proxy_download_rate and proxy_upload_rate > >> > > >> > On Thu, Oct 27, 2016 at 11:22 PM, Alex Samad > wrote: > >> >> > >> >> Yep > >> >> > >> >> On 28 October 2016 at 11:57, CJ Ess wrote: > >> >> > FIX as in the financial information exchange protocol? 
> >> >> > > >> >> > On Thu, Oct 27, 2016 at 7:19 PM, Alex Samad > >> >> > wrote: > >> >> >> > >> >> >> Hi > >> >> >> > >> >> >> any one setup nginx infront of a fix engine to do rate limiting ? > >> >> >> > >> >> >> Alex > >> >> >> > >> >> >> _______________________________________________ > >> >> >> nginx mailing list > >> >> >> nginx at nginx.org > >> >> >> http://mailman.nginx.org/mailman/listinfo/nginx > >> >> > > >> >> > > >> >> > > >> >> > _______________________________________________ > >> >> > nginx mailing list > >> >> > nginx at nginx.org > >> >> > http://mailman.nginx.org/mailman/listinfo/nginx > >> >> > >> >> _______________________________________________ > >> >> nginx mailing list > >> >> nginx at nginx.org > >> >> http://mailman.nginx.org/mailman/listinfo/nginx > >> > > >> > > >> > > >> > _______________________________________________ > >> > nginx mailing list > >> > nginx at nginx.org > >> > http://mailman.nginx.org/mailman/listinfo/nginx > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeff.dyke at gmail.com Tue Nov 1 21:46:21 2016 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Tue, 1 Nov 2016 17:46:21 -0400 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: what is your firewall?, that is the place to block subnets etc, i assume they are not random ips, they are likely from a block owned by someone?? On Tue, Nov 1, 2016 at 5:37 PM, CJ Ess wrote: > I don't think managing large lists of IPs is nginx's strength - as far as > I can tell all of its ACLs are arrays that have the be iterated through on > each request. > > When I do have to manage IP lists in Nginx I try to compress the lists > into the most compact CIDR representation so there is less to search. Here > is a perl snippet I use to do that (handles ipv4 and ipv6): > > #!/usr/bin/perl > > use NetAddr::IP; > > my @addresses; > > foreach my $subnet (split(/\s+/, $list_of_ips)) { > push(@addresses, NetAddr::IP->new($subnet)); > } > > foreach my $cidr (NetAddr::IP::compact(@addresses)) { > if ($cidr->version == 4) { > print $cidr . "\n"; > } else { > print $cidr->short() . "/" . $cidr->masklen() . "\n"; > } > > > On Tue, Nov 1, 2016 at 11:15 AM, Cox, Eric S wrote: > >> Is anyone aware of a difference performance wise between using >> >> >> >> return 403; >> >> >> >> vs >> >> >> >> deny all; >> >> >> >> When mapping against a list of tens of thousands of ip? >> >> >> >> Thanks >> >> ------------------------------ >> >> This e-mail message, including any attachments, is for the sole use of >> the intended recipient(s) and may contain information that is confidential >> and protected by law from unauthorized disclosure. Any unauthorized review, >> use, disclosure or distribution is prohibited. If you are not the intended >> recipient, please contact the sender by reply e-mail and destroy all copies >> of the original message. 
>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.cox at kroger.com Tue Nov 1 21:48:07 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Tue, 1 Nov 2016 21:48:07 +0000 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> , Message-ID: <74A4D440E25E6843BC8E324E67BB3E394558B05D@N060XBOXP38.kroger.com> Random, blocks, certain durations, etc. Its very random and or short lived which is something we don't want to move to the firewall at the moment -----Original Message----- From: Jeff Dyke [jeff.dyke at gmail.com] Received: Tuesday, 01 Nov 2016, 5:46PM To: nginx at nginx.org [nginx at nginx.org] Subject: Re: Blocking tens of thousands of IP's what is your firewall?, that is the place to block subnets etc, i assume they are not random ips, they are likely from a block owned by someone?? On Tue, Nov 1, 2016 at 5:37 PM, CJ Ess > wrote: I don't think managing large lists of IPs is nginx's strength - as far as I can tell all of its ACLs are arrays that have the be iterated through on each request. When I do have to manage IP lists in Nginx I try to compress the lists into the most compact CIDR representation so there is less to search. Here is a perl snippet I use to do that (handles ipv4 and ipv6): #!/usr/bin/perl use NetAddr::IP; my @addresses; foreach my $subnet (split(/\s+/, $list_of_ips)) { push(@addresses, NetAddr::IP->new($subnet)); } foreach my $cidr (NetAddr::IP::compact(@addresses)) { if ($cidr->version == 4) { print $cidr . "\n"; } else { print $cidr->short() . "/" . $cidr->masklen() . "\n"; } On Tue, Nov 1, 2016 at 11:15 AM, Cox, Eric S > wrote: Is anyone aware of a difference performance wise between using return 403; vs deny all; When mapping against a list of tens of thousands of ip? Thanks ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex at samad.com.au Tue Nov 1 21:50:14 2016 From: alex at samad.com.au (Alex Samad) Date: Wed, 2 Nov 2016 08:50:14 +1100 Subject: nginx and FIX server In-Reply-To: References: Message-ID: Hi So you have done a setup ? Alex On 2 November 2016 at 08:42, CJ Ess wrote: > Probably a better solution then most! > > On Fri, Oct 28, 2016 at 10:42 PM, Alex Samad wrote: >> >> Hi >> >> Not really an option in current setup. The rate limit is to stop >> clients with bad fix servers that spam our fix server. >> >> Right now we have a custom bit of java code that that bit rate limits >> tcp streams.. >> >> just bought into nginx so looking at stream proxing it through it instead >> A >> >> On 29 October 2016 at 02:48, CJ Ess wrote: >> > Cool. Probably off topic, but why rate limit FIX? My solution for heavy >> > traders was always to put them on their own hardware and pass the costs >> > back >> > to them. They are usually invested in whatever strategy they are using >> > and >> > happy to pay up. >> > >> > >> > >> > On Fri, Oct 28, 2016 at 1:29 AM, Alex Samad wrote: >> >> >> >> Hi >> >> >> >> yeah I have had a very quick look, just wondering if any one on the >> >> list had set one up. >> >> >> >> Alex >> >> >> >> On 28 October 2016 at 16:15, CJ Ess wrote: >> >> > Maybe this is what you want: >> >> > https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html >> >> > >> >> > See the parts about proxy_download_rate and proxy_upload_rate >> >> > >> >> > On Thu, Oct 27, 2016 at 11:22 PM, Alex Samad >> >> > wrote: >> >> >> >> >> >> Yep >> >> >> >> >> >> On 28 October 2016 at 11:57, CJ Ess wrote: >> >> >> > FIX as in the financial information exchange protocol? >> >> >> > >> >> >> > On Thu, Oct 27, 2016 at 7:19 PM, Alex Samad >> >> >> > wrote: >> >> >> >> >> >> >> >> Hi >> >> >> >> >> >> >> >> any one setup nginx infront of a fix engine to do rate limiting ? 
>> >> >> >> >> >> >> >> Alex >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> >> nginx mailing list >> >> >> >> nginx at nginx.org >> >> >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> > >> >> >> > >> >> >> > >> >> >> > _______________________________________________ >> >> >> > nginx mailing list >> >> >> > nginx at nginx.org >> >> >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> >> _______________________________________________ >> >> >> nginx mailing list >> >> >> nginx at nginx.org >> >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> > >> >> > >> >> > >> >> > _______________________________________________ >> >> > nginx mailing list >> >> > nginx at nginx.org >> >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> >> nginx mailing list >> >> nginx at nginx.org >> >> http://mailman.nginx.org/mailman/listinfo/nginx >> > >> > >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rainer at ultra-secure.de Tue Nov 1 21:50:47 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 1 Nov 2016 22:50:47 +0100 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: > Am 01.11.2016 um 22:46 schrieb Jeff Dyke : > > what is your firewall?, that is the place to block subnets etc, i assume they are not random ips, they are likely from a block owned by someone?? Depends on the firewall, but our network-guys would refuse to do that (and have so in the past). Apparently, the performance of firewalls when loaded with thousands of rules isn?t much to brag about ;-) Additionally, they like to create their rules by hand instead of generating them (old school). How are the IPs gathered? From lucas at lucasrolff.com Tue Nov 1 21:51:46 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Tue, 01 Nov 2016 22:51:46 +0100 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558B05D@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> , <74A4D440E25E6843BC8E324E67BB3E394558B05D@N060XBOXP38.kroger.com> Message-ID: <58190E72.4000304@lucasrolff.com> You could very well do a small ipset together with iptables, it's fast, and you don't have to reload for every subnet / ip you add. Doing it within nginx is rather.. Yeah. -- Best Regards, Lucas Rolff Cox, Eric S wrote: > Random, blocks, certain durations, etc. Its very random and or short > lived which is something we don't want to move to the firewall at the > moment > > -----Original Message----- > *From:* Jeff Dyke [jeff.dyke at gmail.com] > *Received:* Tuesday, 01 Nov 2016, 5:46PM > *To:* nginx at nginx.org [nginx at nginx.org] > *Subject:* Re: Blocking tens of thousands of IP's > > what is your firewall?, that is the place to block subnets etc, i > assume they are not random ips, they are likely from a block owned by > someone?? 
> > On Tue, Nov 1, 2016 at 5:37 PM, CJ Ess > wrote: > > I don't think managing large lists of IPs is nginx's strength - as > far as I can tell all of its ACLs are arrays that have the be > iterated through on each request. > > When I do have to manage IP lists in Nginx I try to compress the > lists into the most compact CIDR representation so there is less > to search. Here is a perl snippet I use to do that (handles ipv4 > and ipv6): > > #!/usr/bin/perl > > use NetAddr::IP; > > my @addresses; > > foreach my $subnet (split(/\s+/, $list_of_ips)) { > push(@addresses, NetAddr::IP->new($subnet)); > } > > foreach my $cidr (NetAddr::IP::compact(@addresses)) { > if ($cidr->version == 4) { > print $cidr . "\n"; > } else { > print $cidr->short() . "/" . $cidr->masklen() . "\n"; > } > > > On Tue, Nov 1, 2016 at 11:15 AM, Cox, Eric S > wrote: > > Is anyone aware of a difference performance wise between using > > return 403; > > vs > > deny all; > > When mapping against a list of tens of thousands of ip? > > Thanks > > > ------------------------------------------------------------------------ > > This e-mail message, including any attachments, is for the > sole use of the intended recipient(s) and may contain > information that is confidential and protected by law from > unauthorized disclosure. Any unauthorized review, use, > disclosure or distribution is prohibited. If you are not the > intended recipient, please contact the sender by reply e-mail > and destroy all copies of the original message. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > ------------------------------------------------------------------------ > > This e-mail message, including any attachments, is for the sole use of > the intended recipient(s) and may contain information that is > confidential and protected by law from unauthorized disclosure. Any > unauthorized review, use, disclosure or distribution is prohibited. If > you are not the intended recipient, please contact the sender by reply > e-mail and destroy all copies of the original message. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Nov 1 22:26:40 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 01 Nov 2016 15:26:40 -0700 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: <20161101222640.5484625.57537.15414@lazygranch.com> ? ? Original Message ? From: Cox, Eric S Sent: Tuesday, November 1, 2016 8:16 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Blocking tens of thousands of IP's Is anyone aware of a difference performance wise between using ? return 403; ? vs ? deny all; ? When mapping against a list of tens of thousands of ip? ? Thanks ? ------------- I started a thread on blocking via nginx a while ago. I will most assuredly get the terminology wrong here, but what I noticed is nginx reads the data from the IP then blocks the actual processing. 
?The fact you see the IP in your nginx access log indicates nginx spent some time on the IP request. It is more efficient to block the IP space at the firewall. ?For one thing, it keeps the access.log cleaner since the requests never show up. I still maintain a file compatible with nginx, but have a script to convert the file to an IPFW table.? I receive nothing but grief when I mention in forums about blocking the IP space of what consider not to be eyeballs. I just see no reason to serve AWS, OVH, etc. OVH has been documented in nation state hacking as command and control.? I block one or two commercial sites every time I process the log. (Obviously sites I haven't seen before since are not in the ipfw table.)I flag the obvious hacking and have scripts to display all the entries or just the IPs. From eric.cox at kroger.com Tue Nov 1 22:35:34 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Tue, 1 Nov 2016 22:35:34 +0000 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> , Message-ID: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> Currently we track all access logs realtime via an in house built log aggregation solution. Various algorithms are setup to detect said IPS whether it be by hit rate, country, known types of attacks etc. These IPS are typically identified within a few mins and we reload to banned list every 60 seconds. We just moved some services from apache where we were doing this without any noticable performance impact. Have this working in nginx but was looking for general suggestion on how to optimize if at all possible. -----Original Message----- From: Rainer Duffner [rainer at ultra-secure.de] Received: Tuesday, 01 Nov 2016, 5:51PM To: nginx at nginx.org [nginx at nginx.org] Subject: Re: Blocking tens of thousands of IP's > Am 01.11.2016 um 22:46 schrieb Jeff Dyke : > > what is your firewall?, that is the place to block subnets etc, i assume they are not random ips, they are likely from a block owned by someone?? Depends on the firewall, but our network-guys would refuse to do that (and have so in the past). Apparently, the performance of firewalls when loaded with thousands of rules isn?t much to brag about ;-) Additionally, they like to create their rules by hand instead of generating them (old school). How are the IPs gathered? _______________________________________________ nginx mailing list nginx at nginx.org https://urldefense.proofpoint.com/v2/url?u=http-3A__mailman.nginx.org_mailman_listinfo_nginx&d=CwIGaQ&c=WUZzGzAb7_N4DvMsVhUlFrsw4WYzLoMP5bgx2U7ydPE&r=20GRp3QiDlDBgTH4mxQcOIMPCXcNvWGMx5Y0qmfF8VE&m=FMODc3JSzrdrdzR0wbnGKDiUZI8s8iei9P6-WI1uOp8&s=AduDjg3qFeqZtQhu6YkSmcp-pwUZ6xy-IjPk0rGo0Xs&e= ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rainer at ultra-secure.de Tue Nov 1 22:41:24 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 1 Nov 2016 23:41:24 +0100 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> Message-ID: <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> > Am 01.11.2016 um 23:35 schrieb Cox, Eric S : > > Currently we track all access logs realtime via an in house built log aggregation solution. Various algorithms are setup to detect said IPS whether it be by hit rate, country, known types of attacks etc. These IPS are typically identified within a few mins and we reload to banned list every 60 seconds. We just moved some services from apache where we were doing this without any noticable performance impact. Have this working in nginx but was looking for general suggestion on how to optimize if at all possible. Ah, if you already have the data pre-processed? I?d move blocking to the host?s firewall, as suggested. Long term, I want to do this (or at least be able to), too. We (MSP) have a rather large number of firewalls and telling the network-guys ?Block this IP at all of them? does not work (it would probably take them the better part of the day). They don?t believe in automation... -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.cox at kroger.com Tue Nov 1 22:43:27 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Tue, 1 Nov 2016 22:43:27 +0000 Subject: Blocking tens of thousands of IP's In-Reply-To: <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com>, <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> Message-ID: <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com> Unfortunately much like others have stated, we also don't have the automation at the firewall layer to move as quickly as we would like. So at the moment its not an option. -----Original Message----- From: Rainer Duffner [rainer at ultra-secure.de] Received: Tuesday, 01 Nov 2016, 6:41PM To: nginx at nginx.org [nginx at nginx.org] Subject: Re: Blocking tens of thousands of IP's Am 01.11.2016 um 23:35 schrieb Cox, Eric S >: Currently we track all access logs realtime via an in house built log aggregation solution. Various algorithms are setup to detect said IPS whether it be by hit rate, country, known types of attacks etc. These IPS are typically identified within a few mins and we reload to banned list every 60 seconds. We just moved some services from apache where we were doing this without any noticable performance impact. Have this working in nginx but was looking for general suggestion on how to optimize if at all possible. Ah, if you already have the data pre-processed? I?d move blocking to the host?s firewall, as suggested. Long term, I want to do this (or at least be able to), too. We (MSP) have a rather large number of firewalls and telling the network-guys ?Block this IP at all of them? does not work (it would probably take them the better part of the day). They don?t believe in automation... 
________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Nov 1 22:47:08 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 01 Nov 2016 15:47:08 -0700 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> , <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> Message-ID: <20161101224708.5484625.22131.15420@lazygranch.com> ? ? Original Message ? From: Cox, Eric S Sent: Tuesday, November 1, 2016 3:35 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: RE: Blocking tens of thousands of IP's Currently we track all access logs realtime via an in house built log aggregation solution. Various algorithms are setup to detect said IPS whether it be by hit rate, country, known types of attacks etc. These IPS are typically identified within a few mins and we reload to banned list every 60 seconds. We just moved some services from apache where we were doing this without any noticable performance impact. Have this working in nginx but was looking for general suggestion on how to optimize if at all possible. ----------- You would have to reload/restart nginx to block dynamically. ?That alone might be the CPU hit.? ? From nginx-forum at forum.nginx.org Tue Nov 1 22:49:04 2016 From: nginx-forum at forum.nginx.org (olat) Date: Tue, 01 Nov 2016 18:49:04 -0400 Subject: location regex In-Reply-To: References: Message-ID: <2b2a85075254b927f0c01202bc494bdb.NginxMailingListEnglish@forum.nginx.org> Looks like there is a bug in the forum. 2 the same topics and the response ended up in the wrong thread, mixed up ;-) Anyway, Thanks Igor for a quick response. Could you explain more why regex is not a good idea? I am asking about regex in the context of caching some of the requests on front-end proxy to speed up django app loading dynamic content where session is involved. Does it mean each of the filtered out requests we would like to cache, should duplicate the same location block? server { ... proxy_set_header Host myhost; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto- $scheme; # catch only /appfoo* and /appbar* location ~ /app(foo|bar)* { proxy_pass http://ip; proxy_redirect http://ip $scheme://$server_name; proxy_cache $cache_zone_name; proxy_cache_key "$request_uri"; ... proxy_cache_methods GET HEAD; proxy_cache_bypass $cache_refresh; proxy_no_cache $skip_cache; proxy_ignore_headers "Set-Cookie" "Vary" "Expires"; proxy_hide_header Set-Cookie; } # everything else goes here location / { proxy_pass http://ip; proxy_redirect http://ip $scheme://$server_name; } Thanks in advance for help. O. 
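A minimal sketch of one way to avoid duplicating the cache directives across many such paths, following the prefix-location advice given earlier in this digest; the include file name below is an assumption, not something from the thread:

# /etc/nginx/app_cache.inc (hypothetical) holds the shared proxy_cache* directives from the question
location /appfoo {
    proxy_pass http://ip;
    proxy_redirect http://ip $scheme://$server_name;
    include /etc/nginx/app_cache.inc;
}
location /appbar {
    proxy_pass http://ip;
    proxy_redirect http://ip $scheme://$server_name;
    include /etc/nginx/app_cache.inc;
}
# everything else stays uncached
location / {
    proxy_pass http://ip;
    proxy_redirect http://ip $scheme://$server_name;
}

Each prefix location stays small and the cache settings live in one file, so adding or removing a cached path does not touch the other locations.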
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,17473,270697#msg-270697 From rainer at ultra-secure.de Tue Nov 1 22:53:56 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 1 Nov 2016 23:53:56 +0100 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com> Message-ID: > Am 01.11.2016 um 23:43 schrieb Cox, Eric S : > > Unfortunately much like others have stated, we also don't have the automation at the firewall layer to move as quickly as we would like. So at the moment its not an option. If you get hammered, even serving the 403-page is actually noticeable traffic. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.cox at kroger.com Tue Nov 1 22:57:09 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Tue, 1 Nov 2016 22:57:09 +0000 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com>, Message-ID: <74A4D440E25E6843BC8E324E67BB3E394558B3B3@N060XBOXP38.kroger.com> Is it. However our frontend capacity spans across multiple data centers, a dozen+ nginx instances, and over 70 cores of processing power. We are not as concerned with overloading the frontend as we are with certain endpoints that might be single instance legacy apps etc. -----Original Message----- From: Rainer Duffner [rainer at ultra-secure.de] Received: Tuesday, 01 Nov 2016, 6:54PM To: nginx at nginx.org [nginx at nginx.org] Subject: Re: Blocking tens of thousands of IP's Am 01.11.2016 um 23:43 schrieb Cox, Eric S >: Unfortunately much like others have stated, we also don't have the automation at the firewall layer to move as quickly as we would like. So at the moment its not an option. If you get hammered, even serving the 403-page is actually noticeable traffic. ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Nov 1 22:57:54 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 01 Nov 2016 15:57:54 -0700 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com> Message-ID: <20161101225754.5484625.48322.15425@lazygranch.com> If you get hammered, even serving the 403-page is actually noticeable traffic. --------- Nginx rate limiting works very well. ? 
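For reference, a minimal sketch of that built-in rate limiting; the zone name, size, rate and backend below are illustrative assumptions, not values from this thread:

limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;   # goes in the http{} block; keyed on the client IP

server {
    location / {
        limit_req zone=perip burst=20 nodelay;   # allow short bursts, reject the rest
        proxy_pass http://127.0.0.1:5000;        # placeholder backend
    }
}

Rejected requests get a 503 by default; limit_req_status can change that in nginx 1.3.15 and later.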
From me+lists.nginx at tomthorogood.co.uk Tue Nov 1 23:05:17 2016 From: me+lists.nginx at tomthorogood.co.uk (Tom Thorogood) Date: Wed, 02 Nov 2016 09:35:17 +1030 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> <15B19760-FECC-4895-875A-B1645F248394@ultra-secure.de> <74A4D440E25E6843BC8E324E67BB3E394558B2F8@N060XBOXP38.kroger.com> Message-ID: <1478041517.1593252.774391769.3381CDA7@webmail.messagingengine.com> Hi Eric, This is a rather shameless plug here, but I wrote an nginx module designed to efficiently block (or filter) large numbers of IP addresses. It's a two part system with the nginx module being https://github.com/tmthrgd/nginx-ip-blocker and a separate agent daemon here https://github.com/tmthrgd/ip-blocker-agent . It uses shared memory to store the IP addresses and binary search to iterate through them. It might not work for your circumstance, but it just might. Kind Regards, Tom Thorogood. On Wed, 2 Nov 2016, at 09:13 AM, Cox, Eric S wrote: > Unfortunately much like others have stated, we also don't have the > automation at the firewall layer to move as quickly as we would like. > So at the moment its not an option. > > -----Original Message----- *From:* Rainer Duffner [rainer at ultra- > secure.de] *Received:* Tuesday, 01 Nov 2016, 6:41PM *To:* > nginx at nginx.org [nginx at nginx.org] *Subject:* Re: Blocking tens of > thousands of IP's > > >> Am 01.11.2016 um 23:35 schrieb Cox, Eric S : >> >> Currently we track all access logs realtime via an in house built log >> aggregation solution. Various algorithms are setup to detect said IPS >> whether it be by hit rate, country, known types of attacks etc. These >> IPS are typically identified within a few mins and we reload to >> banned list every 60 seconds. We just moved some services from apache >> where we were doing this without any noticable performance impact. >> Have this working in nginx but was looking for general suggestion on >> how to optimize if at all possible. > > > Ah, if you already have the data pre-processed? > > I?d move blocking to the host?s firewall, as suggested. > > Long term, I want to do this (or at least be able to), too. > > We (MSP) have a rather large number of firewalls and telling the network- > guys ?Block this IP at all of them? does not work (it would probably > take them the better part of the day). > They don?t believe in automation... > > > This e-mail message, including any attachments, is for the sole use of > the intended recipient(s) and may contain information that is > confidential and protected by law from unauthorized disclosure. Any > unauthorized review, use, disclosure or distribution is prohibited. If > you are not the intended recipient, please contact the sender by reply > e-mail and destroy all copies of the original message. > _________________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From spywdec at gmail.com Wed Nov 2 03:33:15 2016 From: spywdec at gmail.com (dec w) Date: Wed, 2 Nov 2016 11:33:15 +0800 Subject: setting up client ip and hostname in nginx? In-Reply-To: References: Message-ID: you can get client ip, but you can't get client hostname. $host just your server hostname. 
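A minimal sketch of the cookie part of the original question, with a made-up cookie name; nginx exposes the connecting client's address as $remote_addr, but it does not resolve a client hostname, so only the IP can be set here:

location / {
    proxy_pass http://127.0.0.1:5000;                           # backend from the original post
    proxy_set_header X-Real-IP $remote_addr;                    # also pass the client IP to the app
    add_header Set-Cookie "client_ip=$remote_addr; Path=/";     # "client_ip" is a hypothetical cookie name
}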
2016-11-01 9:52 GMT+08:00 ngineo : > i am working on AWS Elastic Beanstalk Instance, which runs Java applicaiton > servered through Nginx ( no load balancer in front, just a standalone > instance ) > I need to set cookie to catch client ip and client hostname. Is this > possible to do it in nginx and if yes then how? > Below if my nginx configuration file: > > location / { > proxy_pass http://127.0.0.1:5000; > proxy_http_version 1.1; > > proxy_set_header Connection $connection_upgrade; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,270675,270675#msg-270675 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 2 06:28:30 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 02 Nov 2016 02:28:30 -0400 Subject: Nginx Kodi User Agent secure_link blocking / banning Message-ID: <5fb0dbd6683bf7ee286f2c2c371210e7.NginxMailingListEnglish@forum.nginx.org> So with Nginx my access.logs show allot of Kodi user agents from what I look up online Kodi is a app that runs on Phones, TV sticks, Mac, PC etc and it is used for watching live TV I reckon its a pretty abusive app or service since there is allot going around about IPTV and how illegal it is. The issue I have is I am receiving allot of spammy errors from them like this. [02/Nov/2016:06:46:58 +0100] "HEAD /media/files/5b/4e/80/79ecf5e1db30cd313adcac277134389b.mp4?md5=RoSdLIex-9qnGbGdpSyoDDojjTM&expires=1478083618 HTTP/1.1" Status:403 0 "http://networkflare.com/media/files/5b/4e/80/79ecf5e1db30cd313adcac277134389b.mp4?md5=RoSdLIex-9qnGbGdpSyoDDojjTM&expires=1478083618" "Kodi/16.1 (Linux; Android 5.1.1; AFTM Build/LVY48F) Android/5.1.1 Sys_CPU/armv7l App_Bitness/32 Version/16.1-Git:2016-04-24-c327c53" [02/Nov/2016:06:47:01 +0100] "GET /media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619 HTTP/1.1" Status:403 162 "http://networkflare.com/media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619" "KODI/21.1 (Linux; Android 4.4.2; m201 Build/KOT49H) Kodi_Fork_KODI/1.0 Android/4.4.2 Sys_CPU/armv7l App_Bitness/32 Version/21.1-Git:2016-10-15-c327c53-dirty" [02/Nov/2016:06:47:03 +0100] "GET /media/files/cf/0d/38/8d62ecb3f7813ca45ce561e5ab31314f.mp4?md5=b_1dqChBf3PZthSuYDWmNYehZRo&expires=1478083621 HTTP/1.1" Status:403 162 "http://networkflare.com/media/files/cf/0d/38/8d62ecb3f7813ca45ce561e5ab31314f.mp4?md5=b_1dqChBf3PZthSuYDWmNYehZRo&expires=1478083621" "Kodi/16.1 (Linux; Android 4.4.4; SM-N900V Build/KTU84P) Android/4.4.4 Sys_CPU/armv7l App_Bitness/32 Version/16.1-Git:2016-04-24-f6ceced" [02/Nov/2016:06:47:04 +0100] "GET /media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619 HTTP/1.1" Status:403 162 "http://networkflare.com/media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619" "KODI/21.1 (Linux; Android 4.4.2; m201 Build/KOT49H) Kodi_Fork_KODI/1.0 Android/4.4.2 Sys_CPU/armv7l App_Bitness/32 Version/21.1-Git:2016-10-15-c327c53-dirty" Now I don't host any IPTV services I only have my own static MP4's, My access.log clearly 
displays they are trying to hit static MP4 files they have obviously not used my site to obtain the correct link what is why the secure_link module is denying them and they seem to be pulling the link straight from HTML as the ampersand inside the referrer and request URL shows (&), Another thing that makes me curious it how the referrer URL is an exact match to what the request URL is. And considering my site does a 301 redirect on all links to http://www.networkflare.com/* there is no way that the referrer should be without a www. in the URL. (Allot of things don't seem right about these requests) Are these bots has anyone had any experience with KODI before and should I just ignore these requests or take the next step by blacklisting Kodi matching user agents. I also read that Kodi does not display adverts similar to a adblocker and I have a major problem with those who try to hotlink hijack steal bandwidth and waste resources for free as I am sure allot of others do. Thanks for reading looking forward to what advice and input others can share on what should be done. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270705#msg-270705 From lists at lazygranch.com Wed Nov 2 07:12:13 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 02 Nov 2016 00:12:13 -0700 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <5fb0dbd6683bf7ee286f2c2c371210e7.NginxMailingListEnglish@forum.nginx.org> References: <5fb0dbd6683bf7ee286f2c2c371210e7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161102071213.5484625.23370.15451@lazygranch.com> Kodi is the renamed xbmc. I use it myself, but I never "aimed" it at a website. I just view my own videos or use the kodi plug-ins. You can install it yourself on a PC and see it is intended to be just a media player. It really isn't any different that seeing VLC as the agent.? Perhaps someone wrote a plugin for your website. Make that a poorly written plugin ;-) Do you offer your mp4 files to the public? I've been told but have no proof that Kodi jammed on a Roku stick could contain malware. I have only used it on Windows and Linux.? ? Original Message ? From: c0nw0nk Sent: Tuesday, November 1, 2016 11:28 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Nginx Kodi User Agent secure_link blocking / banning So with Nginx my access.logs show allot of Kodi user agents from what I look up online Kodi is a app that runs on Phones, TV sticks, Mac, PC etc and it is used for watching live TV I reckon its a pretty abusive app or service since there is allot going around about IPTV and how illegal it is. The issue I have is I am receiving allot of spammy errors from them like this. 
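For context, a minimal sketch of the kind of secure_link setup these 403s imply, assuming the hash covers the expiry timestamp, the URI and the client address; the shared secret is a placeholder:

location /media/ {
    secure_link $arg_md5,$arg_expires;                                # md5= and expires= query arguments, as in the logged URLs
    secure_link_md5 "$secure_link_expires$uri$remote_addr secret";    # "secret" is a placeholder shared key

    if ($secure_link = "")  { return 403; }   # hash missing or computed for a different URI/IP/key
    if ($secure_link = "0") { return 410; }   # hash valid but the link has expired
}

With $remote_addr in the hash, a link copied out of the page only works from the IP it was generated for, which would explain the 403s when other clients replay it.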
[02/Nov/2016:06:46:58 +0100] "HEAD /media/files/5b/4e/80/79ecf5e1db30cd313adcac277134389b.mp4?md5=RoSdLIex-9qnGbGdpSyoDDojjTM&expires=1478083618 HTTP/1.1" Status:403 0 "http://networkflare.com/media/files/5b/4e/80/79ecf5e1db30cd313adcac277134389b.mp4?md5=RoSdLIex-9qnGbGdpSyoDDojjTM&expires=1478083618" "Kodi/16.1 (Linux; Android 5.1.1; AFTM Build/LVY48F) Android/5.1.1 Sys_CPU/armv7l App_Bitness/32 Version/16.1-Git:2016-04-24-c327c53" [02/Nov/2016:06:47:01 +0100] "GET /media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619 HTTP/1.1" Status:403 162 "http://networkflare.com/media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619" "KODI/21.1 (Linux; Android 4.4.2; m201 Build/KOT49H) Kodi_Fork_KODI/1.0 Android/4.4.2 Sys_CPU/armv7l App_Bitness/32 Version/21.1-Git:2016-10-15-c327c53-dirty" [02/Nov/2016:06:47:03 +0100] "GET /media/files/cf/0d/38/8d62ecb3f7813ca45ce561e5ab31314f.mp4?md5=b_1dqChBf3PZthSuYDWmNYehZRo&expires=1478083621 HTTP/1.1" Status:403 162 "http://networkflare.com/media/files/cf/0d/38/8d62ecb3f7813ca45ce561e5ab31314f.mp4?md5=b_1dqChBf3PZthSuYDWmNYehZRo&expires=1478083621" "Kodi/16.1 (Linux; Android 4.4.4; SM-N900V Build/KTU84P) Android/4.4.4 Sys_CPU/armv7l App_Bitness/32 Version/16.1-Git:2016-04-24-f6ceced" [02/Nov/2016:06:47:04 +0100] "GET /media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619 HTTP/1.1" Status:403 162 "http://networkflare.com/media/files/12/d1/df/c057fab9ca845f4ae182796a124da8a2.mp4?md5=ILfKhx7G3Mt_RsZjhTNRk5RnXXI&expires=1478083619" "KODI/21.1 (Linux; Android 4.4.2; m201 Build/KOT49H) Kodi_Fork_KODI/1.0 Android/4.4.2 Sys_CPU/armv7l App_Bitness/32 Version/21.1-Git:2016-10-15-c327c53-dirty" Now I don't host any IPTV services I only have my own static MP4's, My access.log clearly displays they are trying to hit static MP4 files they have obviously not used my site to obtain the correct link what is why the secure_link module is denying them and they seem to be pulling the link straight from HTML as the ampersand inside the referrer and request URL shows (&), Another thing that makes me curious it how the referrer URL is an exact match to what the request URL is. And considering my site does a 301 redirect on all links to http://www.networkflare.com/* there is no way that the referrer should be without a www. in the URL. (Allot of things don't seem right about these requests) Are these bots has anyone had any experience with KODI before and should I just ignore these requests or take the next step by blacklisting Kodi matching user agents. I also read that Kodi does not display adverts similar to a adblocker and I have a major problem with those who try to hotlink hijack steal bandwidth and waste resources for free as I am sure allot of others do. Thanks for reading looking forward to what advice and input others can share on what should be done. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270705#msg-270705 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Nov 2 07:23:41 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 02 Nov 2016 03:23:41 -0400 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <20161102071213.5484625.23370.15451@lazygranch.com> References: <20161102071213.5484625.23370.15451@lazygranch.com> Message-ID: <3d453e93bd92daf36967ddbf033684eb.NginxMailingListEnglish@forum.nginx.org> gariac Wrote: ------------------------------------------------------- > Kodi is the renamed xbmc. I use it myself, but I never "aimed" it at a > website. I just view my own videos or use the kodi plug-ins. You can > install it yourself on a PC and see it is intended to be just a media > player. It really isn't any different that seeing VLC as the agent.? > > Perhaps someone wrote a plugin for your website. Make that a poorly > written plugin ;-) > > Do you offer your mp4 files to the public? Well the idea of the secure_link module is to prevent people directly accessing any file without their own generated link. The generated hash matches that user's IP, the URL and contains a UNIX timestamp what is set to expire the URL after about 5 hours. With the secure_link module you can also include other personal information on the user into the encrypted string of the generated secure link such as cookies, user-agents etc. I guess to answer your question no the files should not be publicly accessible without first accessing the webpage that generates the secure_link for you to access the file via. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270707#msg-270707 From lists at lazygranch.com Wed Nov 2 07:47:07 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 02 Nov 2016 00:47:07 -0700 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <3d453e93bd92daf36967ddbf033684eb.NginxMailingListEnglish@forum.nginx.org> References: <20161102071213.5484625.23370.15451@lazygranch.com> <3d453e93bd92daf36967ddbf033684eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161102074707.5484625.96830.15454@lazygranch.com> ?Apparently there is a scheme to feed urls to kodi.? ?https://m.reddit.com/r/kodi/comments/3lz84g/how_do_you_open_a_youtube_video_from_the_shell/ Block/ban as you see fit. ;-) These people are edge users of Kodi.? But you may want to search the interwebs to see if someone is attempting to write a kodi plugin for your service. ? The vast majority of the Kodi plug-ins are third party.? From nginx-forum at forum.nginx.org Wed Nov 2 08:16:19 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 02 Nov 2016 04:16:19 -0400 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <20161102074707.5484625.96830.15454@lazygranch.com> References: <20161102074707.5484625.96830.15454@lazygranch.com> Message-ID: <9cf1f720ed1cb8b95358622d7577039b.NginxMailingListEnglish@forum.nginx.org> gariac Wrote: ------------------------------------------------------- > ?Apparently there is a scheme to feed urls to kodi.? > > ?https://m.reddit.com/r/kodi/comments/3lz84g/how_do_you_open_a_youtube > _video_from_the_shell/ > > Block/ban as you see fit. ;-) These people are edge users of Kodi.? 
> > But you may want to search the interwebs to see if someone is > attempting to write a kodi plugin for your service. ? The vast > majority of the Kodi plug-ins are third party.? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thank's I am curious though what do they use CURL on Kodi plugins to obtain the link from webpages ? Also curious how they expect that to work when the link grabbed has been generated for that IP address only. Any other IP that requests the same link would result in a 403. I also though YouTube generated MP4 links match the users IP and requested URI only the same as the secure_link module. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270709#msg-270709 From lists at lazygranch.com Wed Nov 2 08:34:37 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 02 Nov 2016 01:34:37 -0700 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <9cf1f720ed1cb8b95358622d7577039b.NginxMailingListEnglish@forum.nginx.org> References: <20161102074707.5484625.96830.15454@lazygranch.com> <9cf1f720ed1cb8b95358622d7577039b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161102083437.5484625.52833.15457@lazygranch.com> I don't know how to state this without being insulting, but Kodi is designed to be used by dumb people. That is how I use it. It seems pointless to me to try to hack Kodi into doing something it wasn't meant to do. That is why I called that example an edge case.? There is a YouTube plugin for Kodi. http://kodi.wiki/view/Add-on:YouTube ?Load up Kodi and go hack yourself. That is the best way to learn how to block them.? There is some hacker variant of Kodi called TVMC. It may report as Kodi. User Agent reporting is on the honor system. ;-) At this point we're kind of getting off topic, though I did find that nginx module interesting. ? Original Message ? From: c0nw0nk Sent: Wednesday, November 2, 2016 1:16 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Nginx Kodi User Agent secure_link blocking / banning gariac Wrote: ------------------------------------------------------- > ?Apparently there is a scheme to feed urls to kodi. > > ?https://m.reddit.com/r/kodi/comments/3lz84g/how_do_you_open_a_youtube > _video_from_the_shell/ > > Block/ban as you see fit. ;-) These people are edge users of Kodi.? > > But you may want to search the interwebs to see if someone is > attempting to write a kodi plugin for your service. ? The vast > majority of the Kodi plug-ins are third party.? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Thank's I am curious though what do they use CURL on Kodi plugins to obtain the link from webpages ? Also curious how they expect that to work when the link grabbed has been generated for that IP address only. Any other IP that requests the same link would result in a 403. I also though YouTube generated MP4 links match the users IP and requested URI only the same as the secure_link module. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270709#msg-270709 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From livingdeadzerg at yandex.ru Wed Nov 2 09:28:39 2016 From: livingdeadzerg at yandex.ru (navern) Date: Wed, 02 Nov 2016 12:28:39 +0300 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5819B1C7.5070906@yandex.ru> Hello, I have same question. Are there any estimates for this feature? Maybe i missed something:) On 22.09.2016 15:00, mastercan wrote: > Is there something like a release timeline for HTTP/2 server push feature in > Nginx? It would help make https connections faster and get rid of one TCP > roundtrip. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269749,269749#msg-269749 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Nov 2 09:43:05 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 02 Nov 2016 05:43:05 -0400 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <20161102083437.5484625.52833.15457@lazygranch.com> References: <20161102083437.5484625.52833.15457@lazygranch.com> Message-ID: <70ad4a82bf68dfcf282d923cd18b3cab.NginxMailingListEnglish@forum.nginx.org> Yes I see after looking at the various plugins on GitHub it seems they replace the & ampersand string with & when they pull contents from the HTML. They also fake / spoof referrers and can change user-agents etc but they do it properly not like the person who has ended up in my logs. As you said they did it is badly. I feel this could be a loosing battle if they are spoofing the user-agent referrer etc it is pointless for me to block them since they will update their plugin to change it to match with legitimate web-browser user-agents like chrome, Firefox, Internet Explorer, Microsoft edge etc. What a pickle this is :( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270713#msg-270713 From nginx-forum at forum.nginx.org Wed Nov 2 10:17:06 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 02 Nov 2016 06:17:06 -0400 Subject: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <70ad4a82bf68dfcf282d923cd18b3cab.NginxMailingListEnglish@forum.nginx.org> References: <20161102083437.5484625.52833.15457@lazygranch.com> <70ad4a82bf68dfcf282d923cd18b3cab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <676756e88a7faf39ffb1f951b08cade6.NginxMailingListEnglish@forum.nginx.org> I wouldn't mind those using app's like Kodi if they did not just hotlink and steal my links. If my adverts was still there and I am being reimbursed for my work and content and bandwidth they are consuming. Then I wouldn't mind but I bet Kodi is not the only app with plugins doing this. The only solution I can think of is to lock the site to paid accounts only. So only registered users who pay can view the HTML page that generates my secure links so they may access my static MP4 files. 
:( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270714#msg-270714 From luky-37 at hotmail.com Wed Nov 2 10:43:01 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 2 Nov 2016 10:43:01 +0000 Subject: AW: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: <676756e88a7faf39ffb1f951b08cade6.NginxMailingListEnglish@forum.nginx.org> References: <20161102083437.5484625.52833.15457@lazygranch.com> <70ad4a82bf68dfcf282d923cd18b3cab.NginxMailingListEnglish@forum.nginx.org>, <676756e88a7faf39ffb1f951b08cade6.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have a question: secure_link is correctly blocking those requests so its not generating any traffic. Why does it bother you then, if it is already blocked? From nginx-forum at forum.nginx.org Wed Nov 2 10:54:01 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 02 Nov 2016 06:54:01 -0400 Subject: AW: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: References: Message-ID: Lukas Tribus Wrote: ------------------------------------------------------- > I have a question: secure_link is correctly blocking those requests so > its not generating any traffic. > > Why does it bother you then, if it is already blocked? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Yes the links are generated correctly but because their plugin does not currently contain the regex to understand ampersands in HTML. If they was to fix their plugin and use regex to replace the ampersand & with & then the link would work correctly. It bothers me because the fix is as simple as that and it is a problem that is wasting bandwidth. On apps that steal content. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270716#msg-270716 From igor at sysoev.ru Wed Nov 2 11:28:05 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 2 Nov 2016 14:28:05 +0300 Subject: location regex In-Reply-To: <2b2a85075254b927f0c01202bc494bdb.NginxMailingListEnglish@forum.nginx.org> References: <2b2a85075254b927f0c01202bc494bdb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7C38A96F-4DB3-41FB-8BBA-057614785B19@sysoev.ru> On 02 Nov 2016, at 01:49, olat wrote: > Looks like there is a bug in the forum. 2 the same topics and the response > ended up in the wrong thread, mixed up ;-) Yes, the forum has some issues when answer come from the mailing list. > Anyway, Thanks Igor for a quick response. Could you explain more why regex > is not a good idea? I am asking about regex in the context of caching some > of the requests on front-end proxy to speed up django app loading dynamic > content where session is involved. Does it mean each of the filtered out > requests we would like to cache, should duplicate the same location block? > > > server { > ... > > proxy_set_header Host myhost; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto- $scheme; > > # catch only /appfoo* and /appbar* > location ~ /app(foo|bar)* { > proxy_pass http://ip; > proxy_redirect http://ip $scheme://$server_name; > > proxy_cache $cache_zone_name; > proxy_cache_key "$request_uri"; > ... 
> proxy_cache_methods GET HEAD; > proxy_cache_bypass $cache_refresh; > proxy_no_cache $skip_cache; > > proxy_ignore_headers "Set-Cookie" "Vary" "Expires"; > proxy_hide_header Set-Cookie; > } > # everything else goes here > location / { > proxy_pass http://ip; > proxy_redirect http://ip $scheme://$server_name; > } > > Thanks in advance for help. The regex locations are tested in order of their appearance. The prefix locations match the longest match regardless of their order. If you will have a lot of locations then prefix locations are much easier to maintain: you can add, change and delete prefix locations without worrying how this will affect other locations. -- Igor Sysoev http://nginx.com From nginx-forum at forum.nginx.org Wed Nov 2 12:49:52 2016 From: nginx-forum at forum.nginx.org (pkjchate) Date: Wed, 02 Nov 2016 08:49:52 -0400 Subject: Question about Error logs Message-ID: <1ea9cdc920bfac7e35d8cf939d9eed36.NginxMailingListEnglish@forum.nginx.org> Hi, I just migrated from Apache to Nginx. I have a question error logs. Nginx now logs same error again and again in the log file, but with different request IPs. Is there any setting I can do, so that Nginx only logs 1 error in same file only once in the file. The problem is I am getting a lot of same errors and the file gets large in size quickly. thanks for your help. Best, pankaj Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270718,270718#msg-270718 From mdounin at mdounin.ru Wed Nov 2 12:57:31 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Nov 2016 15:57:31 +0300 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: <20161102125731.GO73038@mdounin.ru> Hello! On Tue, Nov 01, 2016 at 05:37:59PM -0400, CJ Ess wrote: > I don't think managing large lists of IPs is nginx's strength - as far as I > can tell all of its ACLs are arrays that have the be iterated through on > each request. > > When I do have to manage IP lists in Nginx I try to compress the lists into > the most compact CIDR representation so there is less to search. Here is a > perl snippet I use to do that (handles ipv4 and ipv6): Yes, the "allow" / "deny" directives do sequential scan of address blocks specified, and this may not be very efficient when working with large sets of IPs. For large lists of IPs it's usually better idea to use the geo module, combined with if/return: geo $blocked { defaul 0; 192.2.0.0/16 1; ... } if ($blocked) { return 403; } Documentation is here: http://nginx.org/en/docs/http/ngx_http_geo_module.html -- Maxim Dounin http://nginx.org/ From luky-37 at hotmail.com Wed Nov 2 13:37:14 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 2 Nov 2016 13:37:14 +0000 Subject: AW: AW: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: References: , Message-ID: > Yes the links are generated correctly but because their plugin does not > currently contain the regex to understand ampersands in HTML. If they was to > fix their plugin and use regex to replace the ampersand & with & then > the link would work correctly. > > It bothers me because the fix is as simple as that and it is a problem that > is wasting bandwidth. On apps that steal content. I see. I guess the solution could be to generate the secret URL with complicated Javascript, so that it cannot be easily parsed from the HTML. They can always try to trun JS code too, but it is way more complicated. 
Lukas From tseveendorj at gmail.com Thu Nov 3 06:05:55 2016 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Thu, 3 Nov 2016 14:05:55 +0800 Subject: exclude error_page on geoip Message-ID: Hello, I need to use geoip module for allow specific region access to my website. But blocked users should see the error_page. Users are blocked and cannot see custom error_page. I don't want to see error_page from other domain. I need to except only error page which is not applied to geoip block. How do this BR, Tseveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 3 13:18:55 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Nov 2016 16:18:55 +0300 Subject: exclude error_page on geoip In-Reply-To: References: Message-ID: <20161103131854.GS73038@mdounin.ru> Hello! On Thu, Nov 03, 2016 at 02:05:55PM +0800, Tseveendorj Ochirlantuu wrote: > Hello, > > I need to use geoip module for allow specific region access to my website. > But blocked users should see the error_page. Users are blocked and cannot > see custom error_page. > > I don't want to see error_page from other domain. I need to except only > error page which is not applied to geoip block. Try something like this: server { listen 80; error_page 403 /403.html; location / { if ($blocked) { return 403; } ... } location = /403.html { # no geoip restrictions here } } With such a configuration GeoIP-based restrictions are only applied in "location /", but doesn't affect requests to /403.html. That is, nginx will be able to return the error page correctly. -- Maxim Dounin http://nginx.org/ From daniel at linux-nerd.de Thu Nov 3 14:12:39 2016 From: daniel at linux-nerd.de (Daniel) Date: Thu, 3 Nov 2016 15:12:39 +0100 Subject: Alias or root directive Message-ID: Hi there, i try to add a images folder but seems not work. Could someone tell me what i am doing wrong: location ~ ^/en/holidays/shared/images { root /mnt/nfs/uat/; } When i replace root with alias it has also no effect :-( Cheers Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Nov 3 17:46:04 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 17:46:04 +0000 Subject: Alias or root directive In-Reply-To: References: Message-ID: <20161103174604.GC23518@daoine.org> On Thu, Nov 03, 2016 at 03:12:39PM +0100, Daniel wrote: Hi there, > i try to add a images folder but seems not work. > Could someone tell me what i am doing wrong: What one example http request do you want to make? What file on your filesystem do you want nginx to serve in response to that request? > location ~ ^/en/holidays/shared/images { > root /mnt/nfs/uat/; > } > > When i replace root with alias it has also no effect :-( "alias" in a regex location has special requirements. http://nginx.org/r/alias Cheers, f -- Francis Daly francis at daoine.org From daniel at linux-nerd.de Thu Nov 3 17:51:43 2016 From: daniel at linux-nerd.de (Daniel) Date: Thu, 3 Nov 2016 18:51:43 +0100 Subject: Alias or root directive In-Reply-To: <20161103174604.GC23518@daoine.org> References: <20161103174604.GC23518@daoine.org> Message-ID: <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> > >> i try to add a images folder but seems not work. >> Could someone tell me what i am doing wrong: > > What one example http request do you want to make? 
> I wanted to load such kind of URL: domain.de//en/holidays/shared/images/guides/germany/berlin.jpg > What file on your filesystem do you want nginx to serve in response to > that request? > on /mnt/nfs/uat/ are the folders like guides/germany/ >> location ~ ^/en/holidays/shared/images { >> root /mnt/nfs/uat/; >> } >> >> When i replace root with alias it has also no effect :-( > > "alias" in a regex location has special requirements. > I also tried with root instead of alias and i have the same behave cheers Daniel From nginx-forum at forum.nginx.org Thu Nov 3 18:17:35 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 03 Nov 2016 14:17:35 -0400 Subject: AW: AW: Nginx Kodi User Agent secure_link blocking / banning In-Reply-To: References: Message-ID: Well I do use Nginx with Lua I was planning on writing up a little Lua to replace body_contents outputs and include some JavaScript to append src links. For example in HTML : I would use Lua to obtain the link between the quotation and replace it with "" (Making it empty) and then use Lua to insert JavaScript like so into the body contents and have my link in that. The benefit of this that can allow me to break these Kodi users with Nginx + Lua is I can have the "_video" var in my regex outputs dynamicly changing making it hard for their regex inside their Kodi plugins to pick up on and it also has a added benefit it forces browsers with JavaScript disabled to enable JavaScript in order to watch videos. So it kills a few birds with one stone. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270705,270739#msg-270739 From francis at daoine.org Thu Nov 3 18:25:27 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 18:25:27 +0000 Subject: Alias or root directive In-Reply-To: <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> References: <20161103174604.GC23518@daoine.org> <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> Message-ID: <20161103182527.GD23518@daoine.org> On Thu, Nov 03, 2016 at 06:51:43PM +0100, Daniel wrote: Hi there, > > What one example http request do you want to make? > > I wanted to load such kind of URL: > domain.de//en/holidays/shared/images/guides/germany/berlin.jpg > > > What file on your filesystem do you want nginx to serve in response to > > that request? > > on /mnt/nfs/uat/ are the folders like guides/germany/ "alias" replaces the bit in the location with the bit in the alias, and uses the result as the filename to serve. So try: location ^~ /en/holidays/shared/images/ { alias /mnt/nfs/uat/; } Note - it is "^~" so that *everything* below /en/holidays/shared/images/ will be served from the filesystem. (That is not a regex pattern.) Also, the number of / at the end of the location and the alias are the same. If you check your error_log, you should see an indication of what file nginx tried to serve, if it failed. f -- Francis Daly francis at daoine.org From daniel at linux-nerd.de Thu Nov 3 18:26:01 2016 From: daniel at linux-nerd.de (Daniel) Date: Thu, 3 Nov 2016 19:26:01 +0100 Subject: Alias or root directive In-Reply-To: <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> References: <20161103174604.GC23518@daoine.org> <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> Message-ID: <852B43C6-77E1-4BB0-9C4B-9E4B024C956E@linux-nerd.de> As i understand the Documentation correct then my entry is correct: location /en/holidays/shared/images/ { alias /mnt/nfs/uat/; } Anyways, when i try to use root instead of alias it has same result. 
Its getting ignored completely in the config. > Am 03.11.2016 um 18:51 schrieb Daniel : > >> >>> i try to add a images folder but seems not work. >>> Could someone tell me what i am doing wrong: >> >> What one example http request do you want to make? >> > > I wanted to load such kind of URL: > domain.de//en/holidays/shared/images/guides/germany/berlin.jpg > >> What file on your filesystem do you want nginx to serve in response to >> that request? >> > > on /mnt/nfs/uat/ are the folders like guides/germany/ > > >>> location ~ ^/en/holidays/shared/images { >>> root /mnt/nfs/uat/; >>> } >>> >>> When i replace root with alias it has also no effect :-( >> >> "alias" in a regex location has special requirements. >> > > I also tried with root instead of alias and i have the same behave > > cheers > > Daniel > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel at linux-nerd.de Thu Nov 3 18:29:35 2016 From: daniel at linux-nerd.de (Daniel) Date: Thu, 3 Nov 2016 19:29:35 +0100 Subject: Alias or root directive In-Reply-To: <20161103182527.GD23518@daoine.org> References: <20161103174604.GC23518@daoine.org> <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> <20161103182527.GD23518@daoine.org> Message-ID: <314F1C38-8099-43FA-96C5-9F0D5D37613D@linux-nerd.de> > > If you check your error_log, you should see an indication of what file > nginx tried to serve, if it failed. > Yes it tries to open the doc_root to open that file and this is totally wrong of course because this file is placed on /mnt/nfs/uat/guide/germany/berlin.jpg /var/www/d1/current/web/shared/images/guides/germany/berlin.jpg -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Nov 3 18:29:43 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 18:29:43 +0000 Subject: Alias or root directive In-Reply-To: <852B43C6-77E1-4BB0-9C4B-9E4B024C956E@linux-nerd.de> References: <20161103174604.GC23518@daoine.org> <52CFB0B3-3C4F-4656-9181-CEBE95139A95@linux-nerd.de> <852B43C6-77E1-4BB0-9C4B-9E4B024C956E@linux-nerd.de> Message-ID: <20161103182943.GE23518@daoine.org> On Thu, Nov 03, 2016 at 07:26:01PM +0100, Daniel wrote: Hi there, > As i understand the Documentation correct then my entry is correct: > > location /en/holidays/shared/images/ { > alias /mnt/nfs/uat/; > } Yes, that should work. So long as this location{} is the one that is used for the request that you make. If you have something like "location ~ jpg" as well, then *that* would be used instead of this one. http://nginx.org/r/location > Anyways, when i try to use root instead of alias it has same result. > Its getting ignored completely in the config. I'd suggest to check the logs then, to see what is happening instead. And ensure that you are getting nginx to use the new config, each time you change it. The logs should show that that has happened too. 
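To make the difference concrete: "root" appends the full request URI to the given path, while "alias" replaces the matched location prefix. For the request /en/holidays/shared/images/guides/germany/berlin.jpg from this thread the two map to different files:

# with root, the whole URI is appended:
location /en/holidays/shared/images/ {
    root /mnt/nfs/uat;
    # -> /mnt/nfs/uat/en/holidays/shared/images/guides/germany/berlin.jpg
}

# with alias, the matched prefix is replaced:
location /en/holidays/shared/images/ {
    alias /mnt/nfs/uat/;
    # -> /mnt/nfs/uat/guides/germany/berlin.jpg
}

Since the files live directly under /mnt/nfs/uat/guides/..., alias is the directive that matches the layout described here.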
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Nov 3 18:40:49 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 18:40:49 +0000 Subject: Question about Error logs In-Reply-To: <1ea9cdc920bfac7e35d8cf939d9eed36.NginxMailingListEnglish@forum.nginx.org> References: <1ea9cdc920bfac7e35d8cf939d9eed36.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161103184049.GF23518@daoine.org> On Wed, Nov 02, 2016 at 08:49:52AM -0400, pkjchate wrote: Hi there, > Is there any setting I can do, so that Nginx only logs 1 error in same file > only once in the file. I think "no", using just configuration. > The problem is I am getting a lot of same errors and the file gets large in > size quickly. I'd probably either try to fix the problem that leads to the errors; or post-process the file to eliminate the duplicates and only store the processed version. But neither of those is a direct answer to your question. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Nov 3 18:53:12 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 18:53:12 +0000 Subject: location regex In-Reply-To: <2b2a85075254b927f0c01202bc494bdb.NginxMailingListEnglish@forum.nginx.org> References: <2b2a85075254b927f0c01202bc494bdb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161103185312.GG23518@daoine.org> On Tue, Nov 01, 2016 at 06:49:04PM -0400, olat wrote: Hi there, > I am asking about regex in the context of caching some > of the requests on front-end proxy to speed up django app loading dynamic > content where session is involved. Does it mean each of the filtered out > requests we would like to cache, should duplicate the same location block? I think that a lot of this configuration will naturally inherit into a location{} block, so you could potentially set the common pieces at server{} level and just have the small number repeated in the individual location{}s. Or you could generate the nginx.conf fragment with lots of duplicate content; you would only have a single source in your generator to change. Or you could have the common config in an external file and "include" it multiple times. In general, you write the config a few times, and nginx reads it many times. So it is often worth making it easy for nginx to reads, even at the expense of making it harder for you to write. > # catch only /appfoo* and /appbar* > location ~ /app(foo|bar)* { Another feature of regexen is that it is not always obvious what they should do. For example, "/foo/app" and "/appbaz" will both match this location. (Effectively, the above is "does the request include the four-character string /app anywhere?", which is not what the comment suggests that you wanted.) f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Nov 3 19:00:35 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 19:00:35 +0000 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: <20161103190035.GH23518@daoine.org> On Tue, Nov 01, 2016 at 03:15:45PM +0000, Cox, Eric S wrote: Hi there, > Is anyone aware of a difference performance wise between using > > return 403; > > vs > > deny all; > > When mapping against a list of tens of thousands of ip? I think the answer is "no". I would expect that "return 403" would be quicker, since the rewrite phase happens before the access phase. 
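For a list of that size it also helps to keep the addresses out of nginx.conf entirely; geo accepts include, so the blocked set can be regenerated by a script and picked up with a plain reload. A sketch, with the file path only as an example:

# http context
geo $blocked {
    default  0;
    include  /etc/nginx/blocked_ips.conf;   # one "address/prefix 1;" entry per line
}

server {
    location / {
        if ($blocked) {
            return 403;
        }
        # ...
    }
}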
But I also suspect that the "checking the list of tens of thousands" that would have to happen first, would swamp any difference. I think that the general rule is that if you do not measure a difference, there is not an important difference to you. And yes, use "geo" rather than "map" or any other list. (Or: build one of each in your lab and measure.) Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Nov 3 19:07:31 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 3 Nov 2016 19:07:31 +0000 Subject: Help how to Proxy without redirect In-Reply-To: <5d0a66b7df9fb7ed640b210bc04073fc.NginxMailingListEnglish@forum.nginx.org> References: <5d0a66b7df9fb7ed640b210bc04073fc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161103190731.GI23518@daoine.org> On Mon, Oct 31, 2016 at 09:11:59AM -0400, tbaror wrote: Hi there, > I need to use Nginx as proxy and to pass all communication trough it > without redirecting to original web location. That's what proxy_pass does, more or less. Apart from "proxy" and "all communication"; you'd need different tools for those. > I have following configuration file the initial logon and welcome page works > well as soon as i click on link its getting redirected in to the original > web page. > any idea how to keep it on the proxy ? Look at the html that your upstream (proxy_pass address) provides. Anything in there that your browser might interpret as a link, should not start with "//" and should not start with "http://" or "https://", unless it is immediately followed by the server_name of your nginx. I suspect that you do not want "proxy_redirect off", though. http://nginx.org/r/proxy_redirect If you can show the request-and-response that works for you, then it might become clear why the request-and-response that fails for you, fails. Cheers, f -- Francis Daly francis at daoine.org From mail2ashish.g at gmail.com Thu Nov 3 19:40:11 2016 From: mail2ashish.g at gmail.com (Ashish Gupta) Date: Thu, 3 Nov 2016 14:40:11 -0500 Subject: Nginx SSL Setup Message-ID: Hello Team, I am using NGINX as a web server ot host some of the file and I need some help with the SSL Setup. Is there a way to create a keystore and use that in the configuration for SSL setup? I don't want to use the self signed certificate, i need sign the certificate with the company CA and import the Root and Issuing certificates. Regards, Ashish -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Thu Nov 3 21:46:40 2016 From: rainer at ultra-secure.de (Rainer Duffner) Date: Thu, 3 Nov 2016 22:46:40 +0100 Subject: Nginx SSL Setup In-Reply-To: References: Message-ID: <6A465B68-1A63-4E60-8C6D-96590C45E1CA@ultra-secure.de> > Am 03.11.2016 um 20:40 schrieb Ashish Gupta : > > Hello Team, > > I am using NGINX as a web server ot host some of the file and I need some help with the SSL Setup. Is there a way to create a keystore and use that in the configuration for SSL setup? > > I don't want to use the self signed certificate, i need sign the certificate with the company CA and import the Root and Issuing certificates. NGINX doesn?t use keystores (jks). You need to convert your (I assume) PKCS12 files into PEM files, split the private key and the certificates and configure them according to the documentation. 
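Once converted, the nginx side only needs the PEM files; a minimal sketch with placeholder names and paths, where the certificate file contains the server certificate followed by the issuing and root CA certificates:

server {
    listen 443 ssl;
    server_name files.example.internal;   # placeholder

    # server certificate first, then the intermediate/root CA certificates
    ssl_certificate     /etc/nginx/ssl/files.example.internal.fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/files.example.internal.key;
}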
http://nginx.org/en/docs/http/ngx_http_ssl_module.html or Mozilla?s interactive cheat-sheet: https://mozilla.github.io/server-side-tls/ssl-config-generator/ Though, of course, it?s always good to read the documentation provided by NGINX Inc, which is thankfully always very up to date and accurate. Google ?openssl convert pkcs12 pem? Off the top of my head it looks like ?openssl pkcs12 -in your.p12 -out your.pem -nodes? See this for creating key and csr: https://support.rackspace.com/how-to/generate-a-csr-with-openssl/ (or various other links that google spits out) Rainer -------------- next part -------------- An HTML attachment was scrubbed... URL: From hemanthnews at yahoo.com Fri Nov 4 04:04:01 2016 From: hemanthnews at yahoo.com (Hemanth Kumar) Date: Fri, 4 Nov 2016 04:04:01 +0000 (UTC) Subject: 504 Bad gateway error when server date/time is changed References: <1399786425.1032041.1478232241709.ref@mail.yahoo.com> Message-ID: <1399786425.1032041.1478232241709@mail.yahoo.com> Hi,Following is the environment OS: CentOS 7 (64 bit) NGINX: 1.10.1PHP/PHP-FPM:? 5.6 ZF2 Apache 2.4 The Web application is running with HTTPD configured to port 9080 and NGINX on port 80There is an option to set the date, time and timezone from the app.Whenever the date or time is changed on port-80, I get a 504 Bad gateway error but this error is not seen when Apache is used.When the timezone is changed, this 504 error does not come. NOTE: The entire application is being serviced using message queue and when the time/date setting is bypassed,? I don't see the error. Any idea why NGINX is behaving this way? -BestHemanth ?-Hemanth -------------- next part -------------- An HTML attachment was scrubbed... URL: From swaraj at semprehealth.com Fri Nov 4 08:50:37 2016 From: swaraj at semprehealth.com (Swaraj Banerjee) Date: Fri, 04 Nov 2016 08:50:37 +0000 Subject: Trouble using nginx tcp proxy Message-ID: <9c19ad6c-b8f2-f912-cc41-aaa77f1e4f12@mixmax.com> Hi all, I'm having some trouble using NGINX as a TCP proxy connecting to a customer's servers over an IPSec VPN. My setup:- 1 EC2 instance with NGINX plus configured as TCP proxy- 1 EC2 instance in same VPC running Openswan VPN- IPSec VPN with customer that is configured to only respond to requests from my proxy EC2 instance's public IP A visual of my setup is here: https://s3-us-west-1.amazonaws.com/static.semprehealth.com/nginx_stream.jpg My nginx config on proxy instance:user nginx;worker_processes auto; error_log /var/log/nginx/error.log debug;pid /var/run/nginx.pid; events {????worker_connections 1024;} stream {?upstream coupon_processors {??least_conn;??server 170.138.33.30:49841; ?} server {??listen 49841;??proxy_pass coupon_processors;?}} Problem:When I'm on proxy instance, I can send data over TCP to my customer's servers (170.138.33.30:49841). When I try to send data from another box, via the proxy, I don't see data returned. These are the error logs:2016/11/04 08:49:38 [info] 16345#16345: *5 client :49263 connected to 0.0.0.0:498412016/11/04 08:49:38 [info] 16345#16345: *5 proxy 5.5.0.53:26726 connected to 170.138.33.30:498412016/11/04 08:49:38 [info] 16345#16345: *5 client disconnected, bytes from/to client:105/0, bytes from/to upstream:0/105 Any reason why I can send data, but don't receive anything back? Thanks,Swaraj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dewanggaba at xtremenitro.org Fri Nov 4 09:05:30 2016 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 4 Nov 2016 16:05:30 +0700 Subject: Set header $upstream_response_time with proxy_cache directive Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! I have problem how to debugs current response time to upstream, my configuration is looks likes : ... upstream upstream_distribution { server full-fqdn.tld; } # common configuration location ~ \.(jpe?g|png|gif|webp)$ { proxy_pass http://upstream_distribution; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_valid 200 302 301 3d; proxy_cache images; proxy_cache_valid any 3s; proxy_cache_lock on; proxy_cache_min_uses 1; proxy_ignore_headers Cache-Control Expires; proxy_hide_header X-Cache; proxy_hide_header Via; proxy_hide_header ETag; more_set_headers "X-Cache: $upstream_cache_status"; more_set_headers "X-RTT: $upstream_response_time ms"; } # common configuration ... The problem is, the request was cached by nginx and didn't send response to upstream, so the response of the header was : age:94 cache-control:max-age=604800, public content-length:143862 content-type:image/webp date:Fri, 04 Nov 2016 09:03:44 GMT last-modified:Fri, 04 Nov 2016 05:41:55 GMT x-rtt:ms server:nginx status:200 x-cache:HIT Is it possible to capture upstream_response_time while the content was cached? Any hints and helps are appreciated. -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYHE9VGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcBIAD/wJQfckGIYS4QHTzP+ThjheEcd3bGXRiOjDzp1ixPIUd8WY9f16 ooVGR0ryzkahYSiaJHj7BGnq7otik9S58ZHnHyFjtroTAhcX5YExyskB9IebtWK0 vslk2qq0j59fVmnbhouD7ZI15YNg0YY2lDbiFqmi1KtfkfZJRHkm5rBYWi+MvEsa vtwKIV9O0nqIH09cVDOWQCJ1pIe5t5lKcQGWTXnVx/HdMOJNFJ5VO4j4Vqyxf829 IJX/nxgGOn46p6djMFx8hVeDK0AxtWjAUUjrUlyaW5OOSkXuuFRs7tdYBXhRZ1W5 cAZQ3XaFpOG8KgLRrI08skIfxIB8v/WCBojj/vulCuKxzGzLb3S8eFOmEcA48ZCV 6CsS/Th0BqYBakiziLawAKuhkCoq+mPETFPmMIzrDtUhMTL0Xqtt8NqFCr4SvlOA hK7UuRTvCPFnYEpsi4X/fcKxNd0mEC49fvgCHnevANop/6uj+gPFDOlrifzyOiFm 6nCYI5NyXV2eRAAEpiXR5JHLnm4Ch5CDRDLkfMaga25+uAFW0o7PfpG/SEtcMubT aHOkoUjVAnGyRJm3m94DF4VHqDlUz5N0bpS0QfRnUGzOYCoo3m4gy7ViOQ6UiDuL OWHKMwKAoVbbIvUpvqwFn2Q46qnIZtDlVtrr7/oNCo//fivpQyLClMGBLg== =Tyu9 -----END PGP SIGNATURE----- From nginx-forum at forum.nginx.org Fri Nov 4 09:37:43 2016 From: nginx-forum at forum.nginx.org (mex) Date: Fri, 04 Nov 2016 05:37:43 -0400 Subject: Blocking tens of thousands of IP's In-Reply-To: <58190E72.4000304@lucasrolff.com> References: <58190E72.4000304@lucasrolff.com> Message-ID: <1b4b42968a4f405c94122c05710a4192.NginxMailingListEnglish@forum.nginx.org> Lucas Rolff Wrote: ------------------------------------------------------- > You could very well do a small ipset together with iptables, it's > fast, > and you don't have to reload for every subnet / ip you add. 
we had the very same issue, 40k IPs to block daily and we came up with ipset add / del which is fast as hell and has a build-in TTL if you have a huge and dynamic set of ips to be blocked this is the way you should go cheers, mex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270680,270757#msg-270757 From nginx-forum at forum.nginx.org Fri Nov 4 09:43:47 2016 From: nginx-forum at forum.nginx.org (mex) Date: Fri, 04 Nov 2016 05:43:47 -0400 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> Message-ID: Hi Eric, see my reply https://forum.nginx.org/read.php?2,270680,270757#msg-270757 we do a similar thing but keep a counter within nginx (lua_shared_dict FTW) and export this stuff via /badass - location. although its not realtime we have a delay of 5 sec which is enough for us cheers, mex Cox, Eric S Wrote: ------------------------------------------------------- > Currently we track all access logs realtime via an in house built log > aggregation solution. Various algorithms are setup to detect said IPS > whether it be by hit rate, country, known types of attacks etc. These > IPS are typically identified within a few mins and we reload to banned > list every 60 seconds. We just moved some services from apache where > we were doing this without any noticable performance impact. Have this > working in nginx but was looking for general suggestion on how to > optimize if at all possible. > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270680,270758#msg-270758 From nginx-forum at forum.nginx.org Fri Nov 4 10:02:39 2016 From: nginx-forum at forum.nginx.org (bertuka) Date: Fri, 04 Nov 2016 06:02:39 -0400 Subject: 502 Bad Gateway nginx/1.2.1 Message-ID: <1ff79953d26267a31c209bd877faf3f9.NginxMailingListEnglish@forum.nginx.org> Hello, since a couple of days I am getting this error message all over my website: 502 Bad Gateway nginx/1.2.1 The thing is that my server uses apache... I have tried solutions I have found in google: erasing cache files on navigator and pc. and doesn't work. I have talked to my server administrators, and can't find where the problem comes from. Last option was, thinking that the issue comes from the Theme + Plugins I am using in my website (in wordpress) which is Talents Them from Kayapati. But they say the issue is not related to them... So I am lost, and I don't know what to do... Is there someone here that can help me, please? This is my website: www.conmuchoarte.net Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270759,270759#msg-270759 From tseveendorj at gmail.com Fri Nov 4 10:35:35 2016 From: tseveendorj at gmail.com (Tseveendorj Ochirlantuu) Date: Fri, 4 Nov 2016 18:35:35 +0800 Subject: exclude error_page on geoip In-Reply-To: <20161103131854.GS73038@mdounin.ru> References: <20161103131854.GS73038@mdounin.ru> Message-ID: Thank you very much. It is working :) On Thu, Nov 3, 2016 at 9:18 PM, Maxim Dounin wrote: > Hello! > > On Thu, Nov 03, 2016 at 02:05:55PM +0800, Tseveendorj Ochirlantuu wrote: > > > Hello, > > > > I need to use geoip module for allow specific region access to my > website. > > But blocked users should see the error_page. Users are blocked and cannot > > see custom error_page. > > > > I don't want to see error_page from other domain. I need to except only > > error page which is not applied to geoip block. 
> > Try something like this: > > server { > listen 80; > > error_page 403 /403.html; > > location / { > if ($blocked) { > return 403; > } > > ... > } > > location = /403.html { > # no geoip restrictions here > } > } > > With such a configuration GeoIP-based restrictions are only > applied in "location /", but doesn't affect requests to /403.html. > That is, nginx will be able to return the error page correctly. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Fri Nov 4 14:38:45 2016 From: black.fledermaus at arcor.de (basti) Date: Fri, 4 Nov 2016 15:38:45 +0100 Subject: 502 Bad Gateway nginx/1.2.1 In-Reply-To: <1ff79953d26267a31c209bd877faf3f9.NginxMailingListEnglish@forum.nginx.org> References: <1ff79953d26267a31c209bd877faf3f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, what does 'netstat -tulpen | grep 80' say? (run this as root to see procress) what does 'ps ax | grep apache' say? If you do not need nginx anymore why you do not uninstall it? Best Regards; On 04.11.2016 11:02, bertuka wrote: > Hello, > since a couple of days I am getting this error message all over my website: > 502 Bad Gateway nginx/1.2.1 > > The thing is that my server uses apache... I have tried solutions I have > found in google: erasing cache files on navigator and pc. and doesn't work. > I have talked to my server administrators, and can't find where the problem > comes from. > Last option was, thinking that the issue comes from the Theme + Plugins I am > using in my website (in wordpress) which is Talents Them from Kayapati. But > they say the issue is not related to them... > So I am lost, and I don't know what to do... > Is there someone here that can help me, please? > > This is my website: www.conmuchoarte.net > > Thanks! From r at roze.lv Fri Nov 4 15:06:37 2016 From: r at roze.lv (Reinis Rozitis) Date: Fri, 4 Nov 2016 17:06:37 +0200 Subject: 502 Bad Gateway nginx/1.2.1 In-Reply-To: References: <1ff79953d26267a31c209bd877faf3f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <010101d236ad$0ad614d0$20823e70$@roze.lv> > If you do not need nginx anymore why you do not uninstall it? I don't think the OP is running nginx (at least Server headers say it's apache). It looks like though that the error could be coming from an external resource. While inspecting the html source it seems generated by php (rather than for example coming from an ajax request). Does the WP have any banner plugins or something like that which could fetch something from remote server (beside a way old nginx version)? If anything try to disable all the plugins, see if it helps. rr From alex at samad.com.au Fri Nov 4 21:20:58 2016 From: alex at samad.com.au (Alex Samad) Date: Sat, 5 Nov 2016 08:20:58 +1100 Subject: ssllabs A+ rating Message-ID: Hi Any one got a write up on how to get a A+ from this site. I can get a A and I have to support tls1.0 which might be dragging me down ! From rpaprocki at fearnothingproductions.net Fri Nov 4 21:28:13 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 4 Nov 2016 14:28:13 -0700 Subject: ssllabs A+ rating In-Reply-To: References: Message-ID: https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html is a pretty decent write-up. 
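The kind of configuration those write-ups converge on looks roughly like the sketch below; treat the protocol list, cipher string and HSTS max-age as illustrative, since the right values depend on the clients that have to be supported (TLSv1 is kept here only because of that requirement):

server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/key.pem;         # placeholder

    ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers               HIGH:!aNULL:!MD5:!RC4:!3DES;
    ssl_prefer_server_ciphers on;

    # SSL Labs only awards A+ when an HSTS header with a long max-age is sent
    add_header Strict-Transport-Security "max-age=31536000" always;
}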
IME, you need to present an HSTS header, otherwise an A+ is never awarded even with the strictest cipher suite and largest keys and DH primes. To be frank though, achieving an A+ is not a very very worthwhile goal; yes, setting up strong crypto is _very_ important, but what's more important is understanding what you're configuring and why, not just reading a guidebook. May I also offer another tool for checking TLS configs: https://github.com/rbsec/sslscan, if only to have another source for verifying TLS configs (IMO, relying exclusively on one single opinion, e.g. Qualsys, as THE authoritative source of truth for a 'proper' secure config is dangerous). On Fri, Nov 4, 2016 at 2:20 PM, Alex Samad wrote: > Hi > > Any one got a write up on how to get a A+ from this site. > > I can get a A and I have to support tls1.0 which might be dragging me down > ! > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Fri Nov 4 22:57:43 2016 From: alex at samad.com.au (Alex Samad) Date: Sat, 5 Nov 2016 09:57:43 +1100 Subject: ssllabs A+ rating In-Reply-To: References: Message-ID: Hi Agree on the blindly following. But its good to know how to get there I also try this https://cryptoreport.websecurity.symantec.com/checker/ question tls/ssl compression is it worth it ? I have gzip setup, but I am guess tls/ssl compression is over the top. and know I have to read up about hsts and weather we need it or not :) So at current $job we still need tls1.0 because our clients need it... On 5 November 2016 at 08:28, Robert Paprocki wrote: > https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html is a pretty > decent write-up. > > IME, you need to present an HSTS header, otherwise an A+ is never awarded > even with the strictest cipher suite and largest keys and DH primes. > > To be frank though, achieving an A+ is not a very very worthwhile goal; yes, > setting up strong crypto is _very_ important, but what's more important is > understanding what you're configuring and why, not just reading a guidebook. > > May I also offer another tool for checking TLS configs: > https://github.com/rbsec/sslscan, if only to have another source for > verifying TLS configs (IMO, relying exclusively on one single opinion, e.g. > Qualsys, as THE authoritative source of truth for a 'proper' secure config > is dangerous). > > On Fri, Nov 4, 2016 at 2:20 PM, Alex Samad wrote: >> >> Hi >> >> Any one got a write up on how to get a A+ from this site. >> >> I can get a A and I have to support tls1.0 which might be dragging me down >> ! >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rpaprocki at fearnothingproductions.net Fri Nov 4 22:59:55 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 4 Nov 2016 15:59:55 -0700 Subject: ssllabs A+ rating In-Reply-To: References: Message-ID: Hi, On Fri, Nov 4, 2016 at 3:57 PM, Alex Samad wrote: > Hi > > Agree on the blindly following. But its good to know how to get there > I also try this > https://cryptoreport.websecurity.symantec.com/checker/ > > question > > tls/ssl compression is it worth it ? 
I have gzip setup, but I am guess > tls/ssl compression is over the top. > Do not use TLS compression. See https://en.wikipedia.org/wiki/CRIME. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anna.leachovsky at sap.com Sun Nov 6 06:51:31 2016 From: anna.leachovsky at sap.com (Leachovsky, Anna) Date: Sun, 6 Nov 2016 06:51:31 +0000 Subject: How to forward the request to a remote proxy Message-ID: <2fbcb399edee46e99c113c24906beab4@USPHLE13US01.global.corp.sap> Hi, I have nginx reverse proxy on Windows inside internal network. To be able to access the redirected URL I need to configure proxy (for example: proxy:8080) In apache I can use ProxyRemote or ProxyRemoteMatch parameters. The ProxyRemote directive forwards a request to a remote proxy when itself is acting as a proxy. In nginx I just cannot find the right parameter to use for getting the same behavior. Can you help? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Nov 6 09:54:14 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 6 Nov 2016 09:54:14 +0000 Subject: How to forward the request to a remote proxy In-Reply-To: <2fbcb399edee46e99c113c24906beab4@USPHLE13US01.global.corp.sap> References: <2fbcb399edee46e99c113c24906beab4@USPHLE13US01.global.corp.sap> Message-ID: <20161106095414.GJ23518@daoine.org> On Sun, Nov 06, 2016 at 06:51:31AM +0000, Leachovsky, Anna wrote: Hi there, > I have nginx reverse proxy on Windows inside internal network. To be able to access the redirected URL I need to configure proxy (for example: proxy:8080) nginx does not do that. So you would need something else as well as nginx, or something else instead of nginx. Cheers, f -- Francis Daly francis at daoine.org From fourlightson at hotmail.com Sun Nov 6 15:09:46 2016 From: fourlightson at hotmail.com (Scott McGillivray) Date: Sun, 6 Nov 2016 15:09:46 +0000 Subject: auth_basic within location block doesn't work when return is specified? Message-ID: i thought this would work but for some reason it doesn't. location /auth { auth_basic_user_file /etc/nginx/.htpasswd; auth_basic "Secret"; return 200 'hello'; } When i specify the return, 200 or 301, it just skips the auth_basic and processes the return statement. If i comment out the return statement it works OK. Ideally i want just an /auth endpoint that once authenticated it will 301 redirect to $host, e.g. return 301 http://$host Can someone explain why this behaves this way and what is the correct configuration. many thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Mon Nov 7 08:37:49 2016 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Mon, 7 Nov 2016 11:37:49 +0300 Subject: auth_basic within location block doesn't work when return is specified? In-Reply-To: References: Message-ID: <69697e4d-38be-0d8f-4ff9-9991c2fc227d@nginx.com> This behavior is cause by general request processing logic. You may look at ngx_http_core_module.h which defines request processing phases. You may notice that a rewrite phase ('return' acts as a rewrite, actually) is run before access phase. So you have your request returned before access rules are checked. At the same time, try_files phase as after the access phase. So you may try using: location /auth { auth_basic_user_file /etc/nginx/.htpasswd; auth_basic "Secret"; # try_files will be used only for a valid authenticated user try_files @redir =403; #403 will never be returned from here. 
} location @redir { return 200 'hello'; } This looks a little bit hacky, but is pretty reasonable e.g. if you want to return 404. Just curious, why won't you auth protect your final destination? On 06.11.2016 18:09, Scott McGillivray wrote: > > i thought this would work but for some reason it doesn't. > > |location /auth { auth_basic_user_file /etc/nginx/.htpasswd; auth_basic > "Secret"; return 200 'hello'; } | > > > When i specify the return, 200 or 301, it just skips the auth_basic > and processes the return statement. > > If i comment out the return statement it works OK. Ideally i want just > an |/auth| endpoint that once authenticated it will 301 redirect to > $host, e.g. return 301 http://$host > > Can someone explain why this behaves this way and what is the correct > configuration. > > many thanks > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Mon Nov 7 11:14:33 2016 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Mon, 7 Nov 2016 14:14:33 +0300 Subject: auth_basic within location block doesn't work when return is specified? In-Reply-To: <69697e4d-38be-0d8f-4ff9-9991c2fc227d@nginx.com> References: <69697e4d-38be-0d8f-4ff9-9991c2fc227d@nginx.com> Message-ID: Changed own example in the last moment and made a mistake: try files should address non existent file and then do an internal redirect. E.g. try_files nosuchfile.txt @ret; Only the last argument may be a redirect. On 07.11.2016 11:37, Igor A. Ippolitov wrote: > This behavior is cause by general request processing logic. You may > look at ngx_http_core_module.h which defines request processing phases. > You may notice that a rewrite phase ('return' acts as a rewrite, > actually) is run before access phase. So you have your request > returned before access rules are checked. > At the same time, try_files phase as after the access phase. So you > may try using: > > location /auth { > > auth_basic_user_file /etc/nginx/.htpasswd; > > auth_basic "Secret"; > > # try_files will be used only for a valid authenticated user > > try_files @redir =403; #403 will never be returned from here. > > } > > location @redir { > > return 200 'hello'; > > } > > > This looks a little bit hacky, but is pretty reasonable e.g. if you > want to return 404. > > Just curious, why won't you auth protect your final destination? > > On 06.11.2016 18:09, Scott McGillivray wrote: >> >> i thought this would work but for some reason it doesn't. >> >> |location /auth { auth_basic_user_file /etc/nginx/.htpasswd; >> auth_basic "Secret"; return 200 'hello'; } | >> >> >> When i specify the return, 200 or 301, it just skips the auth_basic >> and processes the return statement. >> >> If i comment out the return statement it works OK. Ideally i want >> just an |/auth| endpoint that once authenticated it will 301 redirect >> to $host, e.g. return 301 http://$host >> >> Can someone explain why this behaves this way and what is the correct >> configuration. >> >> many thanks >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
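Putting the correction together, the whole workaround reads like the sketch below; the first try_files argument just has to be a file that never exists under the document root, and the redirect target is illustrative:

location /auth {
    auth_basic           "Secret";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # try_files runs after the access phase, so basic auth is enforced
    # before the internal redirect; only the last argument may be a
    # named location.
    try_files /nonexistent @redir;
}

location @redir {
    return 301 http://$host/;
}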
URL: From nginx at 2xlp.com Tue Nov 8 01:14:09 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Mon, 7 Nov 2016 20:14:09 -0500 Subject: most efficient way to return on everything but a single directory? In-Reply-To: <69697e4d-38be-0d8f-4ff9-9991c2fc227d@nginx.com> References: <69697e4d-38be-0d8f-4ff9-9991c2fc227d@nginx.com> Message-ID: I'm doing a quick audit on an nginx deployment and want to make sure something is implemented correctly We have a handful of domains that redirect http traffic to https. we used to do this, which is very efficient: sever { listen 80: server_name example.com www.example.com; return 301 https://www.example.com$request_uri; } now we use SSL certificates via letsencrypt, and need to keep a specific location on port 80 open to proxy into our custom renewal client. the only thing I could think of was this: sever { listen 80: server_name example.com www.example.com; location /letsencrypt/ { # proxy to client } location / { return 301 https://www.example.com$request_uri; } } Is there a more efficient way to accomplish the above or is the above the best way? From igor at sysoev.ru Tue Nov 8 04:34:12 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 8 Nov 2016 07:34:12 +0300 Subject: most efficient way to return on everything but a single directory? In-Reply-To: References: <69697e4d-38be-0d8f-4ff9-9991c2fc227d@nginx.com> Message-ID: On 08 Nov 2016, at 04:14, Jonathan Vanasco wrote: > I'm doing a quick audit on an nginx deployment and want to make sure something is implemented correctly > > We have a handful of domains that redirect http traffic to https. > > we used to do this, which is very efficient: > > sever { > listen 80: > server_name example.com www.example.com; > return 301 https://www.example.com$request_uri; > } > > now we use SSL certificates via letsencrypt, and need to keep a specific location on port 80 open to proxy into our custom renewal client. > > the only thing I could think of was this: > > sever { > listen 80: > server_name example.com www.example.com; > > location /letsencrypt/ { > # proxy to client > } > location / { > return 301 https://www.example.com$request_uri; > } > } > > Is there a more efficient way to accomplish the above or is the above the best way? This is the right way. -- Igor Sysoev http://nginx.com From nginx-forum at forum.nginx.org Tue Nov 8 11:34:17 2016 From: nginx-forum at forum.nginx.org (Ashidubey) Date: Tue, 08 Nov 2016 06:34:17 -0500 Subject: SPDY + HTTP/2 In-Reply-To: <62ac5e3da36e2e48887700ec04640780.NginxMailingListEnglish@forum.nginx.org> References: <62ac5e3da36e2e48887700ec04640780.NginxMailingListEnglish@forum.nginx.org> Message-ID: you are using Bulk SMS India for marketing i also looking for that codes. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,263245,270810#msg-270810 From christian.cioni at staff.aruba.it Tue Nov 8 11:36:13 2016 From: christian.cioni at staff.aruba.it (Christian Cioni) Date: Tue, 8 Nov 2016 12:36:13 +0100 Subject: OCSP stapling Message-ID: <017401d239b4$4fa055d0$eee10170$@staff.aruba.it> Hi, on my server have activated a SSL in SNI configuration without problems, but for the OCSP stapling configurations, receive always ?no response sent? On my configuration have add: ssl_trusted_certificate /etc/nginx/ssl/CA.pem; ssl_stapling on; ssl_stapling_verify on; What can I check? -- Saluti ================================ Christian Cioni Technical Department Aruba.it http://www.aruba.it N? diretto: 0575/1939143 N? centralino: 0575/0505 N? 
fax: 0575/862300 MailTo: christian.cioni at staff.aruba.it ================================ : AVVISO PRIVACY = = = = = = = = = = = = = = = = = = = = Il contenuto della presente e-mail ed i suoi allegati, sono diretti esclusivamente al destinatario e devono ritenersi riservati, con divieto di diffusione o di uso non conforme alle finalit? per le quali la presente e-mail ? stata inviata. Pertanto, ne ? vietata la diffusione e la comunicazione da parte di soggetti diversi dal destinatario, ai sensi degli artt. 616 e ss. c.p. e D.lgs n. 196/03 Codice Privacy. Se la presente e-mail ed i suoi allegati sono stati ricevuti per errore, siete pregati di distruggere quanto ricevuto e di informare il mittente al seguente recapito: Mailto: christian.cioni at staff.aruba.it = = = = = = = = = = = = = = = = = = = = -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 8 13:13:01 2016 From: nginx-forum at forum.nginx.org (khav) Date: Tue, 08 Nov 2016 08:13:01 -0500 Subject: Make nginx treat another extension as mp4 Message-ID: <721e4e30700f70a0035904ff3ba2db83.NginxMailingListEnglish@forum.nginx.org> have converted an animated gif animated.gif to an mp4 animated.mp4.I then rename animated.mp4 to animated.gifv. How can i tell nginx to treat .gifv files as mp4. location ~* \.(mp4|gifv)$ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 10M; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270812,270812#msg-270812 From nginx-forum at forum.nginx.org Tue Nov 8 13:53:11 2016 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 08 Nov 2016 08:53:11 -0500 Subject: Make nginx treat another extension as mp4 In-Reply-To: <721e4e30700f70a0035904ff3ba2db83.NginxMailingListEnglish@forum.nginx.org> References: <721e4e30700f70a0035904ff3ba2db83.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8f2dd234ca438a1c1b404ef77f9934d3.NginxMailingListEnglish@forum.nginx.org> I think you could modify the conf/mime.types video/mp4 mp4 gifv; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270812,270813#msg-270813 From nginx-forum at forum.nginx.org Tue Nov 8 14:50:25 2016 From: nginx-forum at forum.nginx.org (khav) Date: Tue, 08 Nov 2016 09:50:25 -0500 Subject: Rewrite help Message-ID: <602cfbe0b18f662a2fff65eaf4a65a1e.NginxMailingListEnglish@forum.nginx.org> Suppose i have a url as `http://somesite.com/ekjkASDs.gifv` , i want to rewrite it as `http://somesite.com/vid.php?id=ekjkASDs` Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270815,270815#msg-270815 From anoopalias01 at gmail.com Tue Nov 8 14:58:26 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 8 Nov 2016 20:28:26 +0530 Subject: Rewrite help In-Reply-To: <602cfbe0b18f662a2fff65eaf4a65a1e.NginxMailingListEnglish@forum.nginx.org> References: <602cfbe0b18f662a2fff65eaf4a65a1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: try rewrite ^/(.*)\.gifv /vid.php?id=$1 last; On Tue, Nov 8, 2016 at 8:20 PM, khav wrote: > Suppose i have a url as `http://somesite.com/ekjkASDs.gifv` , i want to > rewrite it as `http://somesite.com/vid.php?id=ekjkASDs` > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,270815,270815#msg-270815 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
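Coming back to the .gifv question above: instead of editing the global conf/mime.types, the type can also be overridden just for that location, assuming the mp4 module is compiled in:

location ~* \.(mp4|gifv)$ {
    # serve .gifv with the mp4 MIME type without touching conf/mime.types
    types {
        video/mp4 mp4 gifv;
    }

    mp4;
    mp4_buffer_size     4M;
    mp4_max_buffer_size 10M;
}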
URL: From lagged at gmail.com Tue Nov 8 15:53:01 2016 From: lagged at gmail.com (Andrei) Date: Tue, 8 Nov 2016 17:53:01 +0200 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, This is a common misconception; "HTTP/2 support" is not the same as "HTTP/2 with server push support". That being said, the Nginx.org/community version does not support HTTP/2 with "Server Push" (which most consider the primary boost in HTTP/2), however it is available in Nginx Plus (paid subscription). As most useful Nginx Plus features hardly make it to the community version, I would expect the same, if not some extraordinary delays with this feature as well considering it's marketing appeal and profit. You're better off doing like CloudFlare, and investing in some dev time if you want push support in the community version any time soon. The ground work is already done :) On Tue, Sep 27, 2016 at 9:07 AM, atulhost wrote: > Hi Mastercan, > > As of now NGINX is supporting HTTP/2 Natively here is how to activate it. > > https://atulhost.com/enable-http2-nginx > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,269749,269863#msg-269863 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue Nov 8 16:00:10 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 8 Nov 2016 19:00:10 +0300 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Andrei, On 11/8/16 6:53 PM, Andrei wrote: > This is a common misconception; "HTTP/2 support" is not the same > as "HTTP/2 with server push support". That being said, the > Nginx.org/community version does not support HTTP/2 with "Server > Push" (which most consider the primary boost in HTTP/2), however > it is available in Nginx Plus (paid subscription). Where did you get this info? Both "primary boost in HTTP/2" and -plus? > As most useful Nginx Plus features hardly make it to the > community version, I would expect the same, if not some > extraordinary delays with this feature as well considering it's > marketing appeal and profit. You're better off doing like > CloudFlare, and investing in some dev time if you want push > support in the community version any time soon. The ground work > is already done :) Please don't spread FUD. -- Maxim Konovalov From lagged at gmail.com Tue Nov 8 16:07:43 2016 From: lagged at gmail.com (Andrei) Date: Tue, 8 Nov 2016 18:07:43 +0200 Subject: How to delay requests from once unauthorized IP address In-Reply-To: <96cb472bc3d4389b9111c37be2ec6296.NginxMailingListEnglish@forum.nginx.org> References: <96cb472bc3d4389b9111c37be2ec6296.NginxMailingListEnglish@forum.nginx.org> Message-ID: This can be done using ngx_http_limit_req_module - http://nginx.org/en/docs/http/ngx_http_limit_req_module.html On Tue, Oct 25, 2016 at 4:01 PM, hide wrote: > Hello! > > My Nginx does fastcgi_pass to some CGI application. The CGI application can > return HTTP status code 401. 
I want Nginx to return this status code to the > user and prevent the next access of the user to the CGI application for 5 > seconds. > > For example, the user accessed the CGI application through Nginx and got > HTTP status code 401 at 17:40:40. Suppose that the IP address of the user > is > trying to access the CGI application through Nginx for the second time at > 17:40:42. I want Nginx to provide that this second request will not reach > the CGI application. Then the IP address of the user is trying to access > the > CGI application through Nginx for the third time at 17:40:46. I want Nginx > to let this third request go to the CGI application because 5 seconds have > already passed. Suppose that this third request has worked successfully > with > HTTP status code 200. Then the IP address of the user is trying to access > the CGI application through Nginx for the fourth time at 17:40:47. I want > Nginx to let this fourth request go to the CGI application because 5 > seconds > from HTTP code 401 have already passed. > > Can I do this with Nginx? > > Thank you if you answer. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,270537,270537#msg-270537 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Tue Nov 8 16:09:48 2016 From: lagged at gmail.com (Andrei) Date: Tue, 8 Nov 2016 18:09:48 +0200 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: The mentioned boost was in regards to HTTP/2 server push as stated. Not plus vs community. Are there any plans on releasing the server push feature for the community version? On Tue, Nov 8, 2016 at 6:00 PM, Maxim Konovalov wrote: > Andrei, > > On 11/8/16 6:53 PM, Andrei wrote: > > This is a common misconception; "HTTP/2 support" is not the same > > as "HTTP/2 with server push support". That being said, the > > Nginx.org/community version does not support HTTP/2 with "Server > > Push" (which most consider the primary boost in HTTP/2), however > > it is available in Nginx Plus (paid subscription). > > Where did you get this info? Both "primary boost in HTTP/2" and -plus? > > > As most useful Nginx Plus features hardly make it to the > > community version, I would expect the same, if not some > > extraordinary delays with this feature as well considering it's > > marketing appeal and profit. You're better off doing like > > CloudFlare, and investing in some dev time if you want push > > support in the community version any time soon. The ground work > > is already done :) > > Please don't spread FUD. > > -- > Maxim Konovalov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue Nov 8 16:17:01 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 8 Nov 2016 19:17:01 +0300 Subject: Are there plans for Nginx supporting HTTP/2 server push? 
In-Reply-To: References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Andrei, nginx-plus has identical HTTP/2 implementation as nginx-oss. It means no HTTP/2 push in -plus. I repeat my question: where did you get information that there is HTTP/2 push in -plus? On 11/8/16 7:09 PM, Andrei wrote: > The mentioned boost was in regards to HTTP/2 server push as stated. > Not plus vs community. Are there any plans on releasing the server > push feature for the community version? > > On Tue, Nov 8, 2016 at 6:00 PM, Maxim Konovalov > wrote: > > Andrei, > > On 11/8/16 6:53 PM, Andrei wrote: > > This is a common misconception; "HTTP/2 support" is not the same > > as "HTTP/2 with server push support". That being said, the > > Nginx.org/community version does not support HTTP/2 with "Server > > Push" (which most consider the primary boost in HTTP/2), however > > it is available in Nginx Plus (paid subscription). > > Where did you get this info? Both "primary boost in HTTP/2" and > -plus? > > > As most useful Nginx Plus features hardly make it to the > > community version, I would expect the same, if not some > > extraordinary delays with this feature as well considering it's > > marketing appeal and profit. You're better off doing like > > CloudFlare, and investing in some dev time if you want push > > support in the community version any time soon. The ground work > > is already done :) > > Please don't spread FUD. > > -- > Maxim Konovalov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From lagged at gmail.com Tue Nov 8 16:32:46 2016 From: lagged at gmail.com (Andrei) Date: Tue, 8 Nov 2016 18:32:46 +0200 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: I stand corrected, neither Plus nor Community support "Server Push". Just another marketing buzz related title with the fine print caveat - "HTTP/2 Fully Supported in NGINX Plus" @ https://www.nginx.com/blog/http2-r7 On Tue, Nov 8, 2016 at 6:17 PM, Maxim Konovalov wrote: > Andrei, > > nginx-plus has identical HTTP/2 implementation as nginx-oss. It > means no HTTP/2 push in -plus. > > I repeat my question: where did you get information that there is > HTTP/2 push in -plus? > > On 11/8/16 7:09 PM, Andrei wrote: > > The mentioned boost was in regards to HTTP/2 server push as stated. > > Not plus vs community. Are there any plans on releasing the server > > push feature for the community version? > > > > On Tue, Nov 8, 2016 at 6:00 PM, Maxim Konovalov > > wrote: > > > > Andrei, > > > > On 11/8/16 6:53 PM, Andrei wrote: > > > This is a common misconception; "HTTP/2 support" is not the same > > > as "HTTP/2 with server push support". That being said, the > > > Nginx.org/community version does not support HTTP/2 with "Server > > > Push" (which most consider the primary boost in HTTP/2), however > > > it is available in Nginx Plus (paid subscription). > > > > Where did you get this info? Both "primary boost in HTTP/2" and > > -plus? 
> > > > > As most useful Nginx Plus features hardly make it to the > > > community version, I would expect the same, if not some > > > extraordinary delays with this feature as well considering it's > > > marketing appeal and profit. You're better off doing like > > > CloudFlare, and investing in some dev time if you want push > > > support in the community version any time soon. The ground work > > > is already done :) > > > > Please don't spread FUD. > > > > -- > > Maxim Konovalov > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Maxim Konovalov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue Nov 8 16:39:21 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 8 Nov 2016 19:39:21 +0300 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: References: <72b548b754f23f4c4daaeed49ec0e594.NginxMailingListEnglish@forum.nginx.org> <045a71269754e17008f6b6686e50ec0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6dc115bd-3083-f40b-b9b4-a760b322c73d@nginx.com> On 11/8/16 7:32 PM, Andrei wrote: > I stand corrected, neither Plus nor Community support "Server Push". > Just another marketing buzz related title with the fine print caveat > - "HTTP/2 Fully Supported in NGINX Plus" @ > https://www.nginx.com/blog/http2-r7 > [...] Yes, right, partially because HTTP/2 push was never "the primary boost" for HTTP/2 but I agree that different people at different time can have very different ideas about the same thing. I'd also encourage to read the comments in this blog post. -- Maxim Konovalov From nginx at 2xlp.com Tue Nov 8 18:28:20 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Tue, 8 Nov 2016 13:28:20 -0500 Subject: Blocking tens of thousands of IP's In-Reply-To: References: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> Message-ID: <36432CF7-BD34-4DAC-B452-01D551114D7F@2xlp.com> On Nov 4, 2016, at 5:43 AM, mex wrote: > we do a similar thing but keep a counter within nginx (lua_shared_dict FTW) > and export this stuff via /badass - location. > > although its not realtime we have a delay of 5 sec which is enough for us We have a somewhat similar setup under openresty/nginx, but for some different purposes -- I imagine it would transition nicely to this though. We use lua_shared_dict as a read-through cache on each nginx node, with lookups failing over to a central Redis server on the LAN. A small python app manages the Redis server, and each nginx server has an internal api (LAN only access, written in lua) that can flush, prime, or add/delete items to the shared dict as needed. the python app runs on-demand, and also at intervals to reformat internal data for Redis and nginx. this may sound like a lot, but it only took a few hours to get it working and it was much easier to have Redis+Python broker the information between nginx and internal systems than to have them talk directly to one another. 
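(For anyone who wants to experiment with that read-through pattern, a very rough sketch of the nginx side follows. It assumes the ngx_lua and lua-resty-redis modules; the shared dict name, key prefix, Redis address and the 5-second TTL are placeholders for illustration only, not taken from the setup described above.)

    http {
        lua_shared_dict ipcache 10m;

        server {
            ...
            location / {
                access_by_lua_block {
                    -- look up the client IP in the local shared dict first
                    local dict = ngx.shared.ipcache
                    local ip = ngx.var.remote_addr
                    local blocked = dict:get(ip)

                    if blocked == nil then
                        -- cache miss: ask the central Redis on the LAN
                        local redis = require "resty.redis"
                        local red = redis:new()
                        red:set_timeout(100)  -- ms
                        local ok = red:connect("127.0.0.1", 6379)
                        if ok then
                            local val = red:get("block:" .. ip)
                            blocked = (val == "1") and 1 or 0
                            -- keep the answer locally for a few seconds
                            dict:set(ip, blocked, 5)
                            red:set_keepalive(10000, 100)
                        else
                            -- fail open if Redis is unreachable
                            blocked = 0
                        end
                    end

                    if blocked == 1 then
                        return ngx.exit(ngx.HTTP_FORBIDDEN)
                    end
                }
                ...
            }
        }
    }
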
From mayak at australsat.com Tue Nov 8 22:57:29 2016 From: mayak at australsat.com (mayak) Date: Tue, 8 Nov 2016 23:57:29 +0100 Subject: Blocking tens of thousands of IP's In-Reply-To: <36432CF7-BD34-4DAC-B452-01D551114D7F@2xlp.com> References: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> <36432CF7-BD34-4DAC-B452-01D551114D7F@2xlp.com> Message-ID: On 11/08/2016 07:28 PM, Jonathan Vanasco wrote: > On Nov 4, 2016, at 5:43 AM, mex wrote: > >> we do a similar thing but keep a counter within nginx (lua_shared_dict FTW) >> and export this stuff via /badass - location. >> >> although its not realtime we have a delay of 5 sec which is enough for us We are blocking 2.2 million addresses, however, we do it at the firewall/router (pfsense pfBlocker). Ultra fast. HTH Mayak From lists at lazygranch.com Tue Nov 8 23:15:05 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 08 Nov 2016 15:15:05 -0800 Subject: Blocking tens of thousands of IP's In-Reply-To: <20161108225800.721012C511E4@mail.nginx.com> References: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> <36432CF7-BD34-4DAC-B452-01D551114D7F@2xlp.com> <20161108225800.721012C511E4@mail.nginx.com> Message-ID: <20161108231505.5468242.61961.15929@lazygranch.com> Is that 2.2 million CIDRs, or actual addresses? I use IPFW with tables for about 20k CIDRs. I don't see any significant server load. It seems to me nginx has a big enough task that it makes sense to offload the blocking to something that is more tightly integrated to the OS.? At a bare minimum, block OVH and Hetzner. People bash the Russians and old Soviet block countries for hacking, but OVH and Hetzner are far worse.? ? Original Message ? From: mayak Sent: Tuesday, November 8, 2016 2:58 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Blocking tens of thousands of IP's On 11/08/2016 07:28 PM, Jonathan Vanasco wrote: > On Nov 4, 2016, at 5:43 AM, mex wrote: > >> we do a similar thing but keep a counter within nginx (lua_shared_dict FTW) >> and export this stuff via /badass - location. >> >> although its not realtime we have a delay of 5 sec which is enough for us We are blocking 2.2 million addresses, however, we do it at the firewall/router (pfsense pfBlocker). Ultra fast. HTH Mayak _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From alex at samad.com.au Tue Nov 8 23:20:33 2016 From: alex at samad.com.au (Alex Samad) Date: Wed, 9 Nov 2016 10:20:33 +1100 Subject: OCSP stapling In-Reply-To: <017401d239b4$4fa055d0$eee10170$@staff.aruba.it> References: <017401d239b4$4fa055d0$eee10170$@staff.aruba.it> Message-ID: Just when through this. your nginx server makes a requets to the OCSP url for information. My nginx servers can't make requests to the internet so I had to use the offline method 2016-11-08 22:36 GMT+11:00 Christian Cioni : > Hi, > > on my server have activated a SSL in SNI configuration without problems, > but for the OCSP stapling configurations, receive always ?no response sent? > > > > On my configuration have add: > > ssl_trusted_certificate /etc/nginx/ssl/CA.pem; > > ssl_stapling on; > > ssl_stapling_verify on; > > > > What can I check? > > > > -- > > Saluti > > ================================ > > Christian Cioni > > Technical Department > > Aruba.it http://www.aruba.it > > N? diretto: 0575/1939143 > > N? centralino: 0575/0505 > > N? 
fax: 0575/862300 > > MailTo: christian.cioni at staff.aruba.it > > ================================ > > > > : AVVISO PRIVACY > > = = = = = = = = = = = = = = = = = = = = > > Il contenuto della presente e-mail ed i suoi allegati, > > sono diretti esclusivamente al destinatario e devono > > ritenersi riservati, con divieto di diffusione o di uso > > non conforme alle finalit? per le quali la presente e-mail > > ? stata inviata. > > Pertanto, ne ? vietata la diffusione e la comunicazione > > da parte di soggetti diversi dal destinatario, ai sensi degli > > artt. 616 e ss. c.p. e D.lgs n. 196/03 Codice Privacy. > > > > Se la presente e-mail ed i suoi allegati sono stati ricevuti > > per errore, siete pregati di distruggere quanto ricevuto e > > di informare il mittente al seguente recapito: > > Mailto: christian.cioni at staff.aruba.it > > = = = = = = = = = = = = = = = = = = = = > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Wed Nov 9 07:27:36 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 8 Nov 2016 23:27:36 -0800 Subject: Unexptected return code Message-ID: <20161108232736.5c6e6c03@linux-h57q.site> I only serve static pages, hence I have this in my conf file: ----------------------- ## Only allow these request methods ## if ($request_method !~ ^(GET|HEAD)$ ) { return 444; } ---------------- Shouldn't the return code be 444 instead of 400? ---------------------------------------- 400 111.91.67.118 - - [09/Nov/2016:05:18:38 +0000] "CONNECT search.yahoo.com:443 HTTP/1.1" 173 "-" "-" "-" ------------------------------- This is more of a curiosity rather than an issue. From mdounin at mdounin.ru Wed Nov 9 09:55:14 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Nov 2016 12:55:14 +0300 Subject: OCSP stapling In-Reply-To: <017401d239b4$4fa055d0$eee10170$@staff.aruba.it> References: <017401d239b4$4fa055d0$eee10170$@staff.aruba.it> Message-ID: <20161109095514.GE73038@mdounin.ru> Hello! On Tue, Nov 08, 2016 at 12:36:13PM +0100, Christian Cioni wrote: > Hi, > > on my server have activated a SSL in SNI configuration without problems, but > for the OCSP stapling configurations, receive always ?no response sent? > > > > On my configuration have add: > > ssl_trusted_certificate /etc/nginx/ssl/CA.pem; > > ssl_stapling on; > > ssl_stapling_verify on; > > > > What can I check? Try checking error logs. Most likely you need more trusted certificates for things to work with "ssl_stapling_verify" switched on. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Nov 9 10:02:23 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Nov 2016 13:02:23 +0300 Subject: Unexptected return code In-Reply-To: <20161108232736.5c6e6c03@linux-h57q.site> References: <20161108232736.5c6e6c03@linux-h57q.site> Message-ID: <20161109100223.GF73038@mdounin.ru> Hello! On Tue, Nov 08, 2016 at 11:27:36PM -0800, lists at lazygranch.com wrote: > I only serve static pages, hence I have this in my conf file: > > ----------------------- > ## Only allow these request methods ## > if ($request_method !~ ^(GET|HEAD)$ ) { > return 444; > } > ---------------- > Shouldn't the return code be 444 instead of 400? 
> ---------------------------------------- > 400 111.91.67.118 - - [09/Nov/2016:05:18:38 +0000] "CONNECT search.yahoo.com:443 HTTP/1.1" 173 "-" "-" "-" > ------------------------------- > > This is more of a curiosity rather than an issue. There is no support for CONNECT method in nginx. As a result, CONNECT requests are rejected as invalid while parsing a request line (as there is no URI nginx expects to see). -- Maxim Dounin http://nginx.org/ From lists at lazygranch.com Wed Nov 9 10:26:21 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 09 Nov 2016 02:26:21 -0800 Subject: Unexptected return code In-Reply-To: <20161109100223.GF73038@mdounin.ru> References: <20161108232736.5c6e6c03@linux-h57q.site> <20161109100223.GF73038@mdounin.ru> Message-ID: <20161109102621.5468242.70774.15955@lazygranch.com> ?Makes perfect sense!? ? ? Original Message ? From: Maxim Dounin Sent: Wednesday, November 9, 2016 2:02 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Unexptected return code Hello! On Tue, Nov 08, 2016 at 11:27:36PM -0800, lists at lazygranch.com wrote: > I only serve static pages, hence I have this in my conf file: > > ----------------------- > ## Only allow these request methods ## > if ($request_method !~ ^(GET|HEAD)$ ) { > return 444; > } > ---------------- > Shouldn't the return code be 444 instead of 400? > ---------------------------------------- > 400 111.91.67.118 - - [09/Nov/2016:05:18:38 +0000] "CONNECT search.yahoo.com:443 HTTP/1.1" 173 "-" "-" "-" > ------------------------------- > > This is more of a curiosity rather than an issue. There is no support for CONNECT method in nginx. As a result, CONNECT requests are rejected as invalid while parsing a request line (as there is no URI nginx expects to see). -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Nov 9 12:34:21 2016 From: nginx-forum at forum.nginx.org (hide) Date: Wed, 09 Nov 2016 07:34:21 -0500 Subject: How to delay requests from once unauthorized IP address In-Reply-To: References: Message-ID: <3b254d387492e56e2a4e9e0bd408e6ee.NginxMailingListEnglish@forum.nginx.org> Hello Andrei. Thank you very much for your response but I cannot understand how I can take HTTP status 401 for a particular client into account with ngx_http_limit_req_module. I have some experience with ngx_http_limit_req_module: I limit the frequency of each client with $binary_remote_addr key. But I would like to set a stricter limit for those only who get HTTP status 401. I cannot understand how this can be done with ngx_http_limit_req_module. Can you give an example configuration for this with limit_req_zone and limit_req? Thank you if you answer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270537,270840#msg-270840 From hemanthnews at yahoo.com Thu Nov 10 06:06:18 2016 From: hemanthnews at yahoo.com (hemanthnews at yahoo.com) Date: Thu, 10 Nov 2016 11:36:18 +0530 Subject: 504 Bad gateway error when server date/time is changed In-Reply-To: <1399786425.1032041.1478232241709@mail.yahoo.com> References: <1399786425.1032041.1478232241709.ref@mail.yahoo.com> <1399786425.1032041.1478232241709@mail.yahoo.com> Message-ID: <60932.60445.bm@smtp124.mail.sg3.yahoo.com> Hi, I did find a ticket https://trac.nginx.org/nginx/ticket/189 for the issue mentioned below. The ticket seems to be still open!! Any suggestions on how to handle this? 
-------------------- -Best Hemanth From: Hemanth Kumar via nginx Sent: Friday, November 4, 2016 9:43 AM To: nginx at nginx.org Cc: Hemanth Kumar Subject: 504 Bad gateway error when server date/time is changed Hi, Following is the environment OS: CentOS 7 (64 bit) NGINX: 1.10.1 PHP/PHP-FPM:? 5.6 ZF2 Apache 2.4 The Web application is running with HTTPD configured to port 9080 and NGINX on port 80 There is an option to set the date, time and timezone from the app. Whenever the date or time is changed on port-80, I get a 504 Bad gateway error but this error is not seen when Apache is used. When the timezone is changed, this 504 error does not come. NOTE: The entire application is being serviced using message queue and when the time/date setting is bypassed,? I don't see the error. Any idea why NGINX is behaving this way? -Best Hemanth ? -Hemanth -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Nov 10 07:15:06 2016 From: nginx-forum at forum.nginx.org (George) Date: Thu, 10 Nov 2016 02:15:06 -0500 Subject: multiple php-fpm pool upstream alternating 503 & 502 errors Message-ID: <65312580d94205e171a16ab5a27220b9.NginxMailingListEnglish@forum.nginx.org> Was wondering if anyone could shed some light on this issue I am experiencing only with multiple php-fpm pool setups but not with single php-fpm pool. The issue is when a forum software like Xenforo or Invision board uses their native forum close option to turn off the forums for guests but still allow forum admins access, the forum via php issue a HTTP 503 status message. This seems to trip up and causes issues only for multiple php-fpm pool upstream setups causing alternating 503 and 502 bad gateway errors. Probably partially to do with the http_503 definition for fastcgi_next_upstream. 
The upstream settings upstream phpbackend { zone zone_phpbackend 64k; ip_hash; keepalive 5; server 127.0.0.1:9000 weight=50; server 127.0.0.1:9002 weight=50; server 127.0.0.1:9003 weight=50; server 127.0.0.1:9004 weight=50; server 127.0.0.1:9005 weight=50; } and relevant php-fpm changes made were to change from single php-fpm pool fastcgi_pass 127.0.0.1:9000; to multiple php-fpm upstream pools fastcgi_next_upstream error timeout http_500 http_503; fastcgi_pass phpbackend; fastcgi_keep_conn on; I can replicate the issue with multiple php-fpm pool upstream setup by creating a 503.php file with contents and then refreshing the 503.php page and it will alternate between 503 and 502 errors The access.log's alternating 503 and 502 errors excerpt IPADDR - - [10/Nov/2016:06:07:07 +0000] "GET /503.php HTTP/1.1" 502 1672 "-" "Mozilla/5.0 snipped" "-" rt=0.000 ua="phpbackend" us="502" ut="0.000" ul="0" cs=- IPADDR - - [10/Nov/2016:06:07:03 +0000] "GET /503.php HTTP/1.1" 503 1665 "-" "Mozilla/5.0 snipped" "-" rt=0.000 ua="127.0.0.1:9004, 127.0.0.1:9002, 127.0.0.1:9005, 127.0.0.1:9003, 127.0.0.1:9000" us="502, 502, 502, 502, 503" ut="0.000, 0.000, 0.000, 0.000, 0.000" ul="0, 0, 0, 0, 0" cs=- IPADDR - - [10/Nov/2016:06:07:05 +0000] "GET /503.php HTTP/1.1" 502 1672 "-" "Mozilla/5.0 snipped" "-" rt=0.000 ua="phpbackend" us="502" ut="0.000" ul="0" cs=- IPADDR - - [10/Nov/2016:06:07:07 +0000] "GET /503.php HTTP/1.1" 502 1672 "-" "Mozilla/5.0 snipped" "-" rt=0.000 ua="phpbackend" us="502" ut="0.000" ul="0" cs=- using log format below log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" ' 'rt=$request_time ua="$upstream_addr" ' 'us="$upstream_status" ut="$upstream_response_time" ' 'ul="$upstream_response_length" ' 'cs=$upstream_cache_status' ; Using nginx 1.11.5 with PHP 5.6.27 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270850,270850#msg-270850 From iippolitov at nginx.com Thu Nov 10 08:27:35 2016 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Thu, 10 Nov 2016 11:27:35 +0300 Subject: multiple php-fpm pool upstream alternating 503 & 502 errors In-Reply-To: <65312580d94205e171a16ab5a27220b9.NginxMailingListEnglish@forum.nginx.org> References: <65312580d94205e171a16ab5a27220b9.NginxMailingListEnglish@forum.nginx.org> Message-ID: The behaviour of a single and multiple upstreams is changed due to 'http_503' option of 'fastcgi_next_upstream' . When a request comes to nginx, nginx forwards it to an upstream, tries to get a response, gets 503. http 503 is set to be a failed response (with proxy_next_upstream directive). So nginx marks the server as 'failed' (because of a 'server' default of max_fails=1) and forward the request to the next server in the upstream. When all of servers in an upstream returns 503 they are all marked as failed and nginx returns the last reply it has to a client. For the next 'fail_timeout' seconds all servers are marked as down. Next time a request comes within a 'fail_timeout', nginx wont proxy the request as all servers are marked as down. No servers alive - no request is made - http 502 is returned. Note the difference: the first case is when nginx makes a request and returns a reply whatever it is. The second case is when no request for an upstream is made and 502 is returned. If you remove 'http_503' option from 'proxy_next_upstream' nginx will not mark a server in an upstream as failed and you won't get http 502. 
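Against the upstream block quoted below, the two ways of avoiding the 502s would look roughly like this (untested sketch, values are only an example):

    # either stop treating 503 as a reason to try the next server
    fastcgi_next_upstream error timeout http_500;

    # ...or keep http_503 but stop failed responses from taking
    # servers out of rotation (max_fails=0 disables failure accounting)
    upstream phpbackend {
        zone zone_phpbackend 64k;
        ip_hash;
        keepalive 5;
        server 127.0.0.1:9000 weight=50 max_fails=0;
        server 127.0.0.1:9002 weight=50 max_fails=0;
        server 127.0.0.1:9003 weight=50 max_fails=0;
        server 127.0.0.1:9004 weight=50 max_fails=0;
        server 127.0.0.1:9005 weight=50 max_fails=0;
    }
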
I'd say, in case a forum is suspended, you would want to return http 403 instead of 503 (or a stub page with http 200). Hope this helps , Ippolitov Igor. On 10.11.2016 10:15, George wrote: > Was wondering if anyone could shed some light on this issue I am > experiencing only with multiple php-fpm pool setups but not with single > php-fpm pool. The issue is when a forum software like Xenforo or Invision > board uses their native forum close option to turn off the forums for guests > but still allow forum admins access, the forum via php issue a HTTP 503 > status message. This seems to trip up and causes issues only for multiple > php-fpm pool upstream setups causing alternating 503 and 502 bad gateway > errors. Probably partially to do with the http_503 definition for > fastcgi_next_upstream. > > The upstream settings > > upstream phpbackend { > zone zone_phpbackend 64k; > ip_hash; > keepalive 5; > server 127.0.0.1:9000 weight=50; > server 127.0.0.1:9002 weight=50; > server 127.0.0.1:9003 weight=50; > server 127.0.0.1:9004 weight=50; > server 127.0.0.1:9005 weight=50; > } > > and relevant php-fpm changes made were to change > > from single php-fpm pool > > fastcgi_pass 127.0.0.1:9000; > > to multiple php-fpm upstream pools > > fastcgi_next_upstream error timeout http_500 http_503; > fastcgi_pass phpbackend; > fastcgi_keep_conn on; > > I can replicate the issue with multiple php-fpm pool upstream setup by > creating a 503.php file with contents > > > > and then refreshing the 503.php page and it will alternate between 503 and > 502 errors > > The access.log's alternating 503 and 502 errors excerpt > > IPADDR - - [10/Nov/2016:06:07:07 +0000] "GET /503.php HTTP/1.1" 502 1672 "-" > "Mozilla/5.0 snipped" "-" rt=0.000 ua="phpbackend" us="502" ut="0.000" > ul="0" cs=- > IPADDR - - [10/Nov/2016:06:07:03 +0000] "GET /503.php HTTP/1.1" 503 1665 "-" > "Mozilla/5.0 snipped" "-" rt=0.000 ua="127.0.0.1:9004, 127.0.0.1:9002, > 127.0.0.1:9005, 127.0.0.1:9003, 127.0.0.1:9000" us="502, 502, 502, 502, 503" > ut="0.000, 0.000, 0.000, 0.000, 0.000" ul="0, 0, 0, 0, 0" cs=- > IPADDR - - [10/Nov/2016:06:07:05 +0000] "GET /503.php HTTP/1.1" 502 1672 "-" > "Mozilla/5.0 snipped" "-" rt=0.000 ua="phpbackend" us="502" ut="0.000" > ul="0" cs=- > IPADDR - - [10/Nov/2016:06:07:07 +0000] "GET /503.php HTTP/1.1" 502 1672 "-" > "Mozilla/5.0 snipped" "-" rt=0.000 ua="phpbackend" us="502" ut="0.000" > ul="0" cs=- > > using log format below > > log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" > ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for" ' > 'rt=$request_time ua="$upstream_addr" ' > 'us="$upstream_status" ut="$upstream_response_time" > ' > 'ul="$upstream_response_length" ' > 'cs=$upstream_cache_status' ; > > Using nginx 1.11.5 with PHP 5.6.27 > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270850,270850#msg-270850 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Thu Nov 10 09:19:21 2016 From: nginx-forum at forum.nginx.org (ganadara) Date: Thu, 10 Nov 2016 04:19:21 -0500 Subject: can not received event on windows Message-ID: I was build nginx for window on linux. (compiler: x86_64-w64-mingw32-gcc) and build succeeded. but It does not work on any request.. as if it hang status... I was checked function ngx_event_process_posted in ngx_event_posted.c on windows gdb. 
and found an empty ngx_queue_t at any requested. how to fix ?? test nginx version : nginx-release-1.11.5 nginx-release-1.10.2 nginx-release-1.9.15 nginx-release-1.8.1 module version: encrypted-session-nginx-module-0.06 lua-nginx-module-0.10.6 lua-nginx-module-0.10.7 ngx_devel_kit-0.3.0 set-misc-nginx-module-0.31 etc... LuaJIT-2.0.2 LuaJIT-2.0.3 LuaJIT-2.0.4 lua-cjson-2.1.0 cJSON- I don`t know openssl-1.0.2j zlib-1.2.8 configure script: TARGET=".\\\\" export CC=x86_64-w64-mingw32-gcc export CFLAGS export LDFLAGS export LUAJIT_INC=$COMMON_DIR/include/luajit-2.0.4 export LUAJIT_LIB=$WIN_OUT/lib export OPENSSL_INC=$COMMON_DIR/include export OPENSSL_LIB=$WIN_OUT/lib echo $LUAJIT_INC echo $LUAJIT_LIB echo $OPENSSL_INC echo $OPENSSL_LIB ./configure_win --prefix=$TARGET \ --crossbuild=win32 \ --sbin-path=nginx.exe \ --with-cc=x86_64-w64-mingw32-gcc \ --with-cpp=x86_64-w64-mingw32-c++ \ --with-zlib=../../../open_source/zlib-1.2.8/zlib-1.2.8 \ --with-openssl=../../../open_source/openssl-1.0.2j/openssl-1.0.2j \ --with-pcre=../../../open_source/pcre-8.34/pcre-8.34 \ --add-module=../nginX_if/source \ --add-module=../lua/ngx_devel_kit-master \ --add-module=../lua/set-misc-nginx-module-master \ --add-module=../lua/lua-nginx-module \ --add-module=../lua/encrypted-session-nginx-module-master \ --conf-path=.\\\\conf\\\\nginx.conf \ --pid-path=.\\\\log\\\\nginx.pid \ --error-log-path=.\\\\log\\\\ngx_error.log \ --http-log-path=.\\\\log\\\\ngx_access.log \ --http-client-body-temp-path=.\\\\log\\\\client_body_temp \ --http-proxy-temp-path=.\\\\log\\\\proxy_temp \ --http-fastcgi-temp-path=.\\\\log\\\\fastcgi_temp \ --http-uwsgi-temp-path=.\\\\log\\\\uwsgi_temp \ --http-scgi-temp-path=.\\\\log\\\\scgi_temp \ --with-cc-opt=" -ggdb " \ --with-ld-opt="-L$WIN_OUT/lib -lm -lcJSON " \ --with-http_ssl_module Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270853,270853#msg-270853 From nginx-forum at forum.nginx.org Thu Nov 10 09:48:30 2016 From: nginx-forum at forum.nginx.org (George) Date: Thu, 10 Nov 2016 04:48:30 -0500 Subject: multiple php-fpm pool upstream alternating 503 & 502 errors In-Reply-To: References: Message-ID: <176d9ad6eddb6a14f5738c2ccc6fe65d.NginxMailingListEnglish@forum.nginx.org> thanks Igor very insightful :) I guess tricky issue is for SEO forum closure having SEO friendly http status code alternative to 503 ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270850,270854#msg-270854 From nginx-forum at forum.nginx.org Thu Nov 10 09:58:19 2016 From: nginx-forum at forum.nginx.org (ganadara) Date: Thu, 10 Nov 2016 04:58:19 -0500 Subject: can not received event on windows In-Reply-To: References: Message-ID: add information. 
nginx.conf #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 8081; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270853,270855#msg-270855 From nginx-forum at forum.nginx.org Thu Nov 10 12:02:27 2016 From: nginx-forum at forum.nginx.org (bertuka) Date: Thu, 10 Nov 2016 07:02:27 -0500 Subject: 502 Bad Gateway nginx/1.2.1 In-Reply-To: <010101d236ad$0ad614d0$20823e70$@roze.lv> References: <010101d236ad$0ad614d0$20823e70$@roze.lv> Message-ID: <56a5c3fc2322cab4b01831a65355aa27.NginxMailingListEnglish@forum.nginx.org> Hi Reinis Rozitis, yes it seems to be a plugin.. I am in contact with the developer to see how to fix it.. Thans for your interest and reply Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270759,270857#msg-270857 From nginx-forum at forum.nginx.org Thu Nov 10 20:04:43 2016 From: nginx-forum at forum.nginx.org (ulik) Date: Thu, 10 Nov 2016 15:04:43 -0500 Subject: Set location based on query arg Message-ID: <3b98ecafdc8409bafcc28bfc36f12f9d.NginxMailingListEnglish@forum.nginx.org> Is it possible to set/modify location based on the query string arg from request? 
Here is the scenario: Request: www.example.com/demo?path=abc Docroot: /var/www/example/abc/ Request: www.example.com/demo?path=xyz Docroot: /var/www/example/xyz/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270868,270868#msg-270868 From francis at daoine.org Thu Nov 10 22:19:53 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 10 Nov 2016 22:19:53 +0000 Subject: Set location based on query arg In-Reply-To: <3b98ecafdc8409bafcc28bfc36f12f9d.NginxMailingListEnglish@forum.nginx.org> References: <3b98ecafdc8409bafcc28bfc36f12f9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161110221953.GL23518@daoine.org> On Thu, Nov 10, 2016 at 03:04:43PM -0500, ulik wrote: Hi there, > Is it possible to set/modify location based on the query string arg from > request? The one-word answer is "no". So the follow-up question is: what do you want to do? > Here is the scenario: > > Request: www.example.com/demo?path=abc > Docroot: /var/www/example/abc/ What would you like nginx to do with that request, that somehow involves Docroot? Serve a specific file from the filesystem? Make a fastcgi request to an upstream including a particular key/value pair? Something else? > Request: www.example.com/demo?path=xyz > Docroot: /var/www/example/xyz/ And what would you like nginx to do with *that* request, that is different from the previous one? Perhaps the thing that you want achieved, can be achieved somehow. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Nov 10 23:46:10 2016 From: nginx-forum at forum.nginx.org (ulik) Date: Thu, 10 Nov 2016 18:46:10 -0500 Subject: Set location based on query arg In-Reply-To: <20161110221953.GL23518@daoine.org> References: <20161110221953.GL23518@daoine.org> Message-ID: Here is what I want to do, in nginx conf language: server { listen 80; server_name www.example.com; # root when path query arg is present if ($arg_path) { root /var/www/example/$arg_path; } # root when path query arg is not present (default) if (!$arg_path) { root /var/www/example/default; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270868,270870#msg-270870 From dave at jetcafe.org Fri Nov 11 02:30:56 2016 From: dave at jetcafe.org (Dave Hayes) Date: Thu, 10 Nov 2016 18:30:56 -0800 Subject: Multiple SSL listen statements and SNI Message-ID: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> Hello. :) Please consider the following nginx setup: server { # server 1 listen 443 default_server ssl; server_name ""; ... return 444; } server { # server 2 listen 127.0.0.81:443 default_server ssl; server_name ""; ... return 444; } server { # server 3 listen 127.0.0.81:443 ssl; server_name "foo.com"; ... } server { # server 4 listen 443 ssl; server_name "thing.com"; ... } I am at nginx 1.8.1 with SNI support enabled. The behavior I expect from this is: - requests to foo.com on 127.0.0.81 will return per the server 3 bucket - requests to thing.com on the default interface or on 127.0.0.81 will return per the server 4 bucket - requests to foo.com on the default interface will return 444 - requests to any other SSL site will return 444 The behavior I observe that is different from this expectation is this: - requests to thing.com on the 127.0.0.81 interface return 444 I would love to know exactly what is going on here. Would anyone be so kind as to point out what is happening? Thanks in advance. 
-- Dave Hayes - Consultant - Altadena CA, USA - dave at jetcafe.org >>>> *The opinions expressed above are entirely my own* <<<< Nasrudin, starving with hunger, went to a cafe and began filling his mouth with food using both hands. "Why eat with two hands, Mulla?" "Because I haven't got three." From igor at sysoev.ru Fri Nov 11 08:02:51 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 11 Nov 2016 11:02:51 +0300 Subject: Multiple SSL listen statements and SNI In-Reply-To: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> References: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> Message-ID: <4E0AEFEE-6F80-40E7-A6AB-639CF7F2B0DF@sysoev.ru> On 11 Nov 2016, at 05:30, Dave Hayes wrote: > Hello. :) Please consider the following nginx setup: > > server { > # server 1 > listen 443 default_server ssl; > server_name ""; > ... > return 444; > } > > server { > # server 2 > listen 127.0.0.81:443 default_server ssl; > server_name ""; > ... > return 444; > } > > server { > # server 3 > listen 127.0.0.81:443 ssl; > server_name "foo.com"; > ... > } > > server { > # server 4 > listen 443 ssl; > server_name "thing.com"; > ... > } > > I am at nginx 1.8.1 with SNI support enabled. The behavior I expect from this is: > > - requests to foo.com on 127.0.0.81 will return per the server 3 bucket > - requests to thing.com on the default interface or on 127.0.0.81 will return per the server 4 bucket > - requests to foo.com on the default interface will return 444 > - requests to any other SSL site will return 444 > > The behavior I observe that is different from this expectation is this: > > - requests to thing.com on the 127.0.0.81 interface return 444 > > I would love to know exactly what is going on here. Would anyone be so kind as to point out what is happening? Thanks in advance. Please read this: http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers This configuration does what you want: server { # server 4 listen 443 ssl; listen 127.0.0.81:443 ssl; server_name "thing.com"; ... } -- Igor Sysoev http://nginx.com From crsarang at gmail.com Fri Nov 11 08:05:13 2016 From: crsarang at gmail.com (crsarang at gmail.com) Date: Fri, 11 Nov 2016 08:05:13 +0000 (UTC) Subject: does not work on any request on windows In-Reply-To: References: Message-ID: <6A45E15B42C2BF71.868bc320-1455-4ee9-b034-cb2ca377b57d@mail.outlook.com> I was build nginx for window on linux. (compiler: x86_64-w64-mingw32-gcc) and build succeeded. ? but It does not work on any request.. as if it hang status... ? I was checked function ngx_http_wait_request_handler in ngx_http_request.c on windows gdb. One unique feature is the server_name. ? Case1. Configure prefix ? ? (gdb) p cscf->server_name $3 = {len = 1, data = 0x2d4e38f "_????????"} (gdb) p cscf->server_name->data $4 = (u_char *) 0x2d4e38f "_" ? Case2. Configure prefix ?./? (gdb) p cscf->server_name->data $1 = (u_char *) 0x2c85c81 "_./html" ? ? And remain debug log. WSARecv() failed (10014: The system detected an invalid pointer address in attempting to use a pointer argument in a call) while waiting for request, client: 127.0.0.1, server: 0.0.0.0:80 client timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while waiting for request, client: 127.0.0.1, server: 0.0.0.0:80 ? ? how to fix ?? ? test nginx version : nginx-release-1.11.5 nginx-release-1.10.2 nginx-release-1.9.15 nginx-release-1.8.1 ? 
module version: encrypted-session-nginx-module-0.06 lua-nginx-module-0.10.6 lua-nginx-module-0.10.7 ngx_devel_kit-0.3.0 set-misc-nginx-module-0.31 ? ? etc... LuaJIT-2.0.2 LuaJIT-2.0.3 LuaJIT-2.0.4 lua-cjson-2.1.0 cJSON- I don`t know openssl-1.0.2j zlib-1.2.8 ? ? configure script: TARGET=" " ? export CC=x86_64-w64-mingw32-gcc ? export CFLAGS export LDFLAGS ? export LUAJIT_INC=$COMMON_DIR/include/luajit-2.0.4 export LUAJIT_LIB=$WIN_OUT/lib export OPENSSL_INC=$COMMON_DIR/include export OPENSSL_LIB=$WIN_OUT/lib ? ? echo $LUAJIT_INC echo $LUAJIT_LIB echo $OPENSSL_INC echo $OPENSSL_LIB ? ./configure_win --prefix=$TARGET \ --crossbuild=win32 \ --sbin-path=nginx.exe \ --with-cc=x86_64-w64-mingw32-gcc \ --with-cpp=x86_64-w64-mingw32-c++ \ --with-zlib=../../../open_source/zlib-1.2.8/zlib-1.2.8 \ --with-openssl=../../../open_source/openssl-1.0.2j/openssl-1.0.2j \ --with-pcre=../../../open_source/pcre-8.34/pcre-8.34 \ --add-module=../nginX_if/source \ --add-module=../lua/ngx_devel_kit-master \ --add-module=../lua/set-misc-nginx-module-master \ --add-module=../lua/lua-nginx-module \ --add-module=../lua/encrypted-session-nginx-module-master \ --conf-path=./conf/nginx.conf \ --pid-path=./log/nginx.pid \ --error-log-path=./log/ngx_error.log \ --http-log-path=./log/ngx_access.log \ --http-client-body-temp-path=./log/client_body_temp \ --http-proxy-temp-path=./log/proxy_temp \ --http-fastcgi-temp-path=./log/fastcgi_temp \ --http-uwsgi-temp-path=./log/uwsgi_temp \ --http-scgi-temp-path=./log/scgi_temp \ --with-cc-opt=" -ggdb " \ --with-ld-opt="-L$WIN_OUT/lib -lm -lcJSON " \ --with-http_ssl_module ? ? ? ? nginx.conf #user nobody; worker_processes 1; ? #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; ? #pid logs/nginx.pid; ? ? events { worker_connections 1024; } ? ? http { include mime.types; default_type application/octet-stream; ? #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; ? #access_log logs/access.log main; ? sendfile on; #tcp_nopush on; ? #keepalive_timeout 0; keepalive_timeout 65; ? #gzip on; ? server { listen 80; server_name _; ? #charset koi8-r; ? #access_log logs/host.access.log main; ? location / { root html; index index.html index.htm; } ? #error_page 404 /404.html; ? # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } ? # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} ? # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} ? # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } ? ? # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; ? # location / { # root html; # index index.html index.htm; # } #} ? ? # HTTPS server # #server { # listen 443 ssl; # server_name localhost; ? # ssl_certificate cert.pem; # ssl_certificate_key cert.key; ? # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; ? # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; ? 
# location / { # root html; # index index.html index.htm; # } #} ? } ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Nov 11 08:30:06 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 11 Nov 2016 08:30:06 +0000 Subject: Set location based on query arg In-Reply-To: References: <20161110221953.GL23518@daoine.org> Message-ID: <20161111083006.GM23518@daoine.org> On Thu, Nov 10, 2016 at 06:46:10PM -0500, ulik wrote: Hi there, > Here is what I want to do, in nginx conf language: > # root when path query arg is present > if ($arg_path) { > root /var/www/example/$arg_path; > } > > # root when path query arg is not present (default) > if (!$arg_path) { > root /var/www/example/default; > } You can use "map" to set a variable, and then use that variable in the "root" directive. That way you can avoid trying to have "root" within "if". Something like map $arg_path $root_bit { default $arg_path; "" default; } and then later root /var/www/example/$root_bit; Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Nov 11 08:41:40 2016 From: francis at daoine.org (Francis Daly) Date: Fri, 11 Nov 2016 08:41:40 +0000 Subject: Set location based on query arg In-Reply-To: <20161111083006.GM23518@daoine.org> References: <20161110221953.GL23518@daoine.org> <20161111083006.GM23518@daoine.org> Message-ID: <20161111084140.GN23518@daoine.org> On Fri, Nov 11, 2016 at 08:30:06AM +0000, Francis Daly wrote: > On Thu, Nov 10, 2016 at 06:46:10PM -0500, ulik wrote: Hi there, > > # root when path query arg is present > > if ($arg_path) { > > root /var/www/example/$arg_path; > > } > You can use "map" to set a variable, and then use that variable in the > "root" directive. That way you can avoid trying to have "root" within > "if". Be aware that using user-controlled values in important config is not often a good thing. A request for /passwd?path=../../../../../etc might return some content that you would prefer it did not, for example. It would be better to have a list of the allowed paths, or at least the allowed path patterns, and write the map so that "root" only ends up with values that you expect. So - make the default value be "default"; and then only use $arg_path if it (for example) is only letters. Cheers, f -- Francis Daly francis at daoine.org From dave at jetcafe.org Fri Nov 11 17:29:58 2016 From: dave at jetcafe.org (Dave Hayes) Date: Fri, 11 Nov 2016 09:29:58 -0800 Subject: Multiple SSL listen statements and SNI In-Reply-To: <4E0AEFEE-6F80-40E7-A6AB-639CF7F2B0DF@sysoev.ru> References: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> <4E0AEFEE-6F80-40E7-A6AB-639CF7F2B0DF@sysoev.ru> Message-ID: On 11/11/2016 00:02, Igor Sysoev wrote: > Please read this: > http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers Thanks very much for your reply. I have read this before, but maybe I missed something. In reading it again like you asked, I see this paragraph: "In this configuration, nginx first tests the IP address and port of the request against the listen directives of the server blocks. It then tests the ?Host? header field of the request against the server_name entries of the server blocks that matched the IP address and port." So in my previous configuration, if I send an SSL request to 127.0.0.81 with curl properly set up so it does SNI, e.g. 
curl -vk --resolve thing.com:443:127.0.0.81 https://thing.com/ I would expect it to first test the IP address and port of the request: 127.0.0.81:443 Given that I do not get to the "server 4" block, this appears to imply that 127.0.0.81:443 will not be matched by listen 443 ssl; or listen *:443 ssl; SNI does not look at the Host: header, so I wasn't considering it useful in this analysis. Is this wrong? Your suggestion (which does work) seems to confirm that listen *:443 ssl; will not bind to all IP addresses. > This configuration does what you want: > > server { > # server 4 > listen 443 ssl; > listen 127.0.0.81:443 ssl; > server_name "thing.com"; > ... > } Naturally I've IP aliased the 127.0.0.81 (for testing). Perhaps the usage of IP aliases prevents "*" from having the meaning of "attach this server block to every IP you find"? Am I confused here? Thanks in advance for any insight provided. -- Dave Hayes - Consultant - Altadena CA, USA - dave at jetcafe.org >>>> *The opinions expressed above are entirely my own* <<<< "Luke, you'll find many of the truths we cling to depend greatly upon our point of view." - Obi-Wan Kenobi From nginx-forum at forum.nginx.org Fri Nov 11 17:34:09 2016 From: nginx-forum at forum.nginx.org (ulik) Date: Fri, 11 Nov 2016 12:34:09 -0500 Subject: Set location based on query arg In-Reply-To: <20161111084140.GN23518@daoine.org> References: <20161111084140.GN23518@daoine.org> Message-ID: <30f5c95a4d3c960a51c7a2110734ef10.NginxMailingListEnglish@forum.nginx.org> Thanks Francis, that is an interesting approach (with a map). Will give it a try. And good point on only allowing pre-defined patterns to be passed from arg to root. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270868,270882#msg-270882 From igor at sysoev.ru Fri Nov 11 18:49:01 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 11 Nov 2016 21:49:01 +0300 Subject: Multiple SSL listen statements and SNI In-Reply-To: References: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> <4E0AEFEE-6F80-40E7-A6AB-639CF7F2B0DF@sysoev.ru> Message-ID: On 11 Nov 2016, at 20:29, Dave Hayes wrote: > On 11/11/2016 00:02, Igor Sysoev wrote: >> Please read this: >> http://nginx.org/en/docs/http/request_processing.html#mixed_name_ip_based_servers > > Thanks very much for your reply. I have read this before, but maybe I missed something. In reading it again like you asked, I see this paragraph: > > "In this configuration, nginx first tests the IP address and port of the request against the listen directives of the server blocks. It then tests the ?Host? header field of the request against the server_name entries of the server blocks that matched the IP address and port." > > So in my previous configuration, if I send an SSL request to 127.0.0.81 with curl properly set up so it does SNI, e.g. > > curl -vk --resolve thing.com:443:127.0.0.81 https://thing.com/ > > I would expect it to first test the IP address and port of the request: > > 127.0.0.81:443 > > Given that I do not get to the "server 4" block, this appears to imply that 127.0.0.81:443 will not be matched by > > listen 443 ssl; > > or > > listen *:443 ssl; Yes, *:443 matches all addresses except explicitly specified in listen directives with the same port 443. Consider it as fallback. On FreeBSD you can use ?bind? parameter: listen *:443; listen 127.0.0.81:443 bind; And there will be two separate sockets: *:443 and 127.0.0.81:443. You can not use ?bind? on Linux however if one of listen addresses is 0.0.0.0 (wildcard, *). 
So this configuration without ?bind?: listen *:443; listen 127.0.0.81:443; emulates this two separate sockets behaviour in one 0.0.0.0:443 socket. > SNI does not look at the Host: header, so I wasn't considering it useful in this analysis. Is this wrong? SNI is used to find server with appropriate server_name. -- Igor Sysoev http://nginx.com > Your suggestion (which does work) seems to confirm that > > listen *:443 ssl; > > will not bind to all IP addresses. > >> This configuration does what you want: >> >> server { >> # server 4 >> listen 443 ssl; >> listen 127.0.0.81:443 ssl; >> server_name "thing.com"; >> ... >> } > > Naturally I've IP aliased the 127.0.0.81 (for testing). Perhaps the usage of IP aliases prevents "*" from having the meaning of "attach this server block to every IP you find"? Am I confused here? > > Thanks in advance for any insight provided. > -- > Dave Hayes - Consultant - Altadena CA, USA - dave at jetcafe.org > >>>> *The opinions expressed above are entirely my own* <<<< > > "Luke, you'll find many of the truths we cling to depend > greatly upon our point of view." - Obi-Wan Kenobi > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From dave at jetcafe.org Fri Nov 11 19:13:01 2016 From: dave at jetcafe.org (Dave Hayes) Date: Fri, 11 Nov 2016 11:13:01 -0800 Subject: Multiple SSL listen statements and SNI In-Reply-To: References: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> <4E0AEFEE-6F80-40E7-A6AB-639CF7F2B0DF@sysoev.ru> Message-ID: On 11/11/2016 10:49, Igor Sysoev wrote: > Yes, *:443 matches all addresses except explicitly specified in listen directives with the same port 443. Ah! Thank you very much! This statement cleared up my confusion. I didn't see this statement in any documentation, but I could have missed it. > Consider it as fallback. On FreeBSD you can use ??bind?? parameter: > > listen *:443; > listen 127.0.0.81:443 bind; > > And there will be two separate sockets: *:443 and 127.0.0.81:443. > You can not use ??bind?? on Linux however if one of listen addresses is 0.0.0.0 (wildcard, *). > > So this configuration without ??bind??: > > listen *:443; > listen 127.0.0.81:443; > > emulates this two separate sockets behaviour in one 0.0.0.0:443 socket. Nice to know that, as I do use FreeBSD. I'm still a bit curious; why would I want two separate sockets when I am already listening on 0.0.0.0? At first glance, I'd think the emulation suits my needs more; no sense in taking up memory for an extra socket right? -- Dave Hayes - Consultant - Altadena CA, USA - dave at jetcafe.org >>>> *The opinions expressed above are entirely my own* <<<< Learn to behave from those who cannot. From igor at sysoev.ru Fri Nov 11 19:29:13 2016 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 11 Nov 2016 22:29:13 +0300 Subject: Multiple SSL listen statements and SNI In-Reply-To: References: <77908fa3-04ec-286f-e826-4f9c2007251c@jetcafe.org> <4E0AEFEE-6F80-40E7-A6AB-639CF7F2B0DF@sysoev.ru> Message-ID: On 11 Nov 2016, at 22:13, Dave Hayes wrote: > On 11/11/2016 10:49, Igor Sysoev wrote: >> Yes, *:443 matches all addresses except explicitly specified in listen directives with the same port 443. > > Ah! Thank you very much! This statement cleared up my confusion. I didn't see this statement in any documentation, but I could have missed it. > >> Consider it as fallback. On FreeBSD you can use ??bind?? 
parameter: >> >> listen *:443; >> listen 127.0.0.81:443 bind; >> >> And there will be two separate sockets: *:443 and 127.0.0.81:443. >> You can not use ??bind?? on Linux however if one of listen addresses is 0.0.0.0 (wildcard, *). >> >> So this configuration without ??bind??: >> >> listen *:443; >> listen 127.0.0.81:443; >> >> emulates this two separate sockets behaviour in one 0.0.0.0:443 socket. > > Nice to know that, as I do use FreeBSD. I'm still a bit curious; why would I want two separate sockets when I am already listening on 0.0.0.0? When nginx listen on *:80 it is calls getsockname() to learn exact IP address which client connected to. With ?bind? nginx already knows the address and eliminates the syscall. > At first glance, I'd think the emulation suits my needs more; no sense in taking up memory for an extra socket right? I believe memory saving is negligeable. There is another case: You can configure listen addresses which are not exists on the host when nginx starts and will be available later via CARP or similar protocol. -- Igor Sysoev http://nginx.com From Michael.Power at ELOTOUCH.com Sat Nov 12 03:25:49 2016 From: Michael.Power at ELOTOUCH.com (Michael Power) Date: Sat, 12 Nov 2016 03:25:49 +0000 Subject: nginx upstream source ip address Message-ID: <7598C365-9B04-4A71-BCF2-AC3634535626@elotouch.com> I have an nginx web server with two ip addresses. I want to connect to two upstream servers, where one upstream is configured to use one of the ip addresses and the other upstream is configured to use the second ip address. Is this possible? Michael Power -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Nov 12 19:10:44 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 12 Nov 2016 20:10:44 +0100 Subject: nginx upstream source ip address In-Reply-To: <7598C365-9B04-4A71-BCF2-AC3634535626@elotouch.com> References: <7598C365-9B04-4A71-BCF2-AC3634535626@elotouch.com> Message-ID: You can make nginx listen on 2 IP addresses with 2 listen directives. Or make it listen on all addresses if you wish. For your upstream, configure an upstream block with those 2 IP addresses and make a proxy_pass pinting to it. --- *B. R.* On Sat, Nov 12, 2016 at 4:25 AM, Michael Power wrote: > I have an nginx web server with two ip addresses. I want to connect to > two upstream servers, where one upstream is configured to use one of the ip > addresses and the other upstream is configured to use the second ip address. > > > > Is this possible? > > > > Michael Power > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Nov 13 08:36:51 2016 From: nginx-forum at forum.nginx.org (gigihc11) Date: Sun, 13 Nov 2016 03:36:51 -0500 Subject: slow https performance compared to http Message-ID: <4127a55d4638d35163cc739c691ee860.NginxMailingListEnglish@forum.nginx.org> Hi, I have: nginx 1.11.3 Ubuntu 16.04.1 LTS openssl 1.0.2g-1ubuntu4.5 amd64 libssl1.0.0:amd64 1.0.2g-1ubuntu4.5 weak CPU: N3150 16 GB RAM with this test-setup: open_file_cache max=1000 inactive=360s; open_file_cache_valid 30s; I test on running ab command on the same host as nginx is. For a 1.6KB text file I get 4600 req/s with http and 550 req/sec with https. I get the same using or not gzip encoding. Why is this huge performance difference? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270898#msg-270898 From lucas at lucasrolff.com Sun Nov 13 09:04:32 2016 From: lucas at lucasrolff.com (Lucas Rolff) Date: Sun, 13 Nov 2016 10:04:32 +0100 Subject: slow https performance compared to http In-Reply-To: <4127a55d4638d35163cc739c691ee860.NginxMailingListEnglish@forum.nginx.org> References: <4127a55d4638d35163cc739c691ee860.NginxMailingListEnglish@forum.nginx.org> Message-ID: <58282CA0.9000304@lucasrolff.com> Because you have the TLS handshake that has to be done which is CPU bound Try change things like ssl_ciphers (to something faster), and use ssl_session_cache/ -- Best Regards, Lucas Rolff gigihc11 wrote: > Hi, I have: > nginx 1.11.3 > Ubuntu 16.04.1 LTS > openssl 1.0.2g-1ubuntu4.5 amd64 > libssl1.0.0:amd64 1.0.2g-1ubuntu4.5 > weak CPU: N3150 > 16 GB RAM > > with this test-setup: > open_file_cache max=1000 inactive=360s; > open_file_cache_valid 30s; > > I test on running ab command on the same host as nginx is. > For a 1.6KB text file I get 4600 req/s with http and 550 req/sec with https. > I get the same using or not gzip encoding. > > Why is this huge performance difference? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270898#msg-270898 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sun Nov 13 09:38:29 2016 From: nginx-forum at forum.nginx.org (gigihc11) Date: Sun, 13 Nov 2016 04:38:29 -0500 Subject: slow https performance compared to http In-Reply-To: <58282CA0.9000304@lucasrolff.com> References: <58282CA0.9000304@lucasrolff.com> Message-ID: <648bc44aceb3a2c878f03e540d43b1d9.NginxMailingListEnglish@forum.nginx.org> Oh, I forgot to mention that the setup included also: ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270900#msg-270900 From nginx-forum at forum.nginx.org Sun Nov 13 13:06:09 2016 From: nginx-forum at forum.nginx.org (gigihc11) Date: Sun, 13 Nov 2016 08:06:09 -0500 Subject: slow https performance compared to http In-Reply-To: <58282CA0.9000304@lucasrolff.com> References: <58282CA0.9000304@lucasrolff.com> Message-ID: <7e5fcb2b33b8e055d9d6c2fecd6963db.NginxMailingListEnglish@forum.nginx.org> I used https://mozilla.github.io/server-side-tls/ssl-config-generator/ in order to configure the ssl part for my nginx: https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=nginx-1.11.3&openssl=1.0.2g&hsts=yes&profile=intermediate but with no OCSP Stapling, ssl_trusted_certificate, resolver. I have the same results. What puzzles me is the huge gap between http and https performance. Is this http/https performance ration the usual situation or am I doing something really wrong? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270901#msg-270901 From r at roze.lv Sun Nov 13 13:55:22 2016 From: r at roze.lv (Reinis Rozitis) Date: Sun, 13 Nov 2016 15:55:22 +0200 Subject: slow https performance compared to http In-Reply-To: <4127a55d4638d35163cc739c691ee860.NginxMailingListEnglish@forum.nginx.org> References: <4127a55d4638d35163cc739c691ee860.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000001d23db5$93dc9f10$bb95dd30$@roze.lv> > I test on running ab command on the same host as nginx is. Did you test with -k (keepalive) on or off? 
Without -k means that ab will do handshake every time and you will be limited by cpu and depending on the chosen cipher the performance (requests per second) can vary a lot. rr From nginx-forum at forum.nginx.org Sun Nov 13 18:06:57 2016 From: nginx-forum at forum.nginx.org (gigihc11) Date: Sun, 13 Nov 2016 13:06:57 -0500 Subject: slow https performance compared to http In-Reply-To: <000001d23db5$93dc9f10$bb95dd30$@roze.lv> References: <000001d23db5$93dc9f10$bb95dd30$@roze.lv> Message-ID: Just to help those not knowing about -k option: -k Enable the HTTP KeepAlive feature, i.e., perform multiple requests within one HTTP session. Default is no KeepAlive I'll do it but I guess the test will no longer be so relevant because I want to simulate different users. Anyway, the question is in fact: is this http/https ratio normal? If is it than I'll live with it, but if not then I might do something wrong. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270906#msg-270906 From luky-37 at hotmail.com Sun Nov 13 19:17:11 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 13 Nov 2016 19:17:11 +0000 Subject: AW: RE: slow https performance compared to http In-Reply-To: References: <000001d23db5$93dc9f10$bb95dd30$@roze.lv>, Message-ID: > I'll do it but I guess the test will no longer be so relevant because I want > to simulate different users. Real user/browser DO keep-alive. Sendings thousands of requests per second in dedicated TLS session is not what you would see in real life from real users. > Anyway, the question is in fact: is this http/https ratio normal? > If is it than I'll live with it, but if not then I might do something wrong. Your test (small files, each file in a new TLS session) is basically measuring the amount of TLS handshakes per second your CPU can manage, nothing else. Therefor, you don't have a "ratio" here, as that would require a realistic test settings. Enabling keepalive on ab is one of the things you can do. I don't know ab, so not sure if there is a better way. I also do not know if ab supports SSL session caching or TLS tickets, which you would have to keep in mind when benchmarking. The bottom point is, you need to understand your benchmark before you try to understand its result. From francis at daoine.org Sun Nov 13 20:18:08 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 13 Nov 2016 20:18:08 +0000 Subject: nginx upstream source ip address In-Reply-To: <7598C365-9B04-4A71-BCF2-AC3634535626@elotouch.com> References: <7598C365-9B04-4A71-BCF2-AC3634535626@elotouch.com> Message-ID: <20161113201808.GO23518@daoine.org> On Sat, Nov 12, 2016 at 03:25:49AM +0000, Michael Power wrote: Hi there, > I have an nginx web server with two ip addresses. I want to connect to two upstream servers, where one upstream is configured to use one of the ip addresses and the other upstream is configured to use the second ip address. > I think that you are asking about setting the source address that nginx should use when it is making a proxy_pass connection to an upstream. If so: in general, nginx leaves that to the OS to decide. So if you can set your local routing table such that any traffic to upstream1 comes from ip1 and traffic to upstream2 comes from ip2, then nginx will Just Work (as will anything else on your nginx-serving machine). If you won't do that, then if you have the case where proxy_pass in one location goes to upstream1 and proxy_pass in another location goes to upstream2, then proxy_bind (http://nginx.org/r/proxy_bind) can do what you want. 
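A minimal sketch of that second approach, with placeholder addresses and upstream names (nothing here is taken from your actual setup):

    location /one/ {
        proxy_bind 192.0.2.10;
        proxy_pass http://upstream1;
    }

    location /two/ {
        proxy_bind 192.0.2.11;
        proxy_pass http://upstream2;
    }

proxy_bind only chooses the local source address for the outgoing connection; which upstream is used still comes from proxy_pass.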
Cheers, f -- Francis Daly francis at daoine.org From rpaprocki at fearnothingproductions.net Sun Nov 13 21:23:20 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sun, 13 Nov 2016 13:23:20 -0800 Subject: AW: RE: slow https performance compared to http In-Reply-To: References: <000001d23db5$93dc9f10$bb95dd30$@roze.lv> Message-ID: > Enabling keepalive on ab is one of the things you can do. I don't know > ab, so not sure if there is a better way. I also do not know if ab supports > SSL session caching or TLS tickets, which you would have to keep in > mind when benchmarking. ab does not support TLS tickets (you can verify this with wireshark), so even if you do enable HTTP keepalive, the full TLS handshake has to be performed for each request. it's a poor tool to measure total potential throughout of HTTPS servers. From r at roze.lv Sun Nov 13 23:17:20 2016 From: r at roze.lv (Reinis Rozitis) Date: Mon, 14 Nov 2016 01:17:20 +0200 Subject: AW: RE: slow https performance compared to http In-Reply-To: References: <000001d23db5$93dc9f10$bb95dd30$@roze.lv> Message-ID: <000001d23e04$151d51c0$3f57f540$@roze.lv> > ab does not support TLS tickets (you can verify this with wireshark), so even if > you do enable HTTP keepalive, the full TLS handshake has to be performed for > each request. > > it's a poor tool to measure total potential throughout of HTTPS servers. Now that you mention it I kind of remember patching ab a long time ago with this: http://marc.info/?l=apache-httpd-dev&m=118581179812925&w=1 (to use it together with keepalive). So something like httperf should work better as test tool (maybe also siege but not sure about ssl session reuse there). rr From jerika.sdiwc at gmail.com Mon Nov 14 04:37:10 2016 From: jerika.sdiwc at gmail.com (Jerika Joshua) Date: Mon, 14 Nov 2016 12:37:10 +0800 Subject: Call for Papers ICEND2017 e-Technologies and Networks for Development Message-ID: Dear Colleague: The Fifth International Conference on e-Technologies and Networks for Development (ICeND2017) http://sdiwc.net/conferences/5th-international-conference- on-e-technologies-and-networks-for-development/ We invite you to attend and participate in the Fifth International Conference on e-Technologies and Networks for Development (ICeND2017). This conference is part of five conferences of The Sixth World Congress on Computing, Engineering and Technology (WCET). All registered papers will be published in SDIWC Digital Library, and in the proceedings of the conference. IMPORTANT DATES: Submission Deadline Open until June 11, 2017 Notification of Acceptance June 20, 2017 or 4 weeks from the date of submission Camera Ready Submission July 1, 2017 Registration Deadline July 1, 2017 Conference Dates July 11-13, 2017 SUBMISSION INFORMATION & GUIDELINES: Submit full papers electronically as pdf format without author(s) name. See Submission Guidelines on the conference website. http://sdiwc.net/conferences/5th-international-conference- on-e-technologies-and-networks-for-development/openconf/openconf.php Reviewing uses double-blind review process by at least two reviewers. To facilitate this, authors need to ensure that their manuscripts are prepared in a way that does not show their identity. icend17 at sdiwc.net for more information. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Mon Nov 14 08:07:50 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Mon, 14 Nov 2016 03:07:50 -0500 Subject: AW: RE: slow https performance compared to http In-Reply-To: References: Message-ID: <2dcd538bff4abbc8399d41a5ee7d944c.NginxMailingListEnglish@forum.nginx.org> Lukas Tribus Wrote: ------------------------------------------------------- > > I'll do it but I guess the test will no longer be so relevant > because I want > > to simulate different users. > > Real user/browser DO keep-alive. ... I agree but I think that separate/different simultaneous users won't use a common connection so for this very specific scenario keep-alive won't matter. Of course for every individual user keep-alive will matter but this aspect for the moment I won't to ignore in testing. For this specific scenario (separate/different simultaneous users) is this ratio http/https (4600/550) abnormal? PS: with Apache/2.4.18 (Ubuntu) the ration is 40% lower Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270914#msg-270914 From luky-37 at hotmail.com Mon Nov 14 08:57:39 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 14 Nov 2016 08:57:39 +0000 Subject: AW: AW: RE: slow https performance compared to http In-Reply-To: <2dcd538bff4abbc8399d41a5ee7d944c.NginxMailingListEnglish@forum.nginx.org> References: , <2dcd538bff4abbc8399d41a5ee7d944c.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I agree but I think that separate/different simultaneous users won't use a > common connection so for this very specific scenario keep-alive won't > matter. Of course for every individual user keep-alive will matter but this > aspect for the moment I won't to ignore in testing. It does matter, as there are no clients that request 1KB files all day. You will see a limited number of full TLS handshakes per second in reality, because there is keep-alive and there is TLS resumption. Of course every single user causes at the very least 1 TLS handshake. That's why ab would have to be *properly* configured to reflect that. > For this specific scenario (separate/different simultaneous users) is this > ratio http/https (4600/550) abnormal? > > PS: with Apache/2.4.18 (Ubuntu) the ration is 40% lower That depends: how many nginx workers do you have compared to how many apache threads and how does your per-core CPU load look like when benchmarking? From nginx-forum at forum.nginx.org Mon Nov 14 09:04:48 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Mon, 14 Nov 2016 04:04:48 -0500 Subject: AW: AW: RE: slow https performance compared to http In-Reply-To: References: Message-ID: Lukas Tribus Wrote: ------------------------------------------------------- > That depends: how many nginx workers do you have compared to > how many apache threads and how does your per-core CPU load > look like when benchmarking? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx 4 threads and 4 CPU (both for apache and nginx) with 100% CPU load on test So, what's the answer now about the http/https (4600/550) ratio for the specific case I presented? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270916#msg-270916 From nginx-forum at forum.nginx.org Mon Nov 14 09:26:44 2016 From: nginx-forum at forum.nginx.org (ganadara) Date: Mon, 14 Nov 2016 04:26:44 -0500 Subject: does not work on any request on windows In-Reply-To: <6A45E15B42C2BF71.868bc320-1455-4ee9-b034-cb2ca377b57d@mail.outlook.com> References: <6A45E15B42C2BF71.868bc320-1455-4ee9-b034-cb2ca377b57d@mail.outlook.com> Message-ID: <4b90ae89c4d5aed4ff7e000538cc16bc.NginxMailingListEnglish@forum.nginx.org> update information. This is the result of ngx_http_wait_request_handler after c->recv(c, b->last, size). The normal response has been successfully responded to the user request. Abnormal does not respond to user requests. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- normal p b $9 = { pos = 0x1ecbe00 "GET / HTTP/1.1\r\nHost: 192.168.40.76\r\nConnection: keep-alive\r\nCache-Control: max-age=0\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch, br\r\nAccept-Language: ko,en-US;q=0.8,en;q=0.6,zh-CN;q=0.4,zh;q=0.2\r\n\r\n", last = 0x1ecbe00 "GET / HTTP/1.1\r\nHost: 192.168.40.76\r\nConnection: keep-alive\r\nCache-Control: max-age=0\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch, br\r\nAccept-Language: ko,en-US;q=0.8,en;q=0.6,zh-CN;q=0.4,zh;q=0.2\r\n\r\n", file_pos = 0, file_last = 0, start = 0x1ecbe00 "GET / HTTP/1.1\r\nHost: 192.168.40.76\r\nConnection: keep-alive\r\nCache-Control: max-age=0\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch, br\r\nAccept-Language: ko,en-US;q=0.8,en;q=0.6,zh-CN;q=0.4,zh;q=0.2\r\n\r\n", end = 0x1ecc200 "", tag = 0x0, file = 0x0, shadow = 0x0, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 0, flush = 0, sync = 0, last_buf = 0, last_in_chain = 0, last_shadow = 0, temp_file = 0, num = 0} abnormal p b pos = 0x1172730 "GET /index.html HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: keep-alive\r\nCache-Control: max-age=0\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch, br\r\nAccept-Language: ko,en-US;q=0.8,en;q=0.6,zh-CN;q=0.4,zh;q=0.2\r\n\r\n\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\27 2\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\ 
r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\ 272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\27 2\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\002", last = 0x11728e4 "\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r?\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r \272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\2 72\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272 \r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r \272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\002", file_pos = 0, file_last = 0, start = 0x1172730 "GET /index.html HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: keep-alive\r\nCache-Control: max-age=0\r\nUpgrade-Insecure-Requests: 1\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64 ) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\nAccept-Encoding: gzip, deflate, sdch , br\r\nAccept-Language: ko,en-US;q=0.8,en;q=0.6,zh-CN;q=0.4,zh;q=0.2\r\n\r\n\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\ 272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\27 2\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\ r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\ 272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\r\272\002", end = 0x1172b30 "\002", tag = 0x0, file = 0x0, shadow = 0x0, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 0, flush = 0, sync = 0, last_buf = 0, last_in_chain = 0, last_shadow = 0, temp_file = 0, num = 0} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270875,270917#msg-270917 From nginx-forum at forum.nginx.org Mon Nov 14 10:08:33 2016 From: nginx-forum at forum.nginx.org (nemster) Date: Mon, 14 Nov 2016 05:08:33 -0500 Subject: custom logic after connection is closed Message-ID: <051d6259da4e8e25c9e924f2d92996a7.NginxMailingListEnglish@forum.nginx.org> Hi! 
Is it possible to write a plugin that does some additional stuff after a TLS (http/1.1, http/2.0) TCP connection. I would want to keep some extra struct for each TLS connection and manipulated it with every request, then once the TLS (or TCP) connection closes i would want to do some processing and cleanup. thanks n Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270918,270918#msg-270918 From lists at lazygranch.com Mon Nov 14 10:26:46 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 14 Nov 2016 02:26:46 -0800 Subject: Is this a valid request? Message-ID: <20161114022646.6b95d881@linux-h57q.site> I keep my nginx server set up dumb. (Don't need anything fancy at the moment). Is this request below possibly valid? I flag anything with a question mark in it as hacking, but maybe IOS makes some requests that some websites will process, and others would just ignore after the question mark. 444 72.49.13.171 - - [14/Nov/2016:06:55:52 +0000] "GET /ttr.htm?sa=X&sqi=2&ved=0ahUKEwiB7Nyj1afQAhWJZCYKHWLGAW8Q_B0IETAA HTTP/1.1" 0 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 10_1_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) GSA/20.3.136880903 Mobile/14B100 Safari/600.1.4" "-" From luky-37 at hotmail.com Mon Nov 14 10:42:02 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 14 Nov 2016 10:42:02 +0000 Subject: AW: AW: AW: RE: slow https performance compared to http In-Reply-To: References: , Message-ID: >?4 threads and 4 CPU (both for apache and nginx) with 100% CPU load on test >?So, what's the answer now about the http/https (4600/550) ratio for the >?specific case I presented? It should perform the same as Apache in this case. From ruz at sports.ru Mon Nov 14 11:13:18 2016 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Mon, 14 Nov 2016 14:13:18 +0300 Subject: x-accel-redirect to @location and empty $upstream_http_some_header Message-ID: Hi, One URL redirects to @streams location: HTTP/1.0 200 OK expires: 0 cache-control: no-cache, no-store, must-revalidate x-accel-redirect: @streams Content-Type: text/html; charset=utf-8 Status: 200 x-real-location: /stream/?user_id=153847603&lang=RU pragma: no-cache @streams Location looks like this: location @streams { proxy_set_header X-Real-IP $header_ip; ... more proxy sets... proxy_set_header X-Y ttt$upstream_http_x_real_location$upstream_http_status; proxy_set_header X-Z ttt$http_x_real_location; proxy_pass http://streams-backend$upstream_http_x_real_location; } However, $upstream_http_x_real_location variable is empty and request reaches backed with original URL. GET /core/user/stream/ HTTP/1.0 ... X-Y: ttt X-Z: ttt Tested with nginx 1.8.0 and 1.10.2 with the same outcome. Is it a bug? Misconfiguration on my side? Any workarounds available? -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 14 12:38:19 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Nov 2016 15:38:19 +0300 Subject: custom logic after connection is closed In-Reply-To: <051d6259da4e8e25c9e924f2d92996a7.NginxMailingListEnglish@forum.nginx.org> References: <051d6259da4e8e25c9e924f2d92996a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161114123819.GA8196@mdounin.ru> Hello! On Mon, Nov 14, 2016 at 05:08:33AM -0500, nemster wrote: > Hi! 
> Is it possible to write a plugin that does some additional stuff after a TLS > (http/1.1, http/2.0) TCP connection. > I would want to keep some extra struct for each TLS connection and > manipulated it with every request, then once the TLS (or TCP) connection > closes i would want to do some processing and cleanup. You can install a pool cleanup handler on a connection pool. Grep sources for ngx_pool_cleanup_add() for usage examples. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 14 12:45:19 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Nov 2016 15:45:19 +0300 Subject: x-accel-redirect to @location and empty $upstream_http_some_header In-Reply-To: References: Message-ID: <20161114124519.GB8196@mdounin.ru> Hello! On Mon, Nov 14, 2016 at 02:13:18PM +0300, ?????? ??????? wrote: > One URL redirects to @streams location: > > HTTP/1.0 200 OK > expires: 0 > cache-control: no-cache, no-store, must-revalidate > x-accel-redirect: @streams > Content-Type: text/html; charset=utf-8 > Status: 200 > x-real-location: /stream/?user_id=153847603&lang=RU > pragma: no-cache > > @streams > > Location looks like this: > > location @streams { > proxy_set_header X-Real-IP $header_ip; > ... more proxy sets... > proxy_set_header X-Y > ttt$upstream_http_x_real_location$upstream_http_status; > proxy_set_header X-Z ttt$http_x_real_location; > proxy_pass http://streams-backend$upstream_http_x_real_location; > } > > However, $upstream_http_x_real_location variable is empty and request > reaches backed with original URL. That's expected. All $upstream_* variables are re-initialized as long as proxy module starts working in a new location, and since there were no response yet when the proxy_pass value is evaluated, it resolves to an empty value. If you want to use $upstream_* variables set by a response with X-Accel-Redirect, you have to store them somewhere else. For example, it can be done using the "set" directive of the rewrite module, which is evaluated before the request is proxied: set $stored_real_location $upstream_http_x_real_location; proxy_pass http://foo$stored_real_location; -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Nov 14 12:55:39 2016 From: nginx-forum at forum.nginx.org (nemster) Date: Mon, 14 Nov 2016 07:55:39 -0500 Subject: custom logic after connection is closed In-Reply-To: <20161114123819.GA8196@mdounin.ru> References: <20161114123819.GA8196@mdounin.ru> Message-ID: Hi Maxim, Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Mon, Nov 14, 2016 at 05:08:33AM -0500, nemster wrote: > > > Hi! > > Is it possible to write a plugin that does some additional stuff > after a TLS > > (http/1.1, http/2.0) TCP connection. > > I would want to keep some extra struct for each TLS connection and > > manipulated it with every request, then once the TLS (or TCP) > connection > > closes i would want to do some processing and cleanup. > > You can install a pool cleanup handler on a connection pool. Grep > sources for ngx_pool_cleanup_add() for usage examples. looks like a good candidate, however from what i understand in ngx_http_close_connection that is kicked of in the end when ngx_destroy_pool is called. however the TLS session is deleted after that. ideally i would want access to TLS Parameters such as TLS Session Cookie and the crypto params. I could log them maybe at session start and then finalize only in the pool cleanup handler, but that would basically double that data in memory for no reason. 
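For what it's worth, the registration itself is small; a rough sketch (the type and function names are made up, and as noted above anything you need from the TLS session has to be copied into the context while the connection is still alive, since the session is gone by the time the handler runs):

#include <ngx_config.h>
#include <ngx_core.h>

typedef struct {
    ngx_uint_t  requests;   /* updated on every request on this connection */
    /* ... per-connection TLS details copied earlier ... */
} my_conn_ctx_t;

static void
my_conn_cleanup(void *data)
{
    my_conn_ctx_t  *ctx = data;

    /* called when the connection pool is destroyed, i.e. on close:
     * do the final processing / logging with ctx here */
    (void) ctx;
}

static ngx_int_t
my_conn_ctx_attach(ngx_connection_t *c)
{
    ngx_pool_cleanup_t  *cln;

    /* allocates sizeof(my_conn_ctx_t) from the pool and returns it in cln->data */
    cln = ngx_pool_cleanup_add(c->pool, sizeof(my_conn_ctx_t));
    if (cln == NULL) {
        return NGX_ERROR;
    }

    ngx_memzero(cln->data, sizeof(my_conn_ctx_t));
    cln->handler = my_conn_cleanup;

    return NGX_OK;
}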
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270918,270924#msg-270924 From nginx-forum at forum.nginx.org Mon Nov 14 14:20:50 2016 From: nginx-forum at forum.nginx.org (adrhc) Date: Mon, 14 Nov 2016 09:20:50 -0500 Subject: AW: AW: AW: RE: slow https performance compared to http In-Reply-To: References: Message-ID: Lukas Tribus Wrote: ------------------------------------------------------- > >?4 threads and 4 CPU (both for apache and nginx) with 100% CPU load > on test > >?So, what's the answer now about the http/https (4600/550) ratio for > the > >?specific case I presented? > > It should perform the same as Apache in this case. Well, as I said earlier, with Apache/2.4.18 (Ubuntu) the ratio is 40% lower comparing with same ratio for nginx. For https with nginx I get 550 req/sec while with Apache I get 350 req/sec. So I guess from your answer it's natural to have this ratio (http/https). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270898,270927#msg-270927 From nginx-forum at forum.nginx.org Mon Nov 14 15:04:08 2016 From: nginx-forum at forum.nginx.org (debilish99) Date: Mon, 14 Nov 2016 10:04:08 -0500 Subject: Bloking Bad bots Message-ID: Hello, I have a server with several domains, in the configuration file of each domain I have a line like this to block bad bots. If ($ http_user_agent ~ * (zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot|DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot) Return 403; } This works fine. The question is, if I increase the list of bad bots to 1000, for example, this would be a speed problem when nginx manages every request that arrives. I have domains that can have 500,000 hits daily and up to 20,000 hits. Thank you all. Greetings. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270930,270930#msg-270930 From lists at lazygranch.com Mon Nov 14 15:30:15 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 14 Nov 2016 07:30:15 -0800 Subject: Bloking Bad bots In-Reply-To: References: Message-ID: <20161114153015.5484627.22776.16301@lazygranch.com> You can block some of those bots at the firewall permanently. ? I use the nginx map feature in a similar manner, but I don't know if map is more efficient than your code. ?I started out blocking similar to your scheme, but the map feature looks clear to me in the conf file. Majestic and Sogou sure are annoying. For what I block, I use 444 rather than 403. (And yes, I know that destroys the matter/anti-matter mix of the universe, so don't lecture me.) I then eyeball the 444 hits periodically, using a script to pull the 444 requests out of the access.log file. I have another script to get just the IP addresses from access.log. For the search engines like Majestic and Sogou, which don't seem to have an IP space you can look up via BGP tools, I take the IP used and add it to my firewall blocking table. I can go weeks before a new IP gets used. ? Original Message ? From: debilish99 Sent: Monday, November 14, 2016 7:04 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Bloking Bad bots Hello, I have a server with several domains, in the configuration file of each domain I have a line like this to block bad bots. If ($ http_user_agent ~ * (zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot|DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot) Return 403; } This works fine. The question is, if I increase the list of bad bots to 1000, for example, this would be a speed problem when nginx manages every request that arrives. 
I have domains that can have 500,000 hits daily and up to 20,000 hits. Thank you all. Greetings. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270930,270930#msg-270930 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Nov 14 15:38:08 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Mon, 14 Nov 2016 10:38:08 -0500 Subject: Internal IP in HTTP Location Header Response? Message-ID: <8272a6247428a361521af814eac4babb.NginxMailingListEnglish@forum.nginx.org> Hello - we have been dinged on our network penetration test because one of our Nginx web servers is returning the internal IP in the HTTP location response header. This is our only Nginx server that is not acting as a reverse proxy, so I'm at a bit of a loss on how to disable Nginx returning the Internal IP? Here is the bulk of our config: server { listen 192.168.1.2:80; server_name mydomain.com www.mydomain.com location / { return 301 https://$server_name$request_uri; } } server { listen 192.168.1.2:443 ssl http2; server_name mydomain.com www.mydomain.com ssl on; ssl_certificate /etc/nginx/ssl/mycert.crt; ssl_certificate_key /etc/nginx/ssl/mykey.key ssl_protocols TLSv1.2 TLSv1.1 TLSv1; ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-A[...] ssl_prefer_server_ciphers on; ssl_dhparam /etc/nginx/ssl/dhparam.pem; ssl_stapling on; resolver 8.8.8.8 8.8.4.4 ipv6=off; location / { add_header X-Frame-Options SAMEORIGIN; add_header Strict-Transport-Security max-age=31536[...] root /usr/share/nginx/html/; index index.html; } } [+] Location Header: https://192.168.1.2/images/ [+] Result for my.external.ip.address found Internal IP: 192.168.1.2 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270932,270932#msg-270932 From anoopalias01 at gmail.com Mon Nov 14 15:40:46 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 14 Nov 2016 21:10:46 +0530 Subject: Bloking Bad bots In-Reply-To: <20161114153015.5484627.22776.16301@lazygranch.com> References: <20161114153015.5484627.22776.16301@lazygranch.com> Message-ID: I had asked the same question once and got no to the point response. So here is what I infer: the if causes nginx to check the header for each request against the list of patterns you have configured and return a 403 if found . So the processing slows down on each request to for the if processing.. If you see mod_security etc ..this is also doing something similar and doing a check on each request - so in that way (that is if you are willing to compromise lack of speed for the user agent checking) this is fine . But you are definitely making the nginx slower and consume more resource by adding the if there and making it more by increasing the list size. On Mon, Nov 14, 2016 at 9:00 PM, wrote: > You can block some of those bots at the firewall permanently. > > I use the nginx map feature in a similar manner, but I don't know if map > is more efficient than your code. ?I started out blocking similar to your > scheme, but the map feature looks clear to me in the conf file. > > Majestic and Sogou sure are annoying. For what I block, I use 444 rather > than 403. (And yes, I know that destroys the matter/anti-matter mix of the > universe, so don't lecture me.) I then eyeball the 444 hits periodically, > using a script to pull the 444 requests out of the access.log file. I have > another script to get just the IP addresses from access.log. 
> > For the search engines like Majestic and Sogou, which don't seem to have > an IP space you can look up via BGP tools, I take the IP used and add it to > my firewall blocking table. I can go weeks before a new IP gets used. > > Original Message > From: debilish99 > Sent: Monday, November 14, 2016 7:04 AM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Bloking Bad bots > > Hello, > > I have a server with several domains, in the configuration file of each > domain I have a line like this to block bad bots. > > If ($ http_user_agent ~ * > (zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot| > DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot) > Return 403; > } > > This works fine. > > The question is, if I increase the list of bad bots to 1000, for example, > this would be a speed problem when nginx manages every request that > arrives. > > I have domains that can have 500,000 hits daily and up to 20,000 hits. > > Thank you all. > > Greetings. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,270930,270930#msg-270930 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruz at sports.ru Mon Nov 14 15:44:10 2016 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Mon, 14 Nov 2016 18:44:10 +0300 Subject: x-accel-redirect to @location and empty $upstream_http_some_header In-Reply-To: <20161114124519.GB8196@mdounin.ru> References: <20161114124519.GB8196@mdounin.ru> Message-ID: On Mon, Nov 14, 2016 at 3:45 PM, Maxim Dounin wrote: > Hello! > > On Mon, Nov 14, 2016 at 02:13:18PM +0300, ?????? ??????? wrote: > > > One URL redirects to @streams location: > > > > HTTP/1.0 200 OK > > expires: 0 > > cache-control: no-cache, no-store, must-revalidate > > x-accel-redirect: @streams > > Content-Type: text/html; charset=utf-8 > > Status: 200 > > x-real-location: /stream/?user_id=153847603&lang=RU > > pragma: no-cache > > > > @streams > > > > Location looks like this: > > > > location @streams { > > proxy_set_header X-Real-IP $header_ip; > > ... more proxy sets... > > proxy_set_header X-Y > > ttt$upstream_http_x_real_location$upstream_http_status; > > proxy_set_header X-Z ttt$http_x_real_location; > > proxy_pass http://streams-backend$upstream_http_x_real_location; > > } > > > > However, $upstream_http_x_real_location variable is empty and request > > reaches backed with original URL. > > That's expected. All $upstream_* variables are re-initialized as > long as proxy module starts working in a new location, and since > there were no response yet when the proxy_pass value is evaluated, > it resolves to an empty value. > > If you want to use $upstream_* variables set by a response with > X-Accel-Redirect, you have to store them somewhere else. For > example, it can be done using the "set" directive of the rewrite > module, which is evaluated before the request is proxied: > > set $stored_real_location $upstream_http_x_real_location; > proxy_pass http://foo$stored_real_location; > This helps, suspected something like this, but until the last moment couldn't believe upstream module re-initializes variables before request to upstream. Wonder if it can be delayed until after request phase. 
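In config terms the suggested workaround for the @streams location comes down to roughly this (sketch only; the other proxy_set_header lines are left out):

    location @streams {
        # "set" is handled by the rewrite module before the proxy module
        # re-initialises its variables, so it still sees the header from
        # the response that issued the X-Accel-Redirect
        set $stored_real_location $upstream_http_x_real_location;

        proxy_pass http://streams-backend$stored_real_location;
    }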
-- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Nov 14 16:07:47 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Mon, 14 Nov 2016 11:07:47 -0500 Subject: Internal IP in HTTP Location Header Response? In-Reply-To: <8272a6247428a361521af814eac4babb.NginxMailingListEnglish@forum.nginx.org> References: <8272a6247428a361521af814eac4babb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0004e5aa3157a03285b7f1c2b39711fb.NginxMailingListEnglish@forum.nginx.org> Actually, I think this may have been because after upgrading Nginx, it reinstalled the default.conf file. I've removed it, left the config above, restarted Nginx, and the internal IP doesn't seem to be leaking any longer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270932,270935#msg-270935 From lists at lazygranch.com Mon Nov 14 16:51:27 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 14 Nov 2016 08:51:27 -0800 Subject: Bloking Bad bots In-Reply-To: References: <20161114153015.5484627.22776.16301@lazygranch.com> Message-ID: <20161114165127.5484627.25383.16312@lazygranch.com> An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Mon Nov 14 16:56:15 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 14 Nov 2016 08:56:15 -0800 Subject: Bloking Bad bots In-Reply-To: <20161114165127.5484627.25383.16312@lazygranch.com> References: <20161114153015.5484627.22776.16301@lazygranch.com> <20161114165127.5484627.25383.16312@lazygranch.com> Message-ID: On Mon, Nov 14, 2016 at 8:51 AM, wrote: > I'd be shocked if the map function doesn't use a smart search scheme > rather than check every item. > You're in for a bit of a shock then. It is a linear search :p Curious as to what you think it should look like instead? Getting back to the original question though, a map should (_should_) be faster than building a larger and larger regex, particularly if the map is doing string comparison as opposed to regex searching for each map member. Building large alternation-oriented regular expressions can get pretty expensive rather quickly, though some of that will depend on the regex engine and compile-time options (e.g, are you using PCRE JIT, etc). -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Mon Nov 14 17:01:14 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 14 Nov 2016 09:01:14 -0800 Subject: Bloking Bad bots In-Reply-To: References: <20161114153015.5484627.22776.16301@lazygranch.com> <20161114165127.5484627.25383.16312@lazygranch.com> Message-ID: <20161114170114.5484627.82011.16316@lazygranch.com> An HTML attachment was scrubbed... URL: From lists at ssl-mail.com Mon Nov 14 17:06:10 2016 From: lists at ssl-mail.com (lists at ssl-mail.com) Date: Mon, 14 Nov 2016 09:06:10 -0800 Subject: Bloking Bad bots In-Reply-To: References: <20161114153015.5484627.22776.16301@lazygranch.com> <20161114165127.5484627.25383.16312@lazygranch.com> Message-ID: <1479143170.2720057.787350465.5FAA2E43@webmail.messagingengine.com> fwiw, I use the map approach discussed here. I've a list of a hundred or so 'bad bots'. I reply with a 444. Screw 'em. 
IMO, the performance hit of blocking them is far less than the performance havoc they wreak if allowed to (try to) scan your site, &/or the inevitable flood of crap from your "new BFFs" originating from under dozens of rocks ... I also scan my logs for bad bot hits' 444 rejects (often using just fail2ban) , and when over whatever threshhold I set, I mod an firewall IPSET with the errant IP and that takes care of them for whatever time period I choose, with a much lower performance hit on my server. Ideal? Nope. WORKSFORME? Absolutely. From ph.gras at worldonline.fr Mon Nov 14 17:12:28 2016 From: ph.gras at worldonline.fr (Ph. Gras) Date: Mon, 14 Nov 2016 18:12:28 +0100 Subject: Bloking Bad bots In-Reply-To: References: Message-ID: Hi there ! so I do, with 2 different ways : ============================================== if ($http_user_agent ~* MJ12bot|SemrushBot) { return 403; } if ($http_user_agent ~* bot|crawl|spider|tools|java) { rewrite ^ http://www.cnrtl.fr/definition/cr?ole redirect; } ============================================== Regards, Ph. Gras Le 14 nov. 2016 ? 16:04, "debilish99" a ?crit : > Hello, > > I have a server with several domains, in the configuration file of each > domain I have a line like this to block bad bots. > > If ($ http_user_agent ~ * > (zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot|DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot) > Return 403; > } > > This works fine. > > The question is, if I increase the list of bad bots to 1000, for example, > this would be a speed problem when nginx manages every request that > arrives. > > I have domains that can have 500,000 hits daily and up to 20,000 hits. > > Thank you all. > > Greetings. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270930,270930#msg-270930 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Nov 14 22:18:15 2016 From: nginx-forum at forum.nginx.org (George) Date: Mon, 14 Nov 2016 17:18:15 -0500 Subject: Bloking Bad bots In-Reply-To: References: Message-ID: I use nginx maps which depending on user agent either block, rate limit or whitelist https://community.centminmod.com/threads/blocking-bad-or-aggressive-bots.6433/ as the list gets large nginx maps just make it easier to manage Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270930,270940#msg-270940 From nginx-forum at forum.nginx.org Mon Nov 14 23:14:43 2016 From: nginx-forum at forum.nginx.org (jwal) Date: Mon, 14 Nov 2016 18:14:43 -0500 Subject: Hide a request cookie in proxy_pass In-Reply-To: <20140829172725.GQ1849@mdounin.ru> References: <20140829172725.GQ1849@mdounin.ru> Message-ID: Hi, Thanks for this; it is pretty close to what I need. I just tried it out in the regex101.com editor and I think there might be a vulnerability: https://regex101.com/delete/ypHV2Yw6o3wHqGDQTHRPZw3r The client could include the same cookie name in twice. This regexp would only strip out one of them. If the client sets a Javascript cookie with the same name as the HttpOnly cookie you are trying to protect then they might end up getting the secret cookie passed through to the origin server. Not sure if you can contrive a practical attack from this observation. I have not yet found a general solution. In my case I am using the auth_request directive of Nginx so the auth_request service (a Python script) can provide the value of the onward Cookie header. 
Regards, James Posted at Nginx Forum: https://forum.nginx.org/read.php?2,252944,270941#msg-270941 From nginx-forum at forum.nginx.org Mon Nov 14 23:16:21 2016 From: nginx-forum at forum.nginx.org (jwal) Date: Mon, 14 Nov 2016 18:16:21 -0500 Subject: Hide a request cookie in proxy_pass In-Reply-To: References: <20140829172725.GQ1849@mdounin.ru> Message-ID: Oops: this is the correct link: https://regex101.com/r/RZltB6/1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,252944,270942#msg-270942 From stardothosting at gmail.com Tue Nov 15 05:16:05 2016 From: stardothosting at gmail.com (Star Dot) Date: Tue, 15 Nov 2016 00:16:05 -0500 Subject: Is this a valid request? In-Reply-To: <20161114022646.6b95d881@linux-h57q.site> References: <20161114022646.6b95d881@linux-h57q.site> Message-ID: Dont see any traversal or injection attempt, but not knowing what is a "legitimate" request or the application architecture, its difficult to comment further. -- StackStar Managed Hosting Services : https://www.stackstar.com Shift8 Web Design in Toronto : https://www.shift8web.ca On Mon, Nov 14, 2016 at 5:26 AM, lists at lazygranch.com wrote: > I keep my nginx server set up dumb. (Don't need anything fancy at the > moment). Is this request below possibly valid? I flag anything with a > question mark in it as hacking, but maybe IOS makes some requests that > some websites will process, and others would just ignore after the > question mark. > > 444 72.49.13.171 - - [14/Nov/2016:06:55:52 +0000] "GET > /ttr.htm?sa=X&sqi=2&ved=0ahUKEwiB7Nyj1afQAhWJZCYKHWLGAW8Q_B0IETAA > HTTP/1.1" 0 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 10_1_1 like Mac OS X) > AppleWebKit/600.1.4 (KHTML, like Gecko) GSA/20.3.136880903 Mobile/14B100 > Safari/600.1.4" "-" > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stardothosting at gmail.com Tue Nov 15 05:22:02 2016 From: stardothosting at gmail.com (Star Dot) Date: Tue, 15 Nov 2016 00:22:02 -0500 Subject: Bloking Bad bots In-Reply-To: References: Message-ID: You could also look at the nginx module naxsi : https://github.com/nbs-system/naxsi More flexibility with regex and actions -- StackStar Managed Hosting Services : https://www.stackstar.com Shift8 Web Design in Toronto : https://www.shift8web.ca On Mon, Nov 14, 2016 at 10:04 AM, debilish99 wrote: > Hello, > > I have a server with several domains, in the configuration file of each > domain I have a line like this to block bad bots. > > If ($ http_user_agent ~ * > (zealbot|MJ12bot|AhrefsBot|sogou|PaperLiBot|uipbot| > DotBot|GetIntent|Cliqzbot|YandexBot|Nutch|TurnitinBot|IndeedBot) > Return 403; > } > > This works fine. > > The question is, if I increase the list of bad bots to 1000, for example, > this would be a speed problem when nginx manages every request that > arrives. > > I have domains that can have 500,000 hits daily and up to 20,000 hits. > > Thank you all. > > Greetings. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,270930,270930#msg-270930 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Nov 15 06:09:10 2016 From: nginx-forum at forum.nginx.org (mex) Date: Tue, 15 Nov 2016 01:09:10 -0500 Subject: Blocking tens of thousands of IP's In-Reply-To: <20161108225800.28C282C51184@mail.nginx.com> References: <20161108225800.28C282C51184@mail.nginx.com> Message-ID: <73cde68669a2c02b57437c8beafee059.NginxMailingListEnglish@forum.nginx.org> How do you transfer metrics from nginx to your pfsense? mayak Wrote: ------------------------------------------------------- > We are blocking 2.2 million addresses, however, we do it at the > firewall/router (pfsense pfBlocker). > > Ultra fast. > > HTH > > Mayak > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270680,270945#msg-270945 From lists at lazygranch.com Tue Nov 15 08:51:40 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 15 Nov 2016 00:51:40 -0800 Subject: Is this a valid request? In-Reply-To: References: <20161114022646.6b95d881@linux-h57q.site> Message-ID: <20161115085140.5484628.34646.16381@lazygranch.com> An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Nov 15 08:54:47 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 15 Nov 2016 00:54:47 -0800 Subject: Bloking Bad bots In-Reply-To: References: Message-ID: <20161115085447.5484628.36994.16383@lazygranch.com> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 15 09:05:14 2016 From: nginx-forum at forum.nginx.org (debilish99) Date: Tue, 15 Nov 2016 04:05:14 -0500 Subject: Bloking Bad bots In-Reply-To: References: Message-ID: Many thanks to all for your contributions. My conclusion is that the method I use with many bots would be slow. It seems the best option is to use nginx maps: https://forum.nginx.org/read.php?11,255678,270417#msg-270417 https://community.centminmod.com/threads/blocking-bad-or-aggressive-bots.6433/ https://github.com/mitchellkrogza/nginx-ultimate-bad-bot-blocker Greetings to all. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270930,270948#msg-270948 From nginx-forum at forum.nginx.org Tue Nov 15 12:04:42 2016 From: nginx-forum at forum.nginx.org (piotr.pawlowski) Date: Tue, 15 Nov 2016 07:04:42 -0500 Subject: Wildcard in location directive Message-ID: <94c0b0d16f03e30e0ae740a807bb5780.NginxMailingListEnglish@forum.nginx.org> Gents, I am trying to setup location block which has wildcard 'inside' regex. Here is what I think should work: location ~ /documents/(.*)static=false$ { proxy_pass http://upstream; } location /documents { try_files $uri /test.html; } location = /test.html { expires 30s; } Unfortunately first location block is not working for me. My testing URL is domain.com/documents/23/582ad23330d63a078c967bc3/-1476160403/DUH/POI/2016-12-13/params?variable=my&var=0&many=other-variables&static=false . As you can see URI is quite complex, that is why I want to use wildcard. Any idea why it is not working and how to fix this? Thank you in advance. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270949,270949#msg-270949 From nginx-forum at forum.nginx.org Tue Nov 15 12:26:55 2016 From: nginx-forum at forum.nginx.org (ganadara) Date: Tue, 15 Nov 2016 07:26:55 -0500 Subject: [SOLVED]does not work on any request on windows In-Reply-To: <4b90ae89c4d5aed4ff7e000538cc16bc.NginxMailingListEnglish@forum.nginx.org> References: <6A45E15B42C2BF71.868bc320-1455-4ee9-b034-cb2ca377b57d@mail.outlook.com> <4b90ae89c4d5aed4ff7e000538cc16bc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <559300616e5d4d12b4f48e312ae59b77.NginxMailingListEnglish@forum.nginx.org> Add configuration. --with-select_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_stub_status_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_slice_module --with-mail --with-stream --with-http_ssl_module --with-mail_ssl_module --with-stream_ssl_module --with-ipv6 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270875,270950#msg-270950 From francis at daoine.org Tue Nov 15 12:37:59 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Nov 2016 12:37:59 +0000 Subject: Wildcard in location directive In-Reply-To: <94c0b0d16f03e30e0ae740a807bb5780.NginxMailingListEnglish@forum.nginx.org> References: <94c0b0d16f03e30e0ae740a807bb5780.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161115123759.GA2852@daoine.org> On Tue, Nov 15, 2016 at 07:04:42AM -0500, piotr.pawlowski wrote: Hi there, > location ~ /documents/(.*)static=false$ { > proxy_pass http://upstream; > } > > location /documents { > try_files $uri /test.html; > } > location = /test.html { > expires 30s; > } > > My testing URL is > domain.com/documents/23/582ad23330d63a078c967bc3/-1476160403/DUH/POI/2016-12-13/params?variable=my&var=0&many=other-variables&static=false "location" matches from the / after the domain name, to just before the first # or ? For this request, that is from /documents to /params So of these, the second is the location that nginx should use. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Nov 15 13:03:32 2016 From: nginx-forum at forum.nginx.org (piotr.pawlowski) Date: Tue, 15 Nov 2016 08:03:32 -0500 Subject: Wildcard in location directive In-Reply-To: <20161115123759.GA2852@daoine.org> References: <20161115123759.GA2852@daoine.org> Message-ID: <10352a09a65f9c1a2613ef3e4b8419a7.NginxMailingListEnglish@forum.nginx.org> OK, so do you know how to achieve my goal? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270949,270952#msg-270952 From shaunglass at gmail.com Tue Nov 15 14:11:11 2016 From: shaunglass at gmail.com (Shaun Glass) Date: Tue, 15 Nov 2016 16:11:11 +0200 Subject: Load Balance - Docker Message-ID: Good Day, We are testing DDC (Docker) and have 3 nodes each running a replica of the DTR and UCP. UCP - https://server.domain.com:444 DTR - https://server.domain.com:443 I am trying to setup load balancing with nginx but am getting no where. 
Two config files : --------------------------------------------------------------------------------------------------------------------------- upstream dtr_cluster { server 10.12.64.218:443; server 10.12.64.219:443; server 10.12.64.222:443; } server { listen 443; server_name domain.com docker-poc.domain.com; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://dtr_cluster/; proxy_redirect off; } } --------------------------------------------------------------------------------------------------------------------------- upstream ucp_cluster { server 10.12.64.218:444; server 10.12.64.219:444; server 10.12.64.222:444; } server { listen 444; server_name domain.com docker-poc.domain.com; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://dtr_cluster/; proxy_redirect off; } } --------------------------------------------------------------------------------------------------------------------------- access.log [15/Nov/2016:16:07:47 +0200] "\x16\x03\x01\x00\xC8\x01\x00\x00\xC4\x03\x03\xC7\xB3\x87\xE3'\xB8\xE7#\xC4G\xFD\xB6\xF2\xC5\x7FMC\x8Cs\xE5\x85\xACJ\x96\x0C\x0F\x90\x93\x0FQ\xCAy\x00\x00\x1A\xC0+\xC0/\xCC\xA9\xCC\xA8\xC0,\xC00\xC0" 400 173 "-" "-" "-" Nothing in error log ... but firefox just shows : Error code: SSL_ERROR_RX_RECORD_TOO_LONG Any help much appreciated. Regards Shaun -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Tue Nov 15 14:16:29 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Tue, 15 Nov 2016 17:16:29 +0300 Subject: Load Balance - Docker In-Reply-To: References: Message-ID: 2016-11-15 17:11 GMT+03:00 Shaun Glass : > proxy_redirect Where you terminate ssl? -------------- next part -------------- An HTML attachment was scrubbed... URL: From shaunglass at gmail.com Tue Nov 15 14:34:08 2016 From: shaunglass at gmail.com (Shaun Glass) Date: Tue, 15 Nov 2016 16:34:08 +0200 Subject: Load Balance - Docker In-Reply-To: References: Message-ID: Mmmm ... I gather that would be at the Docker Nodes. Just want nginx that when receiving a connection just connects to either of the 3. On Tue, Nov 15, 2016 at 4:16 PM, Yuriy Medvedev wrote: > > 2016-11-15 17:11 GMT+03:00 Shaun Glass : > >> proxy_redirect > > > Where you terminate ssl? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Tue Nov 15 14:43:34 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Tue, 15 Nov 2016 17:43:34 +0300 Subject: Load Balance - Docker In-Reply-To: References: Message-ID: Use listen 443 ssl; 2016-11-15 17:34 GMT+03:00 Shaun Glass : > Mmmm ... I gather that would be at the Docker Nodes. Just want nginx that > when receiving a connection just connects to either of the 3. > > On Tue, Nov 15, 2016 at 4:16 PM, Yuriy Medvedev > wrote: > >> >> 2016-11-15 17:11 GMT+03:00 Shaun Glass : >> >>> proxy_redirect >> >> >> Where you terminate ssl? 
>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 15 15:08:57 2016 From: nginx-forum at forum.nginx.org (Alt) Date: Tue, 15 Nov 2016 10:08:57 -0500 Subject: Are there plans for Nginx supporting HTTP/2 server push? In-Reply-To: References: Message-ID: <1bff753ae33f3cc30d01c394eaa93896.NginxMailingListEnglish@forum.nginx.org> Hello, Andrei Wrote: ------------------------------------------------------- > does not support HTTP/2 with "Server Push" (which most consider the > primary > boost in HTTP/2), however it is available in Nginx Plus (paid No, parallel requests are the primary boost of HTTP/2. Personally I'm very happy with the current implementation of HTTP/2 in nginx, I've no use of server push. Best Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269749,270959#msg-270959 From mdounin at mdounin.ru Tue Nov 15 15:24:06 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Nov 2016 18:24:06 +0300 Subject: nginx-1.11.6 Message-ID: <20161115152406.GM8196@mdounin.ru> Changes with nginx 1.11.6 15 Nov 2016 *) Change: format of the $ssl_client_s_dn and $ssl_client_i_dn variables has been changed to follow RFC 2253 (RFC 4514); values in the old format are available in the $ssl_client_s_dn_legacy and $ssl_client_i_dn_legacy variables. *) Change: when storing temporary files in a cache directory they will be stored in the same subdirectories as corresponding cache files instead of a separate subdirectory for temporary files. *) Feature: EXTERNAL authentication mechanism support in mail proxy. Thanks to Robert Norris. *) Feature: WebP support in the ngx_http_image_filter_module. *) Feature: variables support in the "proxy_method" directive. Thanks to Dmitry Lazurkin. *) Feature: the "http2_max_requests" directive in the ngx_http_v2_module. *) Feature: the "proxy_cache_max_range_offset", "fastcgi_cache_max_range_offset", "scgi_cache_max_range_offset", and "uwsgi_cache_max_range_offset" directives. *) Bugfix: graceful shutdown of old worker processes might require infinite time when using HTTP/2. *) Bugfix: in the ngx_http_mp4_module. *) Bugfix: "ignore long locked inactive cache entry" alerts might appear in logs when proxying WebSocket connections with caching enabled. *) Bugfix: nginx did not write anything to log and returned a response with code 502 instead of 504 when a timeout occurred during an SSL handshake to a backend. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Nov 15 15:54:19 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 15 Nov 2016 10:54:19 -0500 Subject: [nginx-announce] nginx-1.11.6 In-Reply-To: <20161115152410.GN8196@mdounin.ru> References: <20161115152410.GN8196@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.6 for Windows https://kevinworthington.com/nginxwin1116 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Nov 15, 2016 at 10:24 AM, Maxim Dounin wrote: > Changes with nginx 1.11.6 15 Nov > 2016 > > *) Change: format of the $ssl_client_s_dn and $ssl_client_i_dn > variables > has been changed to follow RFC 2253 (RFC 4514); values in the old > format are available in the $ssl_client_s_dn_legacy and > $ssl_client_i_dn_legacy variables. > > *) Change: when storing temporary files in a cache directory they will > be stored in the same subdirectories as corresponding cache files > instead of a separate subdirectory for temporary files. > > *) Feature: EXTERNAL authentication mechanism support in mail proxy. > Thanks to Robert Norris. > > *) Feature: WebP support in the ngx_http_image_filter_module. > > *) Feature: variables support in the "proxy_method" directive. > Thanks to Dmitry Lazurkin. > > *) Feature: the "http2_max_requests" directive in the > ngx_http_v2_module. > > *) Feature: the "proxy_cache_max_range_offset", > "fastcgi_cache_max_range_offset", "scgi_cache_max_range_offset", > and > "uwsgi_cache_max_range_offset" directives. > > *) Bugfix: graceful shutdown of old worker processes might require > infinite time when using HTTP/2. > > *) Bugfix: in the ngx_http_mp4_module. > > *) Bugfix: "ignore long locked inactive cache entry" alerts might > appear > in logs when proxying WebSocket connections with caching enabled. > > *) Bugfix: nginx did not write anything to log and returned a response > with code 502 instead of 504 when a timeout occurred during an SSL > handshake to a backend. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 15 18:07:29 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Tue, 15 Nov 2016 13:07:29 -0500 Subject: nginx-1.11.6 In-Reply-To: <20161115152406.GM8196@mdounin.ru> References: <20161115152406.GM8196@mdounin.ru> Message-ID: Hi, I build it every time a new version is available. This one didn't make it. System is Debian 8 jessie (...) /usr/local/src/nginx/nginx-1.11.6/debian/modules/nginx-upstream-fair/ngx_http_upstream_fair_module.c: In function ?ngx_http_upstream_init_fair_rr?: /usr/local/src/nginx/nginx-1.11.6/debian/modules/nginx-upstream-fair/ngx_http_upstream_fair_module.c:543:28: error: ?ngx_http_upstream_srv_conf_t? has no member named ?default_port? if (us->port == 0 && us->default_port == 0) { ^ /usr/local/src/nginx/nginx-1.11.6/debian/modules/nginx-upstream-fair/ngx_http_upstream_fair_module.c:553:51: error: ?ngx_http_upstream_srv_conf_t? has no member named ?default_port? u.port = (in_port_t) (us->port ? 
us->port : us->default_port); ^ objs/Makefile:1849: recipe for target 'objs/addon/nginx-upstream-fair/ngx_http_upstream_fair_module.o' failed make[3]: *** [objs/addon/nginx-upstream-fair/ngx_http_upstream_fair_module.o] Error 1 make[3]: Leaving directory '/usr/local/src/nginx/nginx-1.11.6/debian/build-full' Makefile:8: recipe for target 'build' failed make[2]: *** [build] Error 2 make[2]: Leaving directory '/usr/local/src/nginx/nginx-1.11.6/debian/build-full' debian/rules:144: recipe for target 'build.arch.full' failed make[1]: *** [build.arch.full] Error 2 make[1]: Leaving directory '/usr/local/src/nginx/nginx-1.11.6' debian/rules:126: recipe for target 'build' failed make: *** [build] Error 2 dpkg-buildpackage: error: debian/rules build gave error exit status 2 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270963,270969#msg-270969 From francis at daoine.org Tue Nov 15 18:31:45 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Nov 2016 18:31:45 +0000 Subject: Wildcard in location directive In-Reply-To: <10352a09a65f9c1a2613ef3e4b8419a7.NginxMailingListEnglish@forum.nginx.org> References: <20161115123759.GA2852@daoine.org> <10352a09a65f9c1a2613ef3e4b8419a7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161115183145.GB2852@daoine.org> On Tue, Nov 15, 2016 at 08:03:32AM -0500, piotr.pawlowski wrote: Hi there, > OK, so do you know how to achieve my goal? If my guess at what your goal is is correct, the following might work: == location ^~ /documents/ { if ($arg_static = false) { proxy_pass http://upstream; } try_files $uri /test.html; } location = /test.html { expires 30s; } == But if you want to do anything else, you may be better off doing the "error_page/return/location @" dance from the first example on https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/ Good luck with it, f -- Francis Daly francis at daoine.org From ru at nginx.com Tue Nov 15 18:36:26 2016 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 15 Nov 2016 21:36:26 +0300 Subject: nginx-1.11.6 In-Reply-To: References: <20161115152406.GM8196@mdounin.ru> Message-ID: <20161115183626.GE67837@lo0.su> On Tue, Nov 15, 2016 at 01:07:29PM -0500, shiz wrote: > Hi, > > I build it every time a new version is available. > > This one didn't make it. > > System is Debian 8 jessie > > (...) > > /usr/local/src/nginx/nginx-1.11.6/debian/modules/nginx-upstream-fair/ngx_http_upstream_fair_module.c: > In function ?ngx_http_upstream_init_fair_rr?: > /usr/local/src/nginx/nginx-1.11.6/debian/modules/nginx-upstream-fair/ngx_http_upstream_fair_module.c:543:28: > error: ?ngx_http_upstream_srv_conf_t? has no member named ?default_port? > if (us->port == 0 && us->default_port == 0) { > ^ > /usr/local/src/nginx/nginx-1.11.6/debian/modules/nginx-upstream-fair/ngx_http_upstream_fair_module.c:553:51: > error: ?ngx_http_upstream_srv_conf_t? has no member named ?default_port? > u.port = (in_port_t) (us->port ? 
us->port : us->default_port); > ^ > objs/Makefile:1849: recipe for target > 'objs/addon/nginx-upstream-fair/ngx_http_upstream_fair_module.o' failed > make[3]: *** > [objs/addon/nginx-upstream-fair/ngx_http_upstream_fair_module.o] Error 1 > make[3]: Leaving directory > '/usr/local/src/nginx/nginx-1.11.6/debian/build-full' > Makefile:8: recipe for target 'build' failed > make[2]: *** [build] Error 2 > make[2]: Leaving directory > '/usr/local/src/nginx/nginx-1.11.6/debian/build-full' > debian/rules:144: recipe for target 'build.arch.full' failed > make[1]: *** [build.arch.full] Error 2 > make[1]: Leaving directory '/usr/local/src/nginx/nginx-1.11.6' > debian/rules:126: recipe for target 'build' failed > make: *** [build] Error 2 > dpkg-buildpackage: error: debian/rules build gave error exit status 2 Please see these two changesets for an explanation: http://hg.nginx.org/nginx/rev/3fa5983b6b44 http://hg.nginx.org/nginx/rev/4dea01cf49e8 From nginx-forum at forum.nginx.org Tue Nov 15 19:57:04 2016 From: nginx-forum at forum.nginx.org (piotr.pawlowski) Date: Tue, 15 Nov 2016 14:57:04 -0500 Subject: Wildcard in location directive In-Reply-To: <20161115183145.GB2852@daoine.org> References: <20161115183145.GB2852@daoine.org> Message-ID: <8645eb1e596451999d5a37283173389a.NginxMailingListEnglish@forum.nginx.org> Thanks. This is exactly what I'have figured out and forgot to post here. Bottom line is that my approach, due to lack of knowledge, was wrong. static=false is a parameter which is not taken into account when NginX is making regex for 'location' block. 'Query string' is something I had to focus on. It is nicely described here: http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration . Thanks all for a help! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270949,270972#msg-270972 From nginx-forum at forum.nginx.org Tue Nov 15 21:41:14 2016 From: nginx-forum at forum.nginx.org (erankor) Date: Tue, 15 Nov 2016 16:41:14 -0500 Subject: Cancelling aio operations on Linux In-Reply-To: <6a5409e671edb87db45a5e19da0e183f.NginxMailingListEnglish@forum.nginx.org> References: <6a5409e671edb87db45a5e19da0e183f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <368139c7529483dec660affdf1edf22a.NginxMailingListEnglish@forum.nginx.org> Hi, An update on this - we found the problem happens when the number of aio contexts (defaults to 32) is exceeded. When that happens nginx falls back to using regular (synchronous) io, and for some reason this makes the kernel not send completion notification for some pending aio requests. Increasing worker_aio_requests to a larger value (we use 1024) solved the problem for us. IMHO, it would have been better if nginx would have failed the request in this case instead of falling back to regular io. Or, at least, output some message to the error log. Thanks Eran Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269366,270973#msg-270973 From nginx-forum at forum.nginx.org Wed Nov 16 01:00:17 2016 From: nginx-forum at forum.nginx.org (shiz) Date: Tue, 15 Nov 2016 20:00:17 -0500 Subject: nginx-1.11.6 In-Reply-To: <20161115183626.GE67837@lo0.su> References: <20161115183626.GE67837@lo0.su> Message-ID: Thanks for the details. I've recompiled without the nginx-upstream-fair module and all went well. It looks unmaintened and I don't really need it. Code is 8 years old. Best! 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270963,270974#msg-270974 From yongtao_you at yahoo.com Wed Nov 16 01:39:47 2016 From: yongtao_you at yahoo.com (Yongtao You) Date: Wed, 16 Nov 2016 01:39:47 +0000 (UTC) Subject: Two servers, one sign-in page? References: <1380944264.648830.1479260387806.ref@mail.yahoo.com> Message-ID: <1380944264.648830.1479260387806@mail.yahoo.com> Hi, I'm new to Nginx. I'm trying to configure Nginx as a reverse proxy to 2 backend services, as follows (see below). The backend1 server is already password protected, but backend2 is not. I'm trying to route backend2 sign-in to backend1 login page. The only thing that is not working is the following scenario: 1. Client tries to access localhost:81/some/resource without signing in first;2. Nginx redirect the client to localhost/login;3. After user signed in, it failed to redirect the user back to localhost:81/some/resource; It seems the $http_referer is empty. How can I pass the original URL to the sign-in page so that it can redirect back? Here is my configuration: http: {? ? upstream backend1 {? ? ? ? server 127.0.0.1:4000;? ? } ? ? upstream backend2 {? ? ? ? server 127.0.0.1:5000;? ? } ? ? server {? ? ? ? listen 80; ? ? ? ? location / {? ? ? ? ? ? proxy_pass http://backend1/;? ? ? ? } ? ? ? ? location = /login {? ? ? ? ? ? proxy_pass http://backend1/login;? ? ? ? ? ? ? ? ? ? ? ? ? proxy_set_header Cookie "redirect_to=$http_referer";? ? ? ? }? ? } ? ? server {? ? ? ? listen 81; ? ? ? ? location / {? ? ? ? ? ? auth_request /auth-proxy;? ? ? ? ? ? error_page 401 =302 http://localhost/login;? ? ? ? ? ? ? ? ? ? ? ? ? proxy_pass http://backend2/;? ? ? ? } ? ? ? ? location = /auth-proxy {? ? ? ? ? ? internal;? ? ? ? ? ? ? ? ? ? ? ? ? ... ...? ? ? ? }? ? }} Can someone please help? Thanks!Yongtao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 16 02:49:54 2016 From: nginx-forum at forum.nginx.org (Nanoq) Date: Tue, 15 Nov 2016 21:49:54 -0500 Subject: 502 Gateway Error - when accessing postgresql database Message-ID: <3c65b4090531a1e0206d1f8840f4c5c1.NginxMailingListEnglish@forum.nginx.org> Hi I'm building a website with nginx, uwsgi and a postgresql database. When I want to access the database with curl -i -X POST -H 'Content-Type: application/json' -d '{"username": "test", "password": "test", "email":"test"}' xxx.xxx.xxx I get the following error: HTTP/1.1 502 Bad Gateway The nginx/access.log file tells me: "POST /main/newuser HTTP/1.1" 502 173 "-" "curl/7.35.0" and the nginx/error.log file says: upstream prematurely closed connection while reading response header from upstream request: "POST /main/newuser HTTP/1.1", upstream: "uwsgi://unix:/var/www/mysite2/mysite2.sock:" Everything else, beside accessing the database seems to work. When I run the my webserver on the localhost the connection with the database works. Here my configuration files: sites-enabled/project upstream db { server xxx.xxx.xxx:5432; } server { listen 80; server_name xxx.xxx.xxx; location / { proxy_pass http://db; proxy_redirect off; include uwsgi_params; uwsgi_pass unix:/var/www/mysite2/mysite2.sock; } } the uwsgi.ini file; [uwsgi] module = wsgi master = true processes = 5 socket = mysite2.sock chmod-socket = 660 vacuum = true die-on-term = true python-path = /var/www/mysite2 Thanks in advance for your help! 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270976,270976#msg-270976 From francis at daoine.org Wed Nov 16 08:26:42 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Nov 2016 08:26:42 +0000 Subject: nginx-1.11.6 In-Reply-To: References: <20161115183626.GE67837@lo0.su> Message-ID: <20161116082642.GC2852@daoine.org> On Tue, Nov 15, 2016 at 08:00:17PM -0500, shiz wrote: Hi there, > I've recompiled without the nginx-upstream-fair module and all went well. > It looks unmaintened and I don't really need it. Code is 8 years old. "Unmaintained" possibly because it works and the nginx API that it uses had not changed in that time? I suspect that when someone that does want it takes a look, they will create a version for nginx-1.11.6 which avoids touching ->default_port and it will probably work again. Cheers, f -- Francis Daly francis at daoine.org From iippolitov at nginx.com Wed Nov 16 10:20:27 2016 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Wed, 16 Nov 2016 13:20:27 +0300 Subject: nginx-1.11.6 In-Reply-To: <20161116082642.GC2852@daoine.org> References: <20161115183626.GE67837@lo0.su> <20161116082642.GC2852@daoine.org> Message-ID: <1b4a64a1-77ce-dcfe-06f6-8f505a53f795@nginx.com> I believe, it is mostly unmaintained because of least_conn and least_time. On 16.11.2016 11:26, Francis Daly wrote: > On Tue, Nov 15, 2016 at 08:00:17PM -0500, shiz wrote: > > Hi there, > >> I've recompiled without the nginx-upstream-fair module and all went well. >> It looks unmaintened and I don't really need it. Code is 8 years old. > "Unmaintained" possibly because it works and the nginx API that it uses > had not changed in that time? > > I suspect that when someone that does want it takes a look, they will > create a version for nginx-1.11.6 which avoids touching ->default_port > and it will probably work again. > > Cheers, > > f From shaunglass at gmail.com Wed Nov 16 13:24:49 2016 From: shaunglass at gmail.com (Shaun Glass) Date: Wed, 16 Nov 2016 15:24:49 +0200 Subject: Load Balance - Docker In-Reply-To: References: Message-ID: Ok ... after some more work I have it as follow and working. I created the certificates mentioned below as well : upstream ucp_cluster { server 10.12.64.218:444; server 10.12.64.219:444; server 10.12.64.222:444; } server { listen 444 ssl; server_name docker-poc.domain.com; ssl on; ssl_certificate /etc/nginx/ssl/docker-poc.domain.com.crt; ssl_certificate_key /etc/nginx/ssl/docker-poc.domain.com.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"; ssl_prefer_server_ciphers on; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass https://dtr_cluster/; proxy_redirect off; } } On Tue, Nov 15, 2016 at 4:43 PM, Yuriy Medvedev wrote: > Use listen 443 ssl; > > 2016-11-15 17:34 GMT+03:00 Shaun Glass : > >> Mmmm ... I gather that would be at the Docker Nodes. Just want nginx that >> when receiving a connection just connects to either of the 3. >> >> On Tue, Nov 15, 2016 at 4:16 PM, Yuriy Medvedev >> wrote: >> >>> >>> 2016-11-15 17:11 GMT+03:00 Shaun Glass : >>> >>>> proxy_redirect >>> >>> >>> Where you terminate ssl? 
>>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 16 21:11:30 2016 From: nginx-forum at forum.nginx.org (rov12) Date: Wed, 16 Nov 2016 16:11:30 -0500 Subject: Connecting Nginx to LDAP/Kerberos In-Reply-To: References: Message-ID: Hi I'm curious if you made this work? I have tried combining the spnego-http-auth-nginx-module for Kerberos and the nginx-auth-ldap module for LDAP. So far without success. I am able to get the authenticated user from Kerberos using the spnego module. But I've not figured out if the nginx-auth-ldap is able to use this user id when doing the LDAP lookup. It seems to be locked on basic auth. Thanks, ?. R?nne Posted at Nginx Forum: https://forum.nginx.org/read.php?2,269482,270998#msg-270998 From nginx-forum at forum.nginx.org Wed Nov 16 22:36:21 2016 From: nginx-forum at forum.nginx.org (George) Date: Wed, 16 Nov 2016 17:36:21 -0500 Subject: nginx-1.11.6 In-Reply-To: <20161115183626.GE67837@lo0.su> References: <20161115183626.GE67837@lo0.su> Message-ID: <106f1baf2b256a78c3e26f3d18765ed9.NginxMailingListEnglish@forum.nginx.org> Yeah some other nginx modules by OpenResty ran into 1.11.6 changes * https://github.com/openresty/redis2-nginx-module/issues/41 * https://github.com/openresty/memc-nginx-module/issues/26 the workarounds * https://github.com/openresty/redis2-nginx-module/pull/42 * https://github.com/openresty/memc-nginx-module/pull/27 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270963,270999#msg-270999 From nginx-forum at forum.nginx.org Thu Nov 17 09:17:36 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Thu, 17 Nov 2016 04:17:36 -0500 Subject: set cache-control in a subfolder (php) Message-ID: Hi I'm trying to override my default cache-control in my /admin/ folder location /admin/ { add_header ?Cache-Control: no-cache?; try_files $uri =404; access_log off; } but everything have a wrong cache-age by running curl -I curl -I https://domain/admin/index.php HTTP/1.1 302 Moved Temporarily Server: nginx Date: Thu, 17 Nov 2016 09:10:02 GMT Content-Type: text/html;charset=utf-8 Connection: keep-alive Set-Cookie: PHPSESSID=hf3slpa33833dt3hl168m4sv24; expires=Fri, 18-Nov-2016 09:10:02 GMT; Max-Age=86400; path=/ Expires: Sat, 19 Nov 2016 21:10:02 GMT Cache-Control: public, max-age=216000 Last-Modified: Sat, 21 May 2016 01:04:48 GMT Location: https://domain/admin/login.php?referrer=/admin/index.php X-XSS-Protection: 1; mode=block X-Frame-Options: SAMEORIGIN X-Content-Type-Options: nosniff I've tried all the directives from http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires to try overriding this My goal is to have no-cache on all php files in this subfolder. Any one who could help me? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271004,271004#msg-271004 From francis at daoine.org Thu Nov 17 13:47:50 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 17 Nov 2016 13:47:50 +0000 Subject: set cache-control in a subfolder (php) In-Reply-To: References: Message-ID: <20161117134750.GE2852@daoine.org> On Thu, Nov 17, 2016 at 04:17:36AM -0500, JoakimR wrote: Hi there, > Hi I'm trying to override my default cache-control in my /admin/ folder One request is handled in one location. Only the config that applies in that location applies to the request. > location /admin/ { > add_header ?Cache-Control: no-cache?; > try_files $uri =404; > access_log off; > } > > but everything have a wrong cache-age by running curl -I > > curl -I https://domain/admin/index.php > HTTP/1.1 302 Moved Temporarily That suggests that the request for /admin/index.php is not handled in the location that you show. Put the config that you want in the location that is used; or perhaps make a new location for the requests that you care about, and make all of the config that matters apply in that location. > My goal is to have no-cache on all php files in this subfolder. Any one who > could help me? Populate "location ~ ^/admin/*.php$ {}", perhaps? f -- Francis Daly francis at daoine.org From agentzh at gmail.com Fri Nov 18 01:20:04 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 17 Nov 2016 17:20:04 -0800 Subject: [ANN] OpenResty 1.11.2.2 released Message-ID: Hi folks, I am excited to announce the new formal release, 1.11.2.2, of the OpenResty web platform based on NGINX and LuaJIT: https://openresty.org/en/download.html Both the (portable) source code distribution and the Win32 binary distribution are provided on this Download page. We have many highlights in this release: 1. Includes new package management tool, opm, for community contributed OpenResty packages (only Lua libraries for now, may add support for dynamic nginx C modules in the future as well). See below for more details on opm: https://github.com/openresty/opm#readme https://opm.openresty.org/ We maintain our own central package server at opm.openresty.org. As of this writing, we already have 110 successful uploads across 74 distinct package names from 26 contributors. Come on, OPM authors! The users of our official Linux packages should install the openresty-opm package like this: yum install openresty-opm or dnf install openresty-opm 2. The memory footprint of nginx worker processes is now much smaller (as compared to the previous few OpenResty releases) after traffic peak (via the new lua_malloc_trim directive of ngx_http_lua_module). See https://github.com/openresty/lua-nginx-module#lua_malloc_trim 3. Includes new Lua library lua-resty-limit-traffic, which is the Lua-land equivalent of nginx's ngx_limit_conn and ngx_limit_req modules, but much more flexible and can be used in any request or timer contexts that can yield. See below for more details: https://github.com/openresty/lua-resty-limit-traffic#readme 4. Cosockets can now set different timeout threshold values for connect, send, and read at the same time: https://github.com/openresty/lua-nginx-module#tcpsocksettimeouts 5. New split() API function that supports Perl-compatible regexes in the new ngx.re Lua module. https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/re.md#readme 6. Now it is supported for 3rd-party NGINX C modules to implement their own shm-based storage and expose Lua API to the Lua land of OpenResty. 
See https://github.com/openresty/lua-nginx-module/commit/73bd64dbc 7. The Lua 5.2 language feature extension of LuaJIT 2.1 is now enabled by default: http://luajit.org/extensions.html#lua52 The complete change log since the last (formal) release, 1.11.2.1: * feature: added new command-line utility, opm of version 0.02, for managing community contributed OpenResty packages. * change: now we enable "-DLUAJIT_ENABLE_LUA52COMPAT" in our bundled LuaJIT build by default, which can be disabled by "./configure --without-luajit-lua52". note that this change may introduce some minor backeward incompatibilities on the Lua land, see http://luajit.org/extensions.html#lua52 for more details. * win32: upgraded OpenSSL to 1.0.2j. * win32: enabled http v2, http addition, http gzip static, http sub, and several other standard nginx modules by default. * updated the help text of "./configure --help" to sync with the new nginx 1.11.2 core. * "make install": now we also create directories "/site/pod/" and "/site/manifest/". * doc: updated README-win32.md to reflect recent changes. * added new component, lua-resty-limit-traffic, which is enabled by default and can be explicitly disabled via the "--without-lua_resty_limit_traffic" option of the "./configure" script during build. * upgraded ngx_lua to 0.10.7. * feature: added a new API function "tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout)". thanks Dejiang Zhu for the patch. * feature: added public C API for 3rd-party NGINX C modules to register their own shm-based data structures for the Lua land usage (that is, to create custom siblings to lua_shared_dict). thanks helloyi and Dejiang Zhu for the patches. * feature, bugfix: added new config directive "lua_malloc_trim N" to periodically call malloc_trim(1) every "N" requests when "malloc_trim()" is available. by default, "lua_malloc_trim 1000" is configured. this should fix the glibc oddity of holding too much freed memory when it fails to use "brk()" to allocate memory in the data segment. thanks Shuxin Yang for the proposal. * bugfix: segmentation faults might happen when ngx.exec() was fed with unsafe URIs. thanks Jayce LiuChuan for the patch. * bugfix: ngx.req.set_header(): skips setting multi-value headers for bad requests to avoid segfaults. thanks Emil Stolarsky for the patch. * change: ssl_session_fetch_by_lua* and ssl_session_store_by_lua* are now only allowed in the "http {}" context. use of these session hooks in the "server {}" scope did not make much sense since server name dispatch happens *after* ssl session resumption anyway. thanks Dejiang Zhu for the patch. * optimize: optimized the lua_shared_dict node struct memory layout which can save 8 bytes for every key-value pair on 64-bit systems, for example. * doc: log level constants are also available in init_by_lua* and init_worker_by_lua* contexts. thanks kraml for the report and detailyang for the patch. * doc: documented the support of 307 status argument value in ngx.redirect(). * doc: use "*_by_lua_block {}" in all the configuration examples. thanks pj.vegan for the patch. * doc: documented how to easily test the ssl_session_fetch_by_lua* and ssl_session_store_by_lua* locally with a modern web browser. * upgraded lua-resty-core to 0.1.9. * feature: implemented the split() method in the ngx.re module. * optimize: resty.core.shdict: removed one unused top-level Lua variable. * upgraded ngx_headers_more to 0.32. * bugfix: more_set_input_headers: skips setting multi-value headers for bad requests to avoid segfaults. 
* upgraded lua-resty-redis to 0.26. * optimize: hmset: use "select" to avoid creating temporary Lua tables and to be more friendly to LuaJIT's JIT compiler. thanks spacewander for the patch. * upgraded lua-resty-dns to 0.18. * optimize: removed unused local Lua variables. thanks Thijs Schreijer for the patch. * change: stops seeding the random generator. This is user's responsibility now. thanks Thijs Schreijer for the patch. * upgraded lua-resty-mysql to 0.17. * bugfix: fixed the Lua exception "attempt to perform arithmetic on local len (a boolean value)". thanks Dmitry Kuzmin for the report. * doc: renamed the "errno" return value to "errcode" for consistency. thanks Soojin Nam for the patch. * upgraded lua-resty-websocket to 0.06. * optimize: minor code optimizations and cleanups from Aapo Talvensaari. * doc: fixed copy&paste mistakes found by rock59. * upgraded lua-resty-upload to 0.10. * feature: the "new()" method now accepts an optional 2nd arg to configure the max line size. * optimize: use the $http_content_type nginx built-in variable instead of "ngx.req.get_headers()["content-type"]". thanks Soojin Nam for the patch. * optimize: minor optimization from Will Bond. * optimize: various minor optimizations and cleanups from Soojin Nam, Will Bond, Aapo Talvensaari, and hamza. * upgraded resty-cli to 0.16. * feature: resty: forwarded more UNIX signals. thanks Zekai.Zheng for the patch. * feature: restydoc: added new option "-r DIR" for specifying a custom root directory. * feature: restydoc: added support for comment syntax, "# ...", in the "resty.index" file. * bugfix: resty: literal single quotes led to nginx configuration errors in -e option values. thanks spacewander for the report. * bugfix: restydoc-index: we did not ignore POD files in the output directory if they are also inside the input directory. * bugfix: restydoc-index: we should only ignore "pod" directories in the output directory, not the whole output directory. * bugfix: restydoc-index: we swallowed the section name right after the "Table of Contents" section (if any). * upgraded LuaJIT to v2.1-20161104: https://github.com/openresty/luajit2/tags * imported Mike Pall's latest changes: * LJ_GC64: Fix HREF for pointers. * LJ_FR2: Fix slot 1 handling. * Fix GC step size calculation. * LJ_GC64: Fix "jit.on/off". * Fix "-jp=a" mode for builtins. * ARM: Fix BLX encoding for Thumb interworking calls. * Initialize uv->immutable for upvalues of loaded chunks. * Windows/x86: Add MSVC flags for debug build with exception interop. * Fix exit status for "luajit -b". * Must preserve "J->fold.ins" (fins) around call to "lj_ir_ksimd()". * Emit bytecode in .c/.h files with unsigned char type. * Set arg table before evaluating "LUA_INIT" and "-e" chunks. * Fix for cdata vs. non-cdata arithmetics/comparisons. * Fix unused vars etc. in internal Lua files. * Properly clean up state before restart of trace assembly. * Drop leftover regs in 'for' iterator assignment, too. * MIPS: Support MIPS16 interlinking. * x64/LJ_GC64: Fix code generation for IR_KNULL call argument. * Fix PHI remarking in SINK pass. * LJ_GC64: Set correct nil value when clearing a cdata finalizer. * LJ_GC64: Ensure all IR slot fields are initialized. * LJ_GC64: Allow optional use of the system memory allocator. * Fix Valgrind suppressions. * Don't try to record outermost "pcall()" return to lower frame. * MIPS: Fix build failures and warnings. * Proper fix for LJ_GC64 changes to "asm_href()". * MIPS64, part 1: Add MIPS64 support to interpreter. 
* DynASM/MIPS: Add missing MIPS64 instructions. * x64/LJ_GC64: Fix "__call" metamethod for tailcall. * Fix collateral damage from LJ_GC64 changes to asm_href(). * Use MAP_TRYFIXED for the probing memory allocator, if available. * x86: Don't spill an explicit REF_BASE in the IR. * x64/LJ_GC64: Add missing backend support and enable JIT compilation. * LJ_FR2: Add support for trace recording and snapshots. * Embed 64 bit constants directly in the IR, using two slots. * Simplify GCtrace * reference embedding for trace stitching. * Make the IR immovable after assembly. * Add guard for obscure aliasing between open upvalues and SSA slots. * Workaround for MinGW headers lacking some exception definitions. * Remove assumption that lj_math_random_step() doesn't clobber FPRs. The HTML version of the change log with lots of helpful hyper-links can be browsed here: https://openresty.org/en/changelog-1011002.html OpenResty is a full-fledged web platform by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: https://qa.openresty.org/ Enjoy! -agentzh From nginx at 2xlp.com Fri Nov 18 18:32:42 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Fri, 18 Nov 2016 13:32:42 -0500 Subject: deployment issue (or feature request?) - multiple `include` with `log_format` Message-ID: <4F2F9A8A-952C-4C93-BB1F-F8ACC5A60B1B@2xlp.com> I'm not sure if this is a feature request or just an issue with our deployment... We host many domains, often partitioned across many configuration files (ie: sites-enabled/domain1.conf, sites-enabled/domain2.conf, sites-enabled/domain3.conf, ) An issue that has complicated our setup is the `log_format` directive. We use a few variations of a log format (different domain groups need/dont-need different data in the logs) and wanted to centrally manage them in a library of `include` files The problem is that a `log_format` by any given name can only be declared once, so our configuration can not do this: macros/log_formats/app_custom.conf log_format app_custom '$remote_addr - [$time_local][$host] $status '... sites-enabled/domain1.conf include macros/log_formats/app_custom.conf; server { } sites-enabled/domain1.conf include macros/log_formats/app_custom.conf; server { } we either have to preload the log formats in the main config, or just define a custom log_format for each domain. has anyone dealt with this in the past, and found a good solution for deployments (short of requesting an `include_once` feature)? From zxcvbn4038 at gmail.com Fri Nov 18 19:43:48 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Fri, 18 Nov 2016 14:43:48 -0500 Subject: Blocking tens of thousands of IP's In-Reply-To: <20161108231505.5468242.61961.15929@lazygranch.com> References: <74A4D440E25E6843BC8E324E67BB3E394558B28E@N060XBOXP38.kroger.com> <36432CF7-BD34-4DAC-B452-01D551114D7F@2xlp.com> <20161108225800.721012C511E4@mail.nginx.com> <20161108231505.5468242.61961.15929@lazygranch.com> Message-ID: OVH and Hetzner CIDR lists from RIPE are huge because of all the tiny subnets - however they compress down really well if you merge all the adjacent networks, you end up with a few dozen entires each. 
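Once a list is merged, the way I usually hang it off nginx is a geo block rather than a long run of deny lines - geo does a radix-tree prefix lookup instead of walking a list, so the cost per request doesn't grow with the size of the list. Rough sketch only, the include path and variable name are just placeholders:

geo $banned {
    default 0;
    include /etc/nginx/banned_cidrs.conf;   # one "10.0.0.0/8 1;" style entry per merged CIDR
}

server {
    listen 80;
    server_name example.com;

    if ($banned) {
        return 403;
    }

    location / {
        root /var/www/html;
    }
}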
Whatever set of CIDRs you are putting in a set, always merge them unless you need to know which specific range in your source list was hit, thats my advice. My experience is that both OVH and Hetzner take abuse complaints seriously and if you make the effort to contact them then they will either compel their customers to respond to you or cut them off if they can't be contacted. However when you get to the former eastern block countries and onward that doesn't happen - maybe Google Translate is just really bad at Indo-European languages, but I suspect its more cultural. On Tue, Nov 8, 2016 at 6:15 PM, wrote: > Is that 2.2 million CIDRs, or actual addresses? > > I use IPFW with tables for about 20k CIDRs. I don't see any significant > server load. It seems to me nginx has a big enough task that it makes sense > to offload the blocking to something that is more tightly integrated to the > OS. > > At a bare minimum, block OVH and Hetzner. People bash the Russians and old > Soviet block countries for hacking, but OVH and Hetzner are far worse. > > > Original Message > From: mayak > Sent: Tuesday, November 8, 2016 2:58 PM > To: nginx at nginx.org > Reply To: nginx at nginx.org > Subject: Re: Blocking tens of thousands of IP's > > On 11/08/2016 07:28 PM, Jonathan Vanasco wrote: > > On Nov 4, 2016, at 5:43 AM, mex wrote: > > > >> we do a similar thing but keep a counter within nginx (lua_shared_dict > FTW) > >> and export this stuff via /badass - location. > >> > >> although its not realtime we have a delay of 5 sec which is enough for > us > > We are blocking 2.2 million addresses, however, we do it at the > firewall/router (pfsense pfBlocker). > > Ultra fast. > > HTH > > Mayak > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Fri Nov 18 19:55:13 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Fri, 18 Nov 2016 14:55:13 -0500 Subject: Using NGINX as a forward proxy Message-ID: I know its not encouraged but I am trying to use Nginx (specifically openresty/1.11.2.1 which is based on Nginx 1.11.2) as a forward proxy. I did a quick setup based on all the examples I found in Google and tried "GET http://www.google.com/" as an example and found: This does work: location / { resolver 127.0.0.1; proxy_pass $scheme://www.google.com$request_uri$is_args$args; } This does not: location / { resolver 127.0.0.1; proxy_pass $scheme://$http_host$request_uri$is_args$args; } In the latter example it seems that using the $http_host variable is causing the problem - there is a hang for a few seconds then a 502 error is delivered. Is there any unofficial advice? I also see this module: https://github.com/chobits/ngx_http_proxy_connect_module And I really don't want to change nginx core and am concerned it'll stop being developed after being merged with tengine, but it looks like a possible solution - anyone have success with it? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 18 20:15:10 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Nov 2016 23:15:10 +0300 Subject: Using NGINX as a forward proxy In-Reply-To: References: Message-ID: <20161118201509.GF8196@mdounin.ru> Hello! 
On Fri, Nov 18, 2016 at 02:55:13PM -0500, CJ Ess wrote: > I know its not encouraged but I am trying to use Nginx (specifically > openresty/1.11.2.1 which is based on Nginx 1.11.2) as a forward proxy. > > I did a quick setup based on all the examples I found in Google and tried > "GET http://www.google.com/" as an example and found: > > This does work: > > location / { > resolver 127.0.0.1; > proxy_pass $scheme://www.google.com$request_uri$is_args$args; > } > > This does not: > > location / { > resolver 127.0.0.1; > proxy_pass $scheme://$http_host$request_uri$is_args$args; > } > > In the latter example it seems that using the $http_host variable is > causing the problem - there is a hang for a few seconds then a 502 error is > delivered. > > Is there any unofficial advice? Unofficial advice is to try looking into error log, it is expected to contain details. In this particular configuration I see at least the following problems: - $http_host is used, but Host header won't be available in case of forward proxy; $host is to be used instead, it will also contain host from the request line. - $request_uri already includes request arguments, "$is_args$args" part is not needed. -- Maxim Dounin http://nginx.org/ From zxcvbn4038 at gmail.com Fri Nov 18 21:54:53 2016 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Fri, 18 Nov 2016 16:54:53 -0500 Subject: Using NGINX as a forward proxy In-Reply-To: <20161118201509.GF8196@mdounin.ru> References: <20161118201509.GF8196@mdounin.ru> Message-ID: Thank you, very helpful! I was able to get it working with my http://www.google.com test case. it looks like my next problem is that the upstream is always contacted on port 80 regardless of the scheme or the port specified in the uri. Is there a handy variable that has the correct value (user specified or scheme default)? I looked though the vaiable descriptions but didn't see any that looked appropriate. On Fri, Nov 18, 2016 at 3:15 PM, Maxim Dounin wrote: > Hello! > > On Fri, Nov 18, 2016 at 02:55:13PM -0500, CJ Ess wrote: > > > I know its not encouraged but I am trying to use Nginx (specifically > > openresty/1.11.2.1 which is based on Nginx 1.11.2) as a forward proxy. > > > > I did a quick setup based on all the examples I found in Google and tried > > "GET http://www.google.com/" as an example and found: > > > > This does work: > > > > location / { > > resolver 127.0.0.1; > > proxy_pass $scheme://www.google.com$request_uri$is_args$args; > > } > > > > This does not: > > > > location / { > > resolver 127.0.0.1; > > proxy_pass $scheme://$http_host$request_uri$is_args$args; > > } > > > > In the latter example it seems that using the $http_host variable is > > causing the problem - there is a hang for a few seconds then a 502 error > is > > delivered. > > > > Is there any unofficial advice? > > Unofficial advice is to try looking into error log, it is expected > to contain details. > > In this particular configuration I see at least the following problems: > > - $http_host is used, but Host header won't be available in case of > forward proxy; $host is to be used instead, it will also contain > host from the request line. > > - $request_uri already includes request arguments, "$is_args$args" > part is not needed. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gerardmattison455 at gmail.com Sat Nov 19 21:08:24 2016 From: gerardmattison455 at gmail.com (Gerard Mattison) Date: Sat, 19 Nov 2016 13:08:24 -0800 Subject: Help with securing "route" cookie Message-ID: Hello all, I am using nginx with nginx-sticky-module-ng for distributing the load among servers per specific user session for my java application. One of the issue I having is that when I ran a vulnerability assessment, the "route" cookie is coming up as not secure. Attached image shows the issue. I appreciate any can help me on how to make the route cookie secure. Thanks in advance. Best Regards, Gerard *nginx configuration* upstream jetty { sticky secure; server 10.1.10.1:8080 fail_timeout=3s; server 10.1.10.2:8080 fail_timeout=3s; server 10.1.10.3:8080 fail_timeout=3s; } server { listen 80; server_name webapp.contoso.com; return 301 https://$host$request_uri; } server { listen 443 ssl; server_name webapp.contoso.com; access_log /var/log/nginx/webapp.contoso.com-access.log; error_log /var/log/nginx/webapp.contoso.com-error.log; ssl on; ssl_certificate /etc/nginx/ssl/chain.crt; ssl_certificate_key /etc/nginx/ssl/ssl.key; location / { proxy_pass http://jetty/; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_connect_timeout 90; proxy_send_timeout 180; proxy_read_timeout 180; proxy_buffer_size 128k; proxy_buffers 100 256k; proxy_busy_buffers_size 256k; proxy_intercept_errors on; } include deny_dots.conf; } -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Snap156.gif Type: image/gif Size: 16427 bytes Desc: not available URL: From yongtao_you at yahoo.com Sun Nov 20 00:18:15 2016 From: yongtao_you at yahoo.com (Yongtao You) Date: Sun, 20 Nov 2016 00:18:15 +0000 (UTC) Subject: Injecting Set-Cookie in Reverse Proxy? References: <273841972.399324.1479601095803.ref@mail.yahoo.com> Message-ID: <273841972.399324.1479601095803@mail.yahoo.com> Hi, I'm setting up a reverse proxy to my backend service as follows: server {? ? location / { ? ? ? ??auth_request /auth;? ? ? ? error_page 401 =302 /login;? ? ? ??proxy_pass http://backend/;? ? ? ? add_header Set-Cookie "my=xyz"; # Can I add this in the response from backend?? ? }} I would like to inject a Set-Cookie header in the response from the backend service. Is that possible? The above does not work. Thanks.Yongtao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongtao_you at yahoo.com Sun Nov 20 06:28:53 2016 From: yongtao_you at yahoo.com (Yongtao You) Date: Sun, 20 Nov 2016 06:28:53 +0000 (UTC) Subject: Injecting Set-Cookie in Reverse Proxy? In-Reply-To: <273841972.399324.1479601095803@mail.yahoo.com> References: <273841972.399324.1479601095803.ref@mail.yahoo.com> <273841972.399324.1479601095803@mail.yahoo.com> Message-ID: <98263948.473578.1479623333611@mail.yahoo.com> I was able to do this with the "headers more" module. Thanks!Yongtao On Saturday, November 19, 2016 4:22 PM, Yongtao You via nginx wrote: Hi, I'm setting up a reverse proxy to my backend service as follows: server {? ? location / { ? ? ? ??auth_request /auth;? ? ? ? error_page 401 =302 /login;? ? ? ??proxy_pass http://backend/;? ? ? ? add_header Set-Cookie "my=xyz"; # Can I add this in the response from backend?? ? 
}} I would like to inject a Set-Cookie header in the response from the backend service. Is that possible? The above does not work. Thanks.Yongtao _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.cox at kroger.com Sun Nov 20 14:51:24 2016 From: eric.cox at kroger.com (Cox, Eric S) Date: Sun, 20 Nov 2016 14:51:24 +0000 Subject: Custom Error Log Format Message-ID: <74A4D440E25E6843BC8E324E67BB3E39455C5438@N060XBOXP38.kroger.com> Has anyone done anything with using a lua script, third party module etc to be able to define a customer log error format? Currently the log format I believe is YYYY/MM/DD HH:MM:SS [LEVEL] PID#TID: *CID MESSAGE Parsing/rewriting this is possible with some custom scripts but I would like to do this inside of nginx and have the output the way I need to begin with. Example adding some sort of predefined delimiter between each field, etc. Thanks. ________________________________ This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain information that is confidential and protected by law from unauthorized disclosure. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Sun Nov 20 22:28:06 2016 From: alex at samad.com.au (Alex Samad) Date: Mon, 21 Nov 2016 09:28:06 +1100 Subject: Feature request ? Message-ID: Hi I do a lot of stuff with client certs, we have just moved from an inhouse RP to using NGINX. But I find that the amount of information about the client cert is very limited. compared to say squid / apache. For example I looking for end date for the client cert. It would be nice if this sort of information could be provided by env variables .. instead of me having to process the raw pem format on every request. what are other people doing ? Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From ingham.k at gmail.com Mon Nov 21 11:09:20 2016 From: ingham.k at gmail.com (Igor Karymov) Date: Mon, 21 Nov 2016 14:09:20 +0300 Subject: web socket connection time Message-ID: Hello. I'm using Nginx as WebSocket proxy and trying to set up the metric that will show me how quick gateway can establish new connections. Unfortunately, looks like when I parse logs I have only "response_time" that in the case of WebSockets will be better to call "connection time". Any ideas how can I get time between GET request and 101 response? -------------- next part -------------- An HTML attachment was scrubbed... URL: From philip.walenta at gmail.com Mon Nov 21 11:20:16 2016 From: philip.walenta at gmail.com (Philip Walenta) Date: Mon, 21 Nov 2016 05:20:16 -0600 Subject: web socket connection time In-Reply-To: References: Message-ID: Igor, Not sure if this will help, but I gather several metrics from the front end to make a determination how long back end responses take. 
Here's a snippet from my log format that might help: "upstream_server":"$upstream_addr", "req_total_time":"$request_time", "req_upstream_time":"$upstream_response_time", "upstream_conn_time":"$upstream_connect_time", "upstream_header_time":"$upstream_header_time" You can find all the variables here: http://nginx.org/en/docs/varindex.html On Mon, Nov 21, 2016 at 5:09 AM, Igor Karymov wrote: > Hello. I'm using Nginx as WebSocket proxy and trying to set up the metric > that will show me how quick gateway can establish new connections. > Unfortunately, looks like when I parse logs I have only "response_time" > that in the case of WebSockets will be better to call "connection time". > Any ideas how can I get time between GET request and 101 response? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Nov 21 14:35:20 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 21 Nov 2016 14:35:20 +0000 Subject: Help with securing "route" cookie In-Reply-To: References: Message-ID: <20161121143520.GF2852@daoine.org> On Sat, Nov 19, 2016 at 01:08:24PM -0800, Gerard Mattison wrote: Hi there, > One of the issue I having is that when I ran a vulnerability assessment, > the "route" cookie is coming up as not secure. It looks like the cookie should be secure. Is there any change that you used this browser to access this server; then reconfigured the server to add the "secure" options and reloaded the config; and then refreshed the page in the browser? If so, that would explain it -- you have to arrange that the browser removes the previous session cookie (for example, by closing the browser or just by deleting the cookie). If the browser presents a cookie, the server will not send a new one. And it is only the new one that will be marked "Secure" or not. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Nov 21 14:44:18 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 21 Nov 2016 14:44:18 +0000 Subject: Feature request ? In-Reply-To: References: Message-ID: <20161121144418.GG2852@daoine.org> On Mon, Nov 21, 2016 at 09:28:06AM +1100, Alex Samad wrote: Hi there, > But I find that the amount of information about the client cert is very > limited. compared to say squid / apache. > > For example I looking for end date for the client cert. It would be nice > if this sort of information could be provided by env variables .. instead > of me having to process the raw pem format on every request. Either nginx has to do the work to present the information for all requests for all users; or you have to do it for your use case. I suspect that what nginx currently does is mostly "what seemed useful to many people", with a bit of "someone wrote the code". For example: why do you care about the certificate end date? You should (in general) care what the result of client certificate verification is, which (I hope) includes a date check. If you have a special use case that wants the end-date for some other reason, then you get to write the code for your special case. I guess that it is possible that, if it is believed that information is generally useful, it could be auto-exposed by nginx. Possibly the reason it is not, is that no-one has asked for it. 
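As a concrete example of "write the code for your special case" - and this is only a sketch, the header name is made up - you can already hand the whole certificate to whatever sits behind nginx and let it pull out the one field it cares about:

    proxy_set_header X-Client-Cert $ssl_client_cert;

$ssl_client_cert is the PEM with every line after the first prefixed by a tab, precisely so it survives being put into a header; on the application side, undo the tabs and run something like

    openssl x509 -noout -enddate

on the result to get the notAfter date. Not pretty, but it works today without nginx having to grow a variable for every field a certificate can contain.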
Cheers, f -- Francis Daly francis at daoine.org From ingham.k at gmail.com Mon Nov 21 15:10:17 2016 From: ingham.k at gmail.com (Igor Karymov) Date: Mon, 21 Nov 2016 18:10:17 +0300 Subject: web socket connection time In-Reply-To: References: Message-ID: Thank you Philip! upstream_header_time it's exactly what I'm looking for. On Mon, Nov 21, 2016 at 2:20 PM, Philip Walenta wrote: > Igor, > > Not sure if this will help, but I gather several metrics from the front > end to make a determination how long back end responses take. > > Here's a snippet from my log format that might help: > > "upstream_server":"$upstream_addr", "req_total_time":"$request_time", > "req_upstream_time":"$upstream_response_time", "upstream_conn_time":"$upstream_connect_time", > "upstream_header_time":"$upstream_header_time" > > You can find all the variables here: > > http://nginx.org/en/docs/varindex.html > > On Mon, Nov 21, 2016 at 5:09 AM, Igor Karymov wrote: > >> Hello. I'm using Nginx as WebSocket proxy and trying to set up the metric >> that will show me how quick gateway can establish new connections. >> Unfortunately, looks like when I parse logs I have only "response_time" >> that in the case of WebSockets will be better to call "connection time". >> Any ideas how can I get time between GET request and 101 response? >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Nov 21 22:27:40 2016 From: nginx-forum at forum.nginx.org (hheiko) Date: Mon, 21 Nov 2016 17:27:40 -0500 Subject: Help with securing "route" cookie In-Reply-To: <20161121143520.GF2852@daoine.org> References: <20161121143520.GF2852@daoine.org> Message-ID: My cookie from the sticky modules comes flagged as unsecure, I can delete it and close the browser, no change. You can check it out at https://wahl.hannover-stadt.de and then check the "route" cookie. Heiko Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271052,271104#msg-271104 From francis at daoine.org Mon Nov 21 23:57:57 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 21 Nov 2016 23:57:57 +0000 Subject: Help with securing "route" cookie In-Reply-To: References: <20161121143520.GF2852@daoine.org> Message-ID: <20161121235757.GH2852@daoine.org> On Mon, Nov 21, 2016 at 05:27:40PM -0500, hheiko wrote: Hi there, > My cookie from the sticky modules comes flagged as unsecure, I can delete > it and close the browser, no change. > You can check it out at https://wahl.hannover-stadt.de and then check the > "route" cookie. $ curl -s -i https://wahl.hannover-stadt.de | grep -i '^set-cookie:\|^date:' Date: Mon, 21 Nov 2016 23:49:01 GMT Set-Cookie: route=0; Expires=Mon, 21-Nov-2016 23:54:01 GMT; Path=/ That suggests that the effective configuration you have is sticky expires=5m; If your actual configuration is something different, then it may be worth checking module and server versions for compatibility, or confirming that the config you think is running is the config that is running. Who knows; maybe there is a problem that can be fixed in the module. 
f -- Francis Daly francis at daoine.org From tjlp at sina.com Tue Nov 22 04:10:30 2016 From: tjlp at sina.com (tjlp at sina.com) Date: Tue, 22 Nov 2016 12:10:30 +0800 Subject: How to configure the nginx log to file and stdout at the same time? Message-ID: <20161122041030.99B122A00A6@webmail.sinamail.sina.com.cn> Hi, Is it possible to configure the nginx log to standard output (stdout or stderr) and log files at the same time? Thanks Liu Peng -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 22 10:21:45 2016 From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad) Date: Tue, 22 Nov 2016 05:21:45 -0500 Subject: how to read $body_bytes_sent nginx variable Message-ID: <8c8af1533df01e5461b24ec96f2ec17d.NginxMailingListEnglish@forum.nginx.org> Hi I want to read the nginx the value of nginx variable $body_bytes_sent from my fastcgi application to check how many bytes nginx had sent to client? I tried something below fastcgi_param BODY_BYTES_SENT $body_bytes_sent in my fastcgi_params and trying to read the fastcgi param value from the fastcgi application. But it always returns 0. Please help me how to read nginx variables(apart from those defined default in fastcgi_params file) from my fastcgi app and when to read? I need to know when the value of $body_bytes_sent will get populated? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271108,271108#msg-271108 From mdounin at mdounin.ru Tue Nov 22 12:52:43 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Nov 2016 15:52:43 +0300 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: <8c8af1533df01e5461b24ec96f2ec17d.NginxMailingListEnglish@forum.nginx.org> References: <8c8af1533df01e5461b24ec96f2ec17d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161122125243.GJ8196@mdounin.ru> Hello! On Tue, Nov 22, 2016 at 05:21:45AM -0500, Phani Sreenivasa Prasad wrote: > Hi > > I want to read the nginx the value of nginx variable $body_bytes_sent from > my fastcgi application to check how many bytes nginx had sent to client? > > I tried something below > > fastcgi_param BODY_BYTES_SENT $body_bytes_sent in my fastcgi_params > > and trying to read the fastcgi param value from the fastcgi application. But > it always returns 0. > > Please help me how to read nginx variables(apart from those defined default > in fastcgi_params file) from my fastcgi app and when to read? I need to know > when the value of $body_bytes_sent will get populated? The $body_bytes_sent variable represents number of bytes sent to a client. Obviously it is expected to be 0 before you've sent a response. Effectively it is only available while logging a request. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Nov 22 13:02:37 2016 From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad) Date: Tue, 22 Nov 2016 08:02:37 -0500 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: <20161122125243.GJ8196@mdounin.ru> References: <20161122125243.GJ8196@mdounin.ru> Message-ID: Hi So, how can I read this value from my fastcgi app for each request/response? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271108,271113#msg-271113 From vbart at nginx.com Tue Nov 22 13:11:08 2016 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Tue, 22 Nov 2016 16:11:08 +0300 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: References: <20161122125243.GJ8196@mdounin.ru> Message-ID: <17832503.HjjmYDoWQq@vbart-workstation> On Tuesday 22 November 2016 08:02:37 Phani Sreenivasa Prasad wrote: > Hi > > So, how can I read this value from my fastcgi app for each request/response? > [..] You can read it from the access_log file. wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Nov 22 13:22:56 2016 From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad) Date: Tue, 22 Nov 2016 08:22:56 -0500 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: <17832503.HjjmYDoWQq@vbart-workstation> References: <17832503.HjjmYDoWQq@vbart-workstation> Message-ID: <8c9d26d895dcf8e36f81aa14ce6de1ec.NginxMailingListEnglish@forum.nginx.org> on my dev setup, the logs are disabled due to memory constraint. Also the log_format directive would log many more fields into that file which I am not interested in. is there a way I can read it through fastcgi param from my fastcgi app? I see one problem here is - nginx sends all env variables as a record when the request comes initially and later on it doesnt send any more env variables after sending the response. Any solution for this? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271108,271115#msg-271115 From vbart at nginx.com Tue Nov 22 13:29:46 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 22 Nov 2016 16:29:46 +0300 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: <8c9d26d895dcf8e36f81aa14ce6de1ec.NginxMailingListEnglish@forum.nginx.org> References: <17832503.HjjmYDoWQq@vbart-workstation> <8c9d26d895dcf8e36f81aa14ce6de1ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3351356.AXrEIRqszP@vbart-workstation> On Tuesday 22 November 2016 08:22:56 Phani Sreenivasa Prasad wrote: > on my dev setup, the logs are disabled due to memory constraint. Also the > log_format directive would log many more fields into that file which I am > not interested in. > You can configure the log_format directive as you want (even to log only one variable). > is there a way I can read it through fastcgi param from my fastcgi app? I > see one problem here is - nginx sends all env variables as a record when the > request comes initially and later on it doesnt send any more env variables > after sending the response. > > Any solution for this? > That's how the FastCGI protocol works. -- wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Tue Nov 22 13:42:19 2016 From: nginx-forum at forum.nginx.org (Phani Sreenivasa Prasad) Date: Tue, 22 Nov 2016 08:42:19 -0500 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: <3351356.AXrEIRqszP@vbart-workstation> References: <3351356.AXrEIRqszP@vbart-workstation> Message-ID: no other way to read this env variable other than reading it from access_log file? is it possible to export this to all other processes running on a system? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271108,271117#msg-271117 From nginx-forum at forum.nginx.org Tue Nov 22 15:46:36 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 22 Nov 2016 10:46:36 -0500 Subject: Help with securing "route" cookie In-Reply-To: References: <20161121143520.GF2852@daoine.org> Message-ID: <8211f6b102a63ccb6835433f615005c8.NginxMailingListEnglish@forum.nginx.org> The 'secure' option is not working? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271052,271119#msg-271119 From nginx-forum at forum.nginx.org Tue Nov 22 18:45:59 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 22 Nov 2016 13:45:59 -0500 Subject: Help with securing "route" cookie In-Reply-To: <20161121235757.GH2852@daoine.org> References: <20161121235757.GH2852@daoine.org> Message-ID: Some workarounds: http://serverfault.com/questions/496749/in-nginx-reverse-proxy-how-to-set-the-secure-flag-for-cookies https://maximilian-boehm.com/hp2134/NGINX-as-Proxy-Rewrite-Set-Cookie-to-Secure-and-HttpOnly.htm Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271052,271120#msg-271120 From mikydevel at yahoo.fr Tue Nov 22 18:51:28 2016 From: mikydevel at yahoo.fr (Mik J) Date: Tue, 22 Nov 2016 18:51:28 +0000 (UTC) Subject: Reverse proxy should send server_name References: <333730741.4037174.1479840688559.ref@mail.yahoo.com> Message-ID: <333730741.4037174.1479840688559@mail.yahoo.com> Hello, I don't know how to finalise my reverse proxy setup. Client <--Internet-->Reverse_Proxy<--LAN-->Web_ServerWhen a client connects to FQDN, the request is followed to the IP address of the webserver such aslocation ^~ / { ???????? proxy_pass??????? http://10.1.1.1/service1;And it works but the request appears is if the client typed http://10.1.1.1/service1/ from the web server point of view The problem comes when some applications on the web server behind the reverse proxy wants to see the request as if the client typedhttp://service1.mydomain.org/ I would be tempted to write this on my reverse proxylocation ^~ / { ???????? proxy_pass??????? http://10.1.1.1/service1;But it wouldn't work because the request would be dns solved and not sent to 10.1.1.1 What should I write on the reverse proxy so that the IP paquet is sent to 10.1.1.1 but the HTTP GET request hits the virtual host service1.mydomain.org on the back end web server ? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Nov 22 20:55:31 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 22 Nov 2016 20:55:31 +0000 Subject: Reverse proxy should send server_name In-Reply-To: <333730741.4037174.1479840688559@mail.yahoo.com> References: <333730741.4037174.1479840688559.ref@mail.yahoo.com> <333730741.4037174.1479840688559@mail.yahoo.com> Message-ID: <20161122205531.GI2852@daoine.org> On Tue, Nov 22, 2016 at 06:51:28PM +0000, Mik J via nginx wrote: Hi there, > location ^~ / { > ???????? proxy_pass??????? http://10.1.1.1/service1;And it works but the request appears is if the client typed http://10.1.1.1/service1/ from the web server point of view > What should I write on the reverse proxy so that the IP paquet is sent to 10.1.1.1 but the HTTP GET request hits the virtual host service1.mydomain.org on the back end web server ? Either use "proxy_set_header" (http://nginx.org/r/proxy_set_header) to set Host (and consider "proxy_redirect" too); or create an "upstream" called service1.mydomain.org and "proxy_pass" to that. Note that if your "location" ends in /, you probably want your "proxy_pass" to end in / too. 
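Put together with the addresses from the question, the first option might look like this; it assumes the service1.mydomain.org virtual host on 10.1.1.1 serves the application at its root:

    location / {
        proxy_pass       http://10.1.1.1/;
        # make the backend select its service1.mydomain.org virtual host
        proxy_set_header Host service1.mydomain.org;
        # map redirects the backend issues under its own name back to this server
        proxy_redirect   http://service1.mydomain.org/ /;
    }

If the application actually lives under /service1/ on that host, keep that path in proxy_pass and keep the Host header as above.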
Cheers, f -- Francis Daly francis at daoine.org From mikydevel at yahoo.fr Tue Nov 22 21:28:51 2016 From: mikydevel at yahoo.fr (Mik J) Date: Tue, 22 Nov 2016 21:28:51 +0000 (UTC) Subject: Reverse proxy should send server_name In-Reply-To: <20161122205531.GI2852@daoine.org> References: <333730741.4037174.1479840688559.ref@mail.yahoo.com> <333730741.4037174.1479840688559@mail.yahoo.com> <20161122205531.GI2852@daoine.org> Message-ID: <623060094.4202271.1479850131198@mail.yahoo.com> Hello Francis,Thank you very much.Everything works fine. Have a nice week Le Mardi 22 novembre 2016 21h55, Francis Daly a ?crit : On Tue, Nov 22, 2016 at 06:51:28PM +0000, Mik J via nginx wrote: Hi there, > location ^~ / { > ???????? proxy_pass??????? http://10.1.1.1/service1;And it works but the request appears is if the client typed http://10.1.1.1/service1/ from the web server point of view > What should I write on the reverse proxy so that the IP paquet is sent to 10.1.1.1 but the HTTP GET request hits the virtual host service1.mydomain.org on the back end web server ? Either use "proxy_set_header" (http://nginx.org/r/proxy_set_header) to set Host (and consider "proxy_redirect" too); or create an "upstream" called service1.mydomain.org and "proxy_pass" to that. Note that if your "location" ends in /, you probably want your "proxy_pass" to end in / too. Cheers, ??? f -- Francis Daly? ? ? ? francis at daoine.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 23 09:14:23 2016 From: nginx-forum at forum.nginx.org (noci) Date: Wed, 23 Nov 2016 04:14:23 -0500 Subject: Issue with websocket behind nginx behind a haproxy SNI TLS reverse proxy Message-ID: <17b6b147ff83b609d097fdb777f26698.NginxMailingListEnglish@forum.nginx.org> Hi, I have a strange problem. Setup: Internet ---> haproxy (SNI TLS Routing) --> nginx (Webserver) --> Websocket based server (WebRTC) haproxy has no certificates, it checks the TLS Hello message for :443 traffic and then forwards to the right server based on SNI. ==> haproxy cannot alter the stream sent through. Doing a request through this pipeline to start a websocket connection looses the Upgrade & Connection setting coming from the internet. When making a request that bypasses the haproxy those header elements ARE present. Unfortunately haproxy is a requirement because of various servers being used. The only difference i can see is that in the case of haproxy the request comes from a local address (same subnet as nginx server) . I tried to follow the processing of data through haproxy but that takes a lot more time... 
Curl Request: GET /webrtc/ws?curl HTTP/1.1 Host: nc.xxxxxxx.net Accept: / Pragma: no-cache Origin: https://nc.xxxxxxx.net Accept-Encoding: gzip, deflate, sdch, br Sec-WebSocket-Version: 13 Accept-Language: en-US,en;q=0.8,nl;q=0.6 Sec-WebSocket-Key: QBKcxyaLv5Om+scMeDUbBg== User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36 Upgrade: websocket Cache-Control: no-cache Cookie: oc_sessionPassphrase=XcOZFOaPnqqbv1 Connection: Upgrade Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits DNT: 1 Parsed by nginx: 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Host: nc.xxxxxxx.net:443" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Connection: close" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Accept: /" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Pragma: no-cache" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Origin: https://nc.xxxxxxx.net" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Accept-Encoding: gzip, deflate, sdch, br" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Sec-WebSocket-Version: 13" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Accept-Language: en-US,en;q=0.8,nl;q=0.6" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Sec-WebSocket-Key: QBKcxcxxxcxcxyaLv5Om+scMeDUbBg==" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.59 Safari/537.36" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Cache-Control: no-cache" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Cookie: oc_sessionPassphrase=XcOZ9q5bYP% 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "DNT: 1" 2016/11/23 01:09:20 [debug] 25097#0: *309 http header done The UserAgent & Cookie get followed by Upgrade & Connection resp. but they are NOT seen/parsed by nginx code.... Note that when i Force the Upgrade & Connection headers on the /webrtc/ws URI (using a specific location) every thing works as designed, it is just that the Upgrade & Connection headers seem to be dropped from the incomming request. ($http_upgrade is empty). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271128,271128#msg-271128 From nginx-forum at forum.nginx.org Wed Nov 23 14:22:35 2016 From: nginx-forum at forum.nginx.org (noci) Date: Wed, 23 Nov 2016 09:22:35 -0500 Subject: how to read $body_bytes_sent nginx variable In-Reply-To: <8c9d26d895dcf8e36f81aa14ce6de1ec.NginxMailingListEnglish@forum.nginx.org> References: <17832503.HjjmYDoWQq@vbart-workstation> <8c9d26d895dcf8e36f81aa14ce6de1ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: Log through syslog to another system? If the other system isn't listening there is no harm done... (Slightly more network traffic). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271108,271130#msg-271130 From nginx-forum at forum.nginx.org Wed Nov 23 14:33:13 2016 From: nginx-forum at forum.nginx.org (noci) Date: Wed, 23 Nov 2016 09:33:13 -0500 Subject: Blocking tens of thousands of IP's In-Reply-To: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> References: <74A4D440E25E6843BC8E324E67BB3E3945589FFC@N060XBOXP38.kroger.com> Message-ID: <6a45e3a0c2562b03fde25679d6d4d6ce.NginxMailingListEnglish@forum.nginx.org> fail2ban comes to mind (ipset + iptables + logscanner). 
http://www.fail2ban.org/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270680,271131#msg-271131 From nginx-forum at forum.nginx.org Wed Nov 23 14:57:57 2016 From: nginx-forum at forum.nginx.org (noci) Date: Wed, 23 Nov 2016 09:57:57 -0500 Subject: Issue with websocket behind nginx behind a haproxy SNI TLS reverse proxy In-Reply-To: <17b6b147ff83b609d097fdb777f26698.NginxMailingListEnglish@forum.nginx.org> References: <17b6b147ff83b609d097fdb777f26698.NginxMailingListEnglish@forum.nginx.org> Message-ID: I tried both V1.10.1 and V1.11.6 same behaviour Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271128,271132#msg-271132 From nginx-forum at forum.nginx.org Wed Nov 23 18:22:42 2016 From: nginx-forum at forum.nginx.org (Sushma) Date: Wed, 23 Nov 2016 13:22:42 -0500 Subject: Nginx headers to be set at the time of request processing Message-ID: <962e6408ca2bf8e26e28dc6659b37408.NginxMailingListEnglish@forum.nginx.org> Hi all, I understand that proxy_set_header sets headers before proxying the request to the upstream server. Is there some way I can set the headers at the time of request processing itself not just before proxying to upstream servers? Thanks, Sushma Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271147,271147#msg-271147 From francis at daoine.org Wed Nov 23 20:21:46 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 23 Nov 2016 20:21:46 +0000 Subject: Nginx headers to be set at the time of request processing In-Reply-To: <962e6408ca2bf8e26e28dc6659b37408.NginxMailingListEnglish@forum.nginx.org> References: <962e6408ca2bf8e26e28dc6659b37408.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161123202146.GJ2852@daoine.org> On Wed, Nov 23, 2016 at 01:22:42PM -0500, Sushma wrote: Hi there, > I understand that proxy_set_header sets headers before proxying the request > to the upstream server. proxy_set_header adds extra http headers to the request that nginx sends to its upstream as part of the proxy_pass configuration. > Is there some way I can set the headers at the time of request processing > itself not just before proxying to upstream servers? I do not know what you mean by that question. What headers are you referring to, specifically? Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Nov 23 21:12:21 2016 From: nginx-forum at forum.nginx.org (CeeGeeDev) Date: Wed, 23 Nov 2016 16:12:21 -0500 Subject: Nginx headers to be set at the time of request processing In-Reply-To: <962e6408ca2bf8e26e28dc6659b37408.NginxMailingListEnglish@forum.nginx.org> References: <962e6408ca2bf8e26e28dc6659b37408.NginxMailingListEnglish@forum.nginx.org> Message-ID: To be clear, this is for a custom http module we wrote. We want it to know about request headers that we set in the nginx.conf (for proxy_pass etc). But it doesn't seem like the custom module receives the modified request headers. Is there any way to both a) set a request header for proxy_pass but also b) have custom http module code living inside the same nginx also process the request that has these alterations to the set of request headers? 
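One possible direction, if a third-party module is acceptable: the headers-more module's more_set_input_headers rewrites the incoming request headers themselves (r->headers_in), not just the copy that proxy_pass builds for the upstream, so handlers running in later phases should see the modified values. A sketch, assuming headers-more is compiled in; the header name is invented for the example:

    location / {
        # rewrite the *incoming* header so both our module and proxy_pass see it
        more_set_input_headers "X-Acme-Policy: strict";
        proxy_pass http://127.0.0.1:8080;   # placeholder backend
    }

Whether that fits depends on which phase the custom module hooks, since more_set_input_headers runs around the end of the rewrite phase.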
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271147,271149#msg-271149 From mdounin at mdounin.ru Thu Nov 24 12:54:27 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Nov 2016 15:54:27 +0300 Subject: Issue with websocket behind nginx behind a haproxy SNI TLS reverse proxy In-Reply-To: <17b6b147ff83b609d097fdb777f26698.NginxMailingListEnglish@forum.nginx.org> References: <17b6b147ff83b609d097fdb777f26698.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20161124125427.GR8196@mdounin.ru> Hello! On Wed, Nov 23, 2016 at 04:14:23AM -0500, noci wrote: > I have a strange problem. > > Setup: > Internet ---> haproxy (SNI TLS Routing) --> nginx (Webserver) --> Websocket > based server (WebRTC) > haproxy has no certificates, it checks the TLS Hello message for :443 > traffic and then forwards to the right server based on SNI. > ==> haproxy cannot alter the stream sent through. > > Doing a request through this pipeline to start a websocket connection looses > the Upgrade & Connection setting coming from the internet. > When making a request that bypasses the haproxy those header elements ARE > present. > Unfortunately haproxy is a requirement because of various servers being > used. [...] > Parsed by nginx: > 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Host: > nc.xxxxxxx.net:443" > 2016/11/23 01:09:20 [debug] 25097#0: *309 http header: "Connection: close" [...] >From the nginx logs provided it is clear that Update and Connection headers were removed/changed somewhere before nginx. Additionally, it looks like the Host header was changed from "nc.xxxxxxx.net" to "nc.xxxxxxx.net:443". You have to look on what happens in haproxy and/or between haproxy and nginx. A trivial thing to check is the client address as seen by nginx - make sure it belongs to haproxy and there are no additional intermediate proxies. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Nov 24 16:22:38 2016 From: nginx-forum at forum.nginx.org (noci) Date: Thu, 24 Nov 2016 11:22:38 -0500 Subject: Issue with websocket behind nginx behind a haproxy SNI TLS reverse proxy In-Reply-To: <20161124125427.GR8196@mdounin.ru> References: <20161124125427.GR8196@mdounin.ru> Message-ID: <763d2e78a16f56134710e1105c29c020.NginxMailingListEnglish@forum.nginx.org> the haproxy is conforming to the following setup: http://blog.haproxy.com/2012/04/13/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/ Look for: Choose a server using SNI: aka SSL routing No certificates available to haproxy, so no decoding and/or adding removing headers. disecting of traffic is purely based on SSL Client Hello providing an SNI. (tcp mode forwarding) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271128,271164#msg-271164 From hemanthnews at yahoo.com Fri Nov 25 11:33:02 2016 From: hemanthnews at yahoo.com (hemanthnews at yahoo.com) Date: Fri, 25 Nov 2016 17:03:02 +0530 Subject: Trouble in configuring fir REST support Message-ID: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> Hi, Following is the environment OS: CentOS 7 (64 bit) NGINX: 1.10.1 PHP/PHP-FPM:? 5.6 ZF2 Apache 2.4 Nginx configured on port-80 and apache on port-9080 I am having trouble in configuring for REST support using nginx + php + zf2 When I enter /user-rest while using APACHE (Port: 9080), I get all the user data When I try with /user-rest (nginx on port:80), I get a blank screen with Firefox and Chrome reports ?HTTP ERROR 500? 
Following is my configuration file under /etc/nginx/conf.d/pixflex_nginx.conf Would appreciate feedback / fix to support REST server { 2 listen 80 default; 3 listen 443 ssl; 4 server_name $hostname; 5 client_max_body_size 8192M; 6 client_header_timeout 300s; 7 client_body_timeout 300s; 8 fastcgi_read_timeout 300s; 9 fastcgi_buffers 16 128k; 10 fastcgi_buffer_size 256k; 11 12 #SSL Support - key & certificate location; 13 ssl_certificate /etc/pki/tls/certs/ca.crt; 14 ssl_certificate_key /etc/pki/tls/private/ca.key; 15 16 #VirtualHost for HTML Support 17 location / { 18 #root /usr/share/nginx/html; 19 limit_rate 512k; 20 limit_conn pfs 100; 21 add_header 'Access-Control-Allow-Origin' "*"; 22 add_header 'Access-Control-Allow-Credentials' 'true'; 23 add_header 'Access-Control-Allow-Headers' 'Content-Type,accept,x-wsse,origin'; 24 add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE'; 25 26 root /opt/riversilica/pixflex/install/app_server/pixflex/public; 27 index index.php index.phtml index.html index.htm; 28 try_files $uri $uri/ /index.php$is_args$args; 29 } 30 31 #error_page 404 /404.html; 32 #redirect server error pages to the static page /50x.html 33 34 #error_page 500 502 503 504 /50x.html; 35 #location = /50x.html { 36 # root /usr/share/nginx/html; 37 #} 38 39 #proxy the PHP scripts to Apache listening on 127.0.0.1:80 40 #location ~ \.php$ { 41 # proxy_pass http://127.0.0.1; 42 #} 43 44 #pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 45 location ~ \.php$ { 46 #root /usr/share/nginx/html; 47 limit_rate 512k; 48 limit_conn pfs 100; 49 50 root /opt/riversilica/pixflex/install/app_server/pixflex/public; 51 try_files $uri =404; 52 fastcgi_pass 127.0.0.1:9000; 53 fastcgi_index index.php; 54 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; 55 fastcgi_split_path_info ^(.+\.php)(/.+)$; 56 fastcgi_intercept_errors on; 57 fastcgi_read_timeout 300; 58 include fastcgi_params; 59 } 60 61 # deny access to .htaccess files, if Apache's document root 62 # concurs with nginx's one 63 location ~ /\.ht { 64 deny all; 65 } 66 } -------------------- -Best Hemanth -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Fri Nov 25 11:45:37 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 25 Nov 2016 17:15:37 +0530 Subject: Trouble in configuring fir REST support In-Reply-To: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> References: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> Message-ID: What does the error log say? On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx wrote: > Hi, > > Following is the environment > > OS: CentOS 7 (64 bit) > > NGINX: 1.10.1 > > PHP/PHP-FPM: 5.6 > > ZF2 > > Apache 2.4 > > > > Nginx configured on port-80 and apache on port-9080 > > > > I am having trouble in configuring for REST support using nginx + php + > zf2 > > When I enter /user-rest while using APACHE (Port: 9080), > I get all the user data > > When I try with /user-rest (nginx on port:80), I get a blank > screen with Firefox and Chrome reports ?HTTP ERROR 500? 
> > > > > > Following is my configuration file under /etc/nginx/conf.d/pixflex_ > nginx.conf > > Would appreciate feedback / fix to support REST > > > > server { > > 2 listen 80 default; > > 3 listen 443 ssl; > > 4 server_name $hostname; > > 5 client_max_body_size 8192M; > > 6 client_header_timeout 300s; > > 7 client_body_timeout 300s; > > 8 fastcgi_read_timeout 300s; > > 9 fastcgi_buffers 16 128k; > > 10 fastcgi_buffer_size 256k; > > 11 > > 12 #SSL Support - key & certificate location; > > 13 ssl_certificate /etc/pki/tls/certs/ca.crt; > > 14 ssl_certificate_key /etc/pki/tls/private/ca.key; > > 15 > > 16 #VirtualHost for HTML Support > > 17 location / { > > 18 #root /usr/share/nginx/html; > > 19 limit_rate 512k; > > 20 limit_conn pfs 100; > > 21 add_header 'Access-Control-Allow-Origin' "*"; > > 22 add_header 'Access-Control-Allow-Credentials' > 'true'; > > 23 add_header 'Access-Control-Allow-Headers' > 'Content-Type,accept,x-wsse,origin'; > > 24 add_header 'Access-Control-Allow-Methods' 'GET, > POST, OPTIONS, PUT, DELETE'; > > 25 > > 26 root /opt/riversilica/pixflex/ > install/app_server/pixflex/public; > > 27 index index.php index.phtml index.html index.htm; > > 28 try_files $uri $uri/ /index.php$is_args$args; > > 29 } > > 30 > > 31 #error_page 404 /404.html; > > 32 #redirect server error pages to the static page /50x.html > > 33 > > 34 #error_page 500 502 503 504 /50x.html; > > 35 #location = /50x.html { > > 36 # root /usr/share/nginx/html; > > 37 #} > > 38 > > 39 #proxy the PHP scripts to Apache listening on 127.0.0.1:80 > > 40 #location ~ \.php$ { > > 41 # proxy_pass http://127.0.0.1; > > 42 #} > > 43 > > 44 #pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 > > 45 location ~ \.php$ { > > 46 #root /usr/share/nginx/html; > > 47 limit_rate 512k; > > 48 limit_conn pfs 100; > > 49 > > 50 root /opt/riversilica/pixflex/ > install/app_server/pixflex/public; > > 51 try_files $uri =404; > > 52 fastcgi_pass 127.0.0.1:9000; > > 53 fastcgi_index index.php; > > 54 fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > 55 fastcgi_split_path_info ^(.+\.php)(/.+)$; > > 56 fastcgi_intercept_errors on; > > 57 fastcgi_read_timeout 300; > > 58 include fastcgi_params; > > 59 } > > 60 > > 61 # deny access to .htaccess files, if Apache's document root > > 62 # concurs with nginx's one > > 63 location ~ /\.ht { > > 64 deny all; > > 65 } > > 66 } > > > > > > > > > > > > -------------------- > -Best > Hemanth > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hemanthnews at yahoo.com Fri Nov 25 11:47:35 2016 From: hemanthnews at yahoo.com (hemanthnews at yahoo.com) Date: Fri, 25 Nov 2016 17:17:35 +0530 Subject: Trouble in configuring fir REST support In-Reply-To: References: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> Message-ID: <811829.68012.bm@smtp123.mail.sg3.yahoo.com> Hi Anoop, The /var/log/nginx/error.log file is empty ? -------------------- -Best Hemanth From: Anoop Alias Sent: Friday, November 25, 2016 5:15 PM To: Nginx Cc: hemanthnews at yahoo.com Subject: Re: Trouble in configuring fir REST support What does the error log say? On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx wrote: Hi, Following is the environment OS: CentOS 7 (64 bit) NGINX: 1.10.1 PHP/PHP-FPM:? 5.6 ZF2 Apache 2.4 ? Nginx configured on port-80 and apache on port-9080 ? 
I am having trouble in configuring for REST support using nginx + php + zf2 When I enter /user-rest while using APACHE (Port: 9080), I get all the user data When I try with /user-rest ?(nginx on port:80), I get a blank screen? with Firefox and Chrome reports ?HTTP ERROR 500? ? ? Following is my configuration file under /etc/nginx/conf.d/pixflex_nginx.conf Would appreciate feedback / fix to support REST ? server { ????? 2???????? listen?????? 80 default; ????? 3???????? listen?????? 443 ssl; ????? 4???????? server_name? $hostname; ????? 5???????? client_max_body_size 8192M; ????? 6???????? client_header_timeout 300s; ????? 7???????? client_body_timeout 300s; ????? 8???????? fastcgi_read_timeout 300s; ???? ?9???????? fastcgi_buffers 16 128k; ???? 10???????? fastcgi_buffer_size 256k; ???? 11 ???? 12???????? #SSL Support - key & certificate location; ???? 13???????? ssl_certificate /etc/pki/tls/certs/ca.crt; ???? 14???????? ssl_certificate_key /etc/pki/tls/private/ca.key; ???? 15 ???? 16???????? #VirtualHost for HTML Support ???? 17???????? location / { ???? 18???????????????? #root?? /usr/share/nginx/html; ???? 19???????????????? limit_rate 512k; ???? 20???????????????? limit_conn pfs 100; ???? 21???????????? ????add_header 'Access-Control-Allow-Origin' "*"; ???? 22???????????????? add_header 'Access-Control-Allow-Credentials' 'true'; ???? 23???????????????? add_header 'Access-Control-Allow-Headers' 'Content-Type,accept,x-wsse,origin'; ???? 24???????????????? add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE'; ???? 25 ???? 26???????????????? root /opt/riversilica/pixflex/install/app_server/pixflex/public; ???? 27???????????????? index? index.php index.phtml index.html index.htm; ???? 28 ????????????????try_files $uri $uri/ /index.php$is_args$args; ???? 29????????? } ???? 30 ?????31???????? #error_page? 404????????????? /404.html; ???? 32???????? #redirect server error pages to the static page /50x.html ???? 33 ???? 34???????? #error_page?? 500 502 503 504? /50x.html; ???? 35???????? #location = /50x.html { ???? 36???????? #?????? root?? /usr/share/nginx/html; ???? 37???????? #} ???? 38 ???? 39???????? #proxy the PHP scripts to Apache listening on 127.0.0.1:80 ? ???40???????? #location ~ \.php$ { ???? 41???????? #??? proxy_pass?? http://127.0.0.1; ???? 42???????? #} ???? 43 ???? 44???????? #pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 ???? 45???????? location ~ \.php$ { ???? 46?????????????? ??#root?? /usr/share/nginx/html; ???? 47???????????????? limit_rate 512k; ???? 48???????????????? limit_conn pfs 100; ???? 49 ???? 50???????????????? root /opt/riversilica/pixflex/install/app_server/pixflex/public; ???? 51???????????????? try_files $uri =404; ???? 52???????????????? fastcgi_pass?? 127.0.0.1:9000; ???? 53???????????????? fastcgi_index? index.php; ???? 54???????????????? fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; ???? 55???????????????? fastcgi_split_path_info ^(.+\.php)(/.+)$; ?????56???????????????? fastcgi_intercept_errors on; ???? 57???????????????? fastcgi_read_timeout 300; ???? 58???????????????? include fastcgi_params; ???? 59???????? } ???? 60 ???? 61???????? # deny access to .htaccess files, if Apache's document root ???? 62???????? # concurs with nginx's one ???? 63???????? location ~ /\.ht { ???? 64???????????????? deny? all; ???? 65???????? } ???? 66 } ? ? ? ? ? -------------------- -Best Hemanth ? 
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Anoop P Alias? -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Fri Nov 25 11:53:44 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 25 Nov 2016 17:23:44 +0530 Subject: Trouble in configuring fir REST support In-Reply-To: <811829.68012.bm@smtp123.mail.sg3.yahoo.com> References: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> <811829.68012.bm@smtp123.mail.sg3.yahoo.com> Message-ID: You can put a phpinfo page and see if that works. I am not sure why you mention apache as you are not proxy passing Also while not related to the error Try root /opt/riversilica/pixflex/install/app_server/pixflex/public; in the server {} block instead of location / and repeating in php . Read nginx pitfalls for a better understanding of why this is good. On Fri, Nov 25, 2016 at 5:17 PM, wrote: > Hi Anoop, > > The /var/log/nginx/error.log file is empty ? > > > > -------------------- > -Best > Hemanth > > > > *From: *Anoop Alias > *Sent: *Friday, November 25, 2016 5:15 PM > *To: *Nginx > *Cc: *hemanthnews at yahoo.com > *Subject: *Re: Trouble in configuring fir REST support > > > > What does the error log say? > > > > On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx > wrote: > > Hi, > > Following is the environment > > OS: CentOS 7 (64 bit) > > NGINX: 1.10.1 > > PHP/PHP-FPM: 5.6 > > ZF2 > > Apache 2.4 > > > > Nginx configured on port-80 and apache on port-9080 > > > > I am having trouble in configuring for REST support using nginx + php + > zf2 > > When I enter /user-rest while using APACHE (Port: 9080), > I get all the user data > > When I try with /user-rest (nginx on port:80), I get a blank > screen with Firefox and Chrome reports ?HTTP ERROR 500? 
> > > > > > Following is my configuration file under /etc/nginx/conf.d/pixflex_ > nginx.conf > > Would appreciate feedback / fix to support REST > > > > server { > > 2 listen 80 default; > > 3 listen 443 ssl; > > 4 server_name $hostname; > > 5 client_max_body_size 8192M; > > 6 client_header_timeout 300s; > > 7 client_body_timeout 300s; > > 8 fastcgi_read_timeout 300s; > > 9 fastcgi_buffers 16 128k; > > 10 fastcgi_buffer_size 256k; > > 11 > > 12 #SSL Support - key & certificate location; > > 13 ssl_certificate /etc/pki/tls/certs/ca.crt; > > 14 ssl_certificate_key /etc/pki/tls/private/ca.key; > > 15 > > 16 #VirtualHost for HTML Support > > 17 location / { > > 18 #root /usr/share/nginx/html; > > 19 limit_rate 512k; > > 20 limit_conn pfs 100; > > 21 add_header 'Access-Control-Allow-Origin' "*"; > > 22 add_header 'Access-Control-Allow-Credentials' > 'true'; > > 23 add_header 'Access-Control-Allow-Headers' > 'Content-Type,accept,x-wsse,origin'; > > 24 add_header 'Access-Control-Allow-Methods' 'GET, > POST, OPTIONS, PUT, DELETE'; > > 25 > > 26 root /opt/riversilica/pixflex/ > install/app_server/pixflex/public; > > 27 index index.php index.phtml index.html index.htm; > > 28 try_files $uri $uri/ /index.php$is_args$args; > > 29 } > > 30 > > 31 #error_page 404 /404.html; > > 32 #redirect server error pages to the static page /50x.html > > 33 > > 34 #error_page 500 502 503 504 /50x.html; > > 35 #location = /50x.html { > > 36 # root /usr/share/nginx/html; > > 37 #} > > 38 > > 39 #proxy the PHP scripts to Apache listening on 127.0.0.1:80 > > 40 #location ~ \.php$ { > > 41 # proxy_pass http://127.0.0.1; > > 42 #} > > 43 > > 44 #pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 > > 45 location ~ \.php$ { > > 46 #root /usr/share/nginx/html; > > 47 limit_rate 512k; > > 48 limit_conn pfs 100; > > 49 > > 50 root /opt/riversilica/pixflex/ > install/app_server/pixflex/public; > > 51 try_files $uri =404; > > 52 fastcgi_pass 127.0.0.1:9000; > > 53 fastcgi_index index.php; > > 54 fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > 55 fastcgi_split_path_info ^(.+\.php)(/.+)$; > > 56 fastcgi_intercept_errors on; > > 57 fastcgi_read_timeout 300; > > 58 include fastcgi_params; > > 59 } > > 60 > > 61 # deny access to .htaccess files, if Apache's document root > > 62 # concurs with nginx's one > > 63 location ~ /\.ht { > > 64 deny all; > > 65 } > > 66 } > > > > > > > > > > > > -------------------- > -Best > Hemanth > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > *Anoop P Alias* > > > > > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hemanthnews at yahoo.com Fri Nov 25 12:05:51 2016 From: hemanthnews at yahoo.com (hemanthnews at yahoo.com) Date: Fri, 25 Nov 2016 17:35:51 +0530 Subject: Trouble in configuring fir REST support In-Reply-To: References: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> <811829.68012.bm@smtp123.mail.sg3.yahoo.com> Message-ID: <850010.34753.bm@smtp148.mail.sg3.yahoo.com> Hi Anoop, Phpinfo() is working fine ? is there something to look for specifically? I need to move from APACHE to NGINX .. so as a back-up, APACHE has been configured to work on 9080 port. Once NGINX works, APACHE will be removed. Thanks for pointer on ?in the server {} block instead of location / ?and repeating in php . Read nginx pitfalls for a better understanding of why this is good.? ? will look into this. 
I guess this is nothing to do with the problem -------------------- -Best Hemanth From: Anoop Alias Sent: Friday, November 25, 2016 5:23 PM To: hemanthnews at yahoo.com Cc: Nginx Subject: Re: Trouble in configuring fir REST support You can put a phpinfo?page and see if that works.? I am not sure why you mention apache as you are not proxy passing Also while not related to the error ?Try? root /opt/riversilica/pixflex/install/app_server/pixflex/public;? in the server {} block instead of location / ?and repeating in php . Read nginx pitfalls for a better understanding of why this is good. On Fri, Nov 25, 2016 at 5:17 PM, wrote: Hi Anoop, The /var/log/nginx/error.log file is empty ? ? -------------------- -Best Hemanth ? From: Anoop Alias Sent: Friday, November 25, 2016 5:15 PM To: Nginx Cc: hemanthnews at yahoo.com Subject: Re: Trouble in configuring fir REST support ? What does the error log say? ? On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx wrote: Hi, Following is the environment OS: CentOS 7 (64 bit) NGINX: 1.10.1 PHP/PHP-FPM:? 5.6 ZF2 Apache 2.4 ? Nginx configured on port-80 and apache on port-9080 ? I am having trouble in configuring for REST support using nginx + php + zf2 When I enter /user-rest while using APACHE (Port: 9080), I get all the user data When I try with /user-rest ?(nginx on port:80), I get a blank screen? with Firefox and Chrome reports ?HTTP ERROR 500? ? ? Following is my configuration file under /etc/nginx/conf.d/pixflex_nginx.conf Would appreciate feedback / fix to support REST ? server { ????? 2???????? listen?????? 80 default; ????? 3???????? listen?????? 443 ssl; ????? 4???????? server_name? $hostname; ????? 5???????? client_max_body_size 8192M; ????? 6???????? client_header_timeout 300s; ????? 7???????? client_body_timeout 300s; ????? 8???????? fastcgi_read_timeout 300s; ???? ?9???????? fastcgi_buffers 16 128k; ???? 10???????? fastcgi_buffer_size 256k; ???? 11 ???? 12???????? #SSL Support - key & certificate location; ???? 13???????? ssl_certificate /etc/pki/tls/certs/ca.crt; ???? 14???????? ssl_certificate_key /etc/pki/tls/private/ca.key; ???? 15 ???? 16???????? #VirtualHost for HTML Support ???? 17???????? location / { ???? 18???????????????? #root?? /usr/share/nginx/html; ???? 19???????????????? limit_rate 512k; ???? 20???????????????? limit_conn pfs 100; ???? 21???????????? ????add_header 'Access-Control-Allow-Origin' "*"; ???? 22???????????????? add_header 'Access-Control-Allow-Credentials' 'true'; ???? 23???????????????? add_header 'Access-Control-Allow-Headers' 'Content-Type,accept,x-wsse,origin'; ???? 24???????????????? add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE'; ???? 25 ???? 26???????????????? root /opt/riversilica/pixflex/install/app_server/pixflex/public; ???? 27???????????????? index? index.php index.phtml index.html index.htm; ???? 28 ????????????????try_files $uri $uri/ /index.php$is_args$args; ???? 29????????? } ???? 30 ?????31???????? #error_page? 404????????????? /404.html; ???? 32???????? #redirect server error pages to the static page /50x.html ???? 33 ???? 34???????? #error_page?? 500 502 503 504? /50x.html; ???? 35???????? #location = /50x.html { ???? 36???????? #?????? root?? /usr/share/nginx/html; ???? 37???????? #} ???? 38 ???? 39???????? #proxy the PHP scripts to Apache listening on 127.0.0.1:80 ? ???40???????? #location ~ \.php$ { ???? 41???????? #??? proxy_pass?? http://127.0.0.1; ???? 42???????? #} ???? 43 ???? 44???????? 
#pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 ???? 45???????? location ~ \.php$ { ???? 46?????????????? ??#root?? /usr/share/nginx/html; ???? 47???????????????? limit_rate 512k; ???? 48???????????????? limit_conn pfs 100; ???? 49 ???? 50???????????????? root /opt/riversilica/pixflex/install/app_server/pixflex/public; ???? 51???????????????? try_files $uri =404; ???? 52???????????????? fastcgi_pass?? 127.0.0.1:9000; ???? 53???????????????? fastcgi_index? index.php; ???? 54???????????????? fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; ???? 55???????????????? fastcgi_split_path_info ^(.+\.php)(/.+)$; ?????56???????????????? fastcgi_intercept_errors on; ???? 57???????????????? fastcgi_read_timeout 300; ???? 58???????????????? include fastcgi_params; ???? 59???????? } ???? 60 ???? 61???????? # deny access to .htaccess files, if Apache's document root ???? 62???????? # concurs with nginx's one ???? 63???????? location ~ /\.ht { ???? 64???????????????? deny? all; ???? 65???????? } ???? 66 } ? ? ? ? ? -------------------- -Best Hemanth ? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx ? -- Anoop P Alias? ? ? -- Anoop P Alias? -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Fri Nov 25 12:14:33 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 25 Nov 2016 17:44:33 +0530 Subject: Trouble in configuring fir REST support In-Reply-To: <850010.34753.bm@smtp148.mail.sg3.yahoo.com> References: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> <811829.68012.bm@smtp123.mail.sg3.yahoo.com> <850010.34753.bm@smtp148.mail.sg3.yahoo.com> Message-ID: yes nothing to do with the problem at hand. You can also try to execute the index.php using the php (cli) and see if if there is an error etc. probably turn on display_error in php.ini . Its strange nginx is not logging any errors. Since the phpinfo page is working..this is more or less a problem with php than nginx I think. Good luck. On Fri, Nov 25, 2016 at 5:35 PM, wrote: > Hi Anoop, > > Phpinfo() is working fine ? is there something to look for specifically? > > > > I need to move from APACHE to NGINX .. so as a back-up, APACHE has been > configured to work on 9080 port. Once NGINX works, APACHE will be removed. > > > > Thanks for pointer on ?in the server {} block instead of location / and > repeating in php . Read nginx pitfalls for a better understanding of why > this is good.? ? will look into this. I guess this is nothing to do with > the problem > > > > -------------------- > -Best > Hemanth > > > > *From: *Anoop Alias > *Sent: *Friday, November 25, 2016 5:23 PM > *To: *hemanthnews at yahoo.com > *Cc: *Nginx > > *Subject: *Re: Trouble in configuring fir REST support > > > > You can put a phpinfo page and see if that works. > > I am not sure why you mention apache as you are not proxy passing > > > > Also while not related to the error Try > > > > root /opt/riversilica/pixflex/install/app_server/pixflex/public; > > > > in the server {} block instead of location / and repeating in php . Read > nginx pitfalls for a better understanding of why this is good. > > > > > > > > > > On Fri, Nov 25, 2016 at 5:17 PM, wrote: > > Hi Anoop, > > The /var/log/nginx/error.log file is empty ? 
> > > > -------------------- > -Best > Hemanth > > > > *From: *Anoop Alias > *Sent: *Friday, November 25, 2016 5:15 PM > *To: *Nginx > *Cc: *hemanthnews at yahoo.com > *Subject: *Re: Trouble in configuring fir REST support > > > > What does the error log say? > > > > On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx > wrote: > > Hi, > > Following is the environment > > OS: CentOS 7 (64 bit) > > NGINX: 1.10.1 > > PHP/PHP-FPM: 5.6 > > ZF2 > > Apache 2.4 > > > > Nginx configured on port-80 and apache on port-9080 > > > > I am having trouble in configuring for REST support using nginx + php + > zf2 > > When I enter /user-rest while using APACHE (Port: 9080), > I get all the user data > > When I try with /user-rest (nginx on port:80), I get a blank > screen with Firefox and Chrome reports ?HTTP ERROR 500? > > > > > > Following is my configuration file under /etc/nginx/conf.d/pixflex_ > nginx.conf > > Would appreciate feedback / fix to support REST > > > > server { > > 2 listen 80 default; > > 3 listen 443 ssl; > > 4 server_name $hostname; > > 5 client_max_body_size 8192M; > > 6 client_header_timeout 300s; > > 7 client_body_timeout 300s; > > 8 fastcgi_read_timeout 300s; > > 9 fastcgi_buffers 16 128k; > > 10 fastcgi_buffer_size 256k; > > 11 > > 12 #SSL Support - key & certificate location; > > 13 ssl_certificate /etc/pki/tls/certs/ca.crt; > > 14 ssl_certificate_key /etc/pki/tls/private/ca.key; > > 15 > > 16 #VirtualHost for HTML Support > > 17 location / { > > 18 #root /usr/share/nginx/html; > > 19 limit_rate 512k; > > 20 limit_conn pfs 100; > > 21 add_header 'Access-Control-Allow-Origin' "*"; > > 22 add_header 'Access-Control-Allow-Credentials' > 'true'; > > 23 add_header 'Access-Control-Allow-Headers' > 'Content-Type,accept,x-wsse,origin'; > > 24 add_header 'Access-Control-Allow-Methods' 'GET, > POST, OPTIONS, PUT, DELETE'; > > 25 > > 26 root /opt/riversilica/pixflex/ > install/app_server/pixflex/public; > > 27 index index.php index.phtml index.html index.htm; > > 28 try_files $uri $uri/ /index.php$is_args$args; > > 29 } > > 30 > > 31 #error_page 404 /404.html; > > 32 #redirect server error pages to the static page /50x.html > > 33 > > 34 #error_page 500 502 503 504 /50x.html; > > 35 #location = /50x.html { > > 36 # root /usr/share/nginx/html; > > 37 #} > > 38 > > 39 #proxy the PHP scripts to Apache listening on 127.0.0.1:80 > > 40 #location ~ \.php$ { > > 41 # proxy_pass http://127.0.0.1; > > 42 #} > > 43 > > 44 #pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 > > 45 location ~ \.php$ { > > 46 #root /usr/share/nginx/html; > > 47 limit_rate 512k; > > 48 limit_conn pfs 100; > > 49 > > 50 root /opt/riversilica/pixflex/ > install/app_server/pixflex/public; > > 51 try_files $uri =404; > > 52 fastcgi_pass 127.0.0.1:9000; > > 53 fastcgi_index index.php; > > 54 fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > 55 fastcgi_split_path_info ^(.+\.php)(/.+)$; > > 56 fastcgi_intercept_errors on; > > 57 fastcgi_read_timeout 300; > > 58 include fastcgi_params; > > 59 } > > 60 > > 61 # deny access to .htaccess files, if Apache's document root > > 62 # concurs with nginx's one > > 63 location ~ /\.ht { > > 64 deny all; > > 65 } > > 66 } > > > > > > > > > > > > -------------------- > -Best > Hemanth > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > *Anoop P Alias* > > > > > > > > > > -- > > *Anoop P Alias* > > > > > -- *Anoop P Alias* 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From hemanthnews at yahoo.com Mon Nov 28 04:23:01 2016 From: hemanthnews at yahoo.com (hemanthnews at yahoo.com) Date: Mon, 28 Nov 2016 09:53:01 +0530 Subject: Trouble in configuring fir REST support In-Reply-To: References: <31770.61596.bm@smtp141.mail.sg3.yahoo.com> <811829.68012.bm@smtp123.mail.sg3.yahoo.com> <850010.34753.bm@smtp148.mail.sg3.yahoo.com> Message-ID: <954871.3869.bm@smtp118.mail.sg3.yahoo.com> Hi Anoop, Let me look into the php config and see anything comes up -------------------- -Best Hemanth From: Anoop Alias Sent: Friday, November 25, 2016 5:44 PM To: hemanthnews at yahoo.com Cc: Nginx Subject: Re: Trouble in configuring fir REST support yes nothing to do with the problem at hand. You can also try to execute the index.php using the php (cli) and see if if there is an error etc. probably turn on display_error in php.ini . Its strange nginx is not logging any errors. Since the phpinfo page is working..this is more or less a problem with php than nginx I think. Good luck. On Fri, Nov 25, 2016 at 5:35 PM, wrote: Hi Anoop, Phpinfo() is working fine ? is there something to look ?for specifically? ? I need to move from APACHE to NGINX .. so as a back-up, APACHE has been configured to work on 9080 port. Once NGINX works, APACHE will be removed. ? Thanks for pointer on ?in the server {} block instead of location / ?and repeating in php . Read nginx pitfalls for a better understanding of why this is good.? ? will look into this. I guess this is nothing to do with the problem ? -------------------- -Best Hemanth ? From: Anoop Alias Sent: Friday, November 25, 2016 5:23 PM To: hemanthnews at yahoo.com Cc: Nginx Subject: Re: Trouble in configuring fir REST support ? You can put a phpinfo?page and see if that works.? I am not sure why you mention apache as you are not proxy passing ? Also while not related to the error ?Try? ? root /opt/riversilica/pixflex/install/app_server/pixflex/public;? ? in the server {} block instead of location / ?and repeating in php . Read nginx pitfalls for a better understanding of why this is good. ? ? ? ? On Fri, Nov 25, 2016 at 5:17 PM, wrote: Hi Anoop, The /var/log/nginx/error.log file is empty ? ? -------------------- -Best Hemanth ? From: Anoop Alias Sent: Friday, November 25, 2016 5:15 PM To: Nginx Cc: hemanthnews at yahoo.com Subject: Re: Trouble in configuring fir REST support ? What does the error log say? ? On Fri, Nov 25, 2016 at 5:03 PM, Hemanthnews via nginx wrote: Hi, Following is the environment OS: CentOS 7 (64 bit) NGINX: 1.10.1 PHP/PHP-FPM:? 5.6 ZF2 Apache 2.4 ? Nginx configured on port-80 and apache on port-9080 ? I am having trouble in configuring for REST support using nginx + php + zf2 When I enter /user-rest while using APACHE (Port: 9080), I get all the user data When I try with /user-rest ?(nginx on port:80), I get a blank screen? with Firefox and Chrome reports ?HTTP ERROR 500? ? ? Following is my configuration file under /etc/nginx/conf.d/pixflex_nginx.conf Would appreciate feedback / fix to support REST ? server { ????? 2???????? listen?????? 80 default; ????? 3???????? listen?????? 443 ssl; ????? 4???????? server_name? $hostname; ????? 5???????? client_max_body_size 8192M; ????? 6???????? client_header_timeout 300s; ????? 7???????? client_body_timeout 300s; ????? 8???????? fastcgi_read_timeout 300s; ???? ?9???????? fastcgi_buffers 16 128k; ???? 10???????? fastcgi_buffer_size 256k; ???? 11 ???? 12???????? 
#SSL Support - key & certificate location; ???? 13???????? ssl_certificate /etc/pki/tls/certs/ca.crt; ???? 14???????? ssl_certificate_key /etc/pki/tls/private/ca.key; ???? 15 ???? 16???????? #VirtualHost for HTML Support ???? 17???????? location / { ???? 18???????????????? #root?? /usr/share/nginx/html; ???? 19???????????????? limit_rate 512k; ???? 20???????????????? limit_conn pfs 100; ???? 21???????????? ????add_header 'Access-Control-Allow-Origin' "*"; ???? 22???????????????? add_header 'Access-Control-Allow-Credentials' 'true'; ???? 23???????????????? add_header 'Access-Control-Allow-Headers' 'Content-Type,accept,x-wsse,origin'; ???? 24???????????????? add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE'; ???? 25 ???? 26???????????????? root /opt/riversilica/pixflex/install/app_server/pixflex/public; ???? 27???????????????? index? index.php index.phtml index.html index.htm; ???? 28 ????????????????try_files $uri $uri/ /index.php$is_args$args; ???? 29????????? } ???? 30 ?????31???????? #error_page? 404????????????? /404.html; ???? 32???????? #redirect server error pages to the static page /50x.html ???? 33 ???? 34???????? #error_page?? 500 502 503 504? /50x.html; ???? 35???????? #location = /50x.html { ???? 36???????? #?????? root?? /usr/share/nginx/html; ???? 37???????? #} ???? 38 ???? 39???????? #proxy the PHP scripts to Apache listening on 127.0.0.1:80 ? ???40???????? #location ~ \.php$ { ???? 41???????? #??? proxy_pass?? http://127.0.0.1; ???? 42???????? #} ???? 43 ???? 44???????? #pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 ???? 45???????? location ~ \.php$ { ???? 46?????????????? ??#root?? /usr/share/nginx/html; ???? 47???????????????? limit_rate 512k; ???? 48???????????????? limit_conn pfs 100; ???? 49 ???? 50???????????????? root /opt/riversilica/pixflex/install/app_server/pixflex/public; ???? 51???????????????? try_files $uri =404; ???? 52???????????????? fastcgi_pass?? 127.0.0.1:9000; ???? 53???????????????? fastcgi_index? index.php; ???? 54???????????????? fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; ???? 55???????????????? fastcgi_split_path_info ^(.+\.php)(/.+)$; ?????56???????????????? fastcgi_intercept_errors on; ???? 57???????????????? fastcgi_read_timeout 300; ???? 58???????????????? include fastcgi_params; ???? 59???????? } ???? 60 ???? 61???????? # deny access to .htaccess files, if Apache's document root ???? 62???????? # concurs with nginx's one ???? 63???????? location ~ /\.ht { ???? 64???????????????? deny? all; ???? 65???????? } ???? 66 } ? ? ? ? ? -------------------- -Best Hemanth ? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx ? -- Anoop P Alias? ? ? ? -- Anoop P Alias? ? ? -- Anoop P Alias? -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Mon Nov 28 19:37:35 2016 From: steve at greengecko.co.nz (steve) Date: Tue, 29 Nov 2016 08:37:35 +1300 Subject: SNI and certs. Message-ID: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> This is more a philosophical than technical question I suppose. Is it necessary to purchase a SSL cert for all domains sharing an IP address if one of them uses https? It seems that search engines are probing https: even for sites that don't offer it, just because it's available for others, with the end result that pages are being attributed to the wrong site. 
To me, it seems to be a problem generated by the search engine spiders themselves ( or just a further push to force all content to https: ), and the only ways I can see of properly addressing this are - dedicate one IP address to SNI / https sites, and one to http: only - purchase a SSL cert for all sites, even if only used to forward https to http. Doesn anyone have a better solution ( nginx of course! ) Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From luky-37 at hotmail.com Mon Nov 28 20:55:30 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 28 Nov 2016 20:55:30 +0000 Subject: AW: SNI and certs. In-Reply-To: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> Message-ID: > It seems that search engines are probing https: even for sites that > don't offer it Which is fine. > just because it's available for others, with the end > result that pages are being attributed to the wrong site. Sounds like an assumption. Any real life experience and evidence backing this? Sounds simply enough to drop the HTTPS request if the certificate doesn't match the hostname. Every standard wget/curl/lynx application drops the TLS session by default in this case, I don't see why a crawler wouldn't. > Does anyone have a better solution ( nginx of course! ) If this is a real problem (which I doubt), I guess you could just serve a 403 Forbidden from the default hosts. Lukas From jeff.dyke at gmail.com Mon Nov 28 21:07:59 2016 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Mon, 28 Nov 2016 16:07:59 -0500 Subject: SNI and certs. In-Reply-To: References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> Message-ID: Just a personal preference, but i put an https version in front of all sites(and redirect 80 to 443) and keep the certs up to date for free with lets-encrypt/certbot (i have nothing to do with the company), with SNI, one IP. This is simple as I keep the nginx configurations up to date with a configuration management tool (saltstack in my case). That's my philosophy on 80 vs 443 and a mixed case, i like the consistency in my configuration and the ability to maintain groups of configuration types based on site needs. And you do get a small SEO boost for being https forward. Jeff On Mon, Nov 28, 2016 at 3:55 PM, Lukas Tribus wrote: > > It seems that search engines are probing https: even for sites that > > don't offer it > > Which is fine. > > > > > just because it's available for others, with the end > > result that pages are being attributed to the wrong site. > > Sounds like an assumption. Any real life experience and > evidence backing this? > > Sounds simply enough to drop the HTTPS request if the > certificate doesn't match the hostname. > > Every standard wget/curl/lynx application drops the TLS session > by default in this case, I don't see why a crawler wouldn't. > > > > > Does anyone have a better solution ( nginx of course! ) > > If this is a real problem (which I doubt), I guess you could just > serve a 403 Forbidden from the default hosts. > > > Lukas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
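The 80-to-443 half of that setup only needs a couple of lines; a sketch, with a placeholder server name:

    server {
        listen 80;
        server_name example.com;    # placeholder
        # send every plain-HTTP request to the HTTPS version of the same host
        return 301 https://$host$request_uri;
    }

The certificates themselves then only ever appear in the listen 443 ssl blocks.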
URL: From nginx-forum at forum.nginx.org Mon Nov 28 23:56:29 2016 From: nginx-forum at forum.nginx.org (CeeGeeDev) Date: Mon, 28 Nov 2016 18:56:29 -0500 Subject: Nginx headers to be set at the time of request processing In-Reply-To: <20161123202146.GJ2852@daoine.org> References: <20161123202146.GJ2852@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Wed, Nov 23, 2016 at 01:22:42PM -0500, Sushma wrote: > > Hi there, > > > I understand that proxy_set_header sets headers before proxying the > > request to the upstream server. > > proxy_set_header adds extra http headers to the request that nginx > sends to its upstream as part of the proxy_pass configuration. > > > Is there some way I can set the headers at the time of request > > processing itself not just before proxying to upstream servers? > > I do not know what you mean by that question. > > What headers are you referring to, specifically? CeeGeeDev wrote: ------------------------------------------------------- > To be clear, this is for a custom http module we wrote. We want it to > know about request headers that we set in the nginx.conf (for > proxy_pass etc). But it doesn't seem like the custom module receives > the modified request headers. Is there any way to both a) set a > request header for proxy_pass but also b) have custom http module > code living inside the same nginx also process the request that has > these alterations to the set of request headers? So for the benefit of the community: our plan is to implement a custom configuration directive in our http module to allow us to inform ourselves about various header overrides made in the nginx configuration file that should override various request headers in the actual request data structures during request processing in our http module code (in our business logic only... will have no effect on downstream request header values). There seems to be no built-in alternative for nginx custom http module developers (apologies if this question is better suited to the development list), at least none that we can find documented anywhere. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271147,271207#msg-271207 From steve at greengecko.co.nz Tue Nov 29 00:47:02 2016 From: steve at greengecko.co.nz (steve) Date: Tue, 29 Nov 2016 13:47:02 +1300 Subject: AW: SNI and certs. In-Reply-To: References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> Message-ID: <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> On 11/29/2016 09:55 AM, Lukas Tribus wrote: >> It seems that search engines are probing https: even for sites that >> don't offer it > Which is fine. > > > >> just because it's available for others, with the end >> result that pages are being attributed to the wrong site. > Sounds like an assumption. Any real life experience and > evidence backing this? yes > Sounds simply enough to drop the HTTPS request if the > certificate doesn't match the hostname. > > Every standard wget/curl/lynx application drops the TLS session > by default in this case, I don't see why a crawler wouldn't. > > > >> Does anyone have a better solution ( nginx of course! ) > If this is a real problem (which I doubt), I guess you could just > serve a 403 Forbidden from the default hosts. > > > Lukas > Not sure why you're doubting me here Lukas. Yes, this is a problem. No I'm not making it up. My understanding is that SSL is sorted before handing over to manage the content. 
As such, an incorrect or missing cert will fail, and a missing https server block will be handled by the default one ( or the one alphabetically first if not set ). So, as stated above, this is a real problem. Sure, in hindsight it's a configuration shortfall / cockup, but it didn't occur to me that search engines would be attempting to force https. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From mzuzcak at secit.sk Tue Nov 29 08:08:23 2016 From: mzuzcak at secit.sk (=?UTF-8?B?TWF0ZWogWnV6xI3DoWs=?=) Date: Tue, 29 Nov 2016 09:08:23 +0100 Subject: Drupal 7, nginx with ModSecurity - How to resolve that 404 error page please? Message-ID: <59045150-af5f-1d02-9560-07d84a8da026@secit.sk> Hello all, I have installed Drupal 7 on the latest version of the Nginx web server, which was compiled with support for the ModSecurity module, and I have activated the core OWASP rule set. But when I activate ModSecurity in the virtual host config file for my Drupal 7 site, I cannot log in, register or reset a password, and I get this error in the log: [error] 11158#0: *1 open() "/var/www/MY_WEBSITE/node" failed (2: No such file or directory), client: IP, server: MY_SERVER, request: "POST /node?destination=node HTTP/1.1", host: "MY_WEBSITE", referrer: "http://MY_WEBSITE/" The client gets a 404 error page. I applied the practices from https://geekflare.com/modsecurity-owasp-core-rule-set-nginx/ and https://www.netnea.com/cms/2016/11/22/securing-drupal-with-modsecurity-and-the-core-rule-set-crs3/ When I change SecRuleEngine from "On" to "DetectionOnly" the result is the same. For correct operation I have to switch off ModSecurity in the virtual host config for the domain. So please, do you have any advice for solving this problem? Many thanks! -- Best Regards Matej Zuzcak From luky-37 at hotmail.com Tue Nov 29 08:28:41 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 29 Nov 2016 08:28:41 +0000 Subject: AW: AW: SNI and certs. In-Reply-To: <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> , <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> Message-ID: > > Any real life experience and evidence backing this? > yes Care to elaborate? > Not sure why you're doubting me here Lukas. Yes, this is a problem. No > I'm not making it up. We know that crawlers like Googlebot try HTTPS as well, even if there is no https link towards the website. That is well known information and publicly documented. What I don't see is why and how that would be a problem, even when HTTPS is not properly set up for that particular domain. Does it cause warnings in the webmaster tools? Who cares? Does it affect your ranking? I doubt it. Does it index pages or error pages from the default website and assign them to your website? I doubt that even more. > As such, an incorrect or missing cert will fail, and a missing > https server block will be handled by the default one ( or the one > alphabetically first if not set ). So serving a 403 or returning 444 from the default block should be fine. > it didn't occur to me that search engines would be attempting > to force https. Just because they attempt to use HTTPS doesn't mean they fail to handle the case where HTTPS is not properly set up for this particular website. The way to properly deal with this would be to abort the TLS handshake. Haproxy can do this with the strict-sni directive, but nginx does not support that.
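For the default-block approach, something along these lines should do as a stopgap; this is only a sketch, and the certificate paths are placeholders for any self-signed pair (the handshake still completes against that cert, but the crawler gets no content back):

server {
    listen 443 ssl default_server;
    server_name _;

    # placeholder self-signed cert; no real hostname matches here anyway
    ssl_certificate     /etc/nginx/ssl/fallback.crt;
    ssl_certificate_key /etc/nginx/ssl/fallback.key;

    # 444 closes the connection without sending a response
    return 444;
}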
Lukas From francis at daoine.org Tue Nov 29 12:58:01 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Nov 2016 12:58:01 +0000 Subject: Nginx headers to be set at the time of request processing In-Reply-To: References: <20161123202146.GJ2852@daoine.org> Message-ID: <20161129125801.GA2958@daoine.org> On Mon, Nov 28, 2016 at 06:56:29PM -0500, CeeGeeDev wrote: Hi there, Thanks for expanding on what you are doing. I confess that I am still not sure what it is; but that's ok -- I don't need to understand. > So for the benefit of the community: our plan is to implement a custom > configuration directive in our http module to allow us to inform ourselves > about various header overrides made in the nginx configuration file that > should override various request headers in the actual request data > structures during request processing in our http module code (in our > business logic only... will have no effect on downstream request header > values). There seems to be no built-in alternative for nginx custom http > module developers (apologies if this question is better suited to the > development list), at least none that we can find documented anywhere. location /test/ { proxy_set_header a a; fastcgi_param b b; my_directive_a c c; my_directive_b d d; } For a request that is handled in that location, three of those directives could send some "extra" information to the upstream, if a suitable "*_pass" directive were active. No "*_pass" directive is active, so those three directives are effectively unused for this request. What do you expect your module to report for a request handled in this location? That may make it clearer what you are trying to achieve. (If it does not, feel free to ignore this mail.) Thanks, f -- Francis Daly francis at daoine.org From r1ch+nginx at teamliquid.net Tue Nov 29 19:31:04 2016 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 29 Nov 2016 20:31:04 +0100 Subject: AW: SNI and certs. In-Reply-To: References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> Message-ID: There's no "nice" way to handle this in nginx as far as I'm aware. I think the best setup is a default vhost with a generic (server hostname?) certificate, and for any bots or clients that ignore the common name mismatch you can return the 421 Misdirected Request code. https://httpstatuses.com/421 On Tue, Nov 29, 2016 at 9:28 AM, Lukas Tribus wrote: > > > Any real life experience and evidence backing this? > > yes > > Care to elaborate? > > > > > Not sure why you're doubting me here Lukas. Yes, this is a problem. No > > I'm not making it up. > > We know that crawlers like Googlebot try HTTPS as well, even if there is no > https link towards the website. That is well known information and publicly > documented. > > What I don't see is why and how that would be a problem, even when HTTPS > is not properly setup for that particular domain. > > Does it cause warnings in the webmaster tools? Who cares? > Does it affect your ranking? I doubt it. > Does it index pages or error pages from the default website and assign to > your website? I doubt that even more. > > > > > As such, an incorrect or missing cert will fail, and a missing > > https server block will be handled by the default one ( or the one > > alphabetically first if not set ). > > So serving a 403 or returning 444 from the default block should be fine. > > > > > it didn't occur to me that search engines would be attempting > > to force https. 
> > Just because they attempt to use HTTPS doesn't mean the fail to handle > the case where HTTPS is not properly setup for this particular website. > > > > The way to properly deal with this would be to abort the TLS handshake. > Haproxy can do this with the strict-sni directive, but nginx does not > support > that. > > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Nov 29 19:38:27 2016 From: steve at greengecko.co.nz (steve) Date: Wed, 30 Nov 2016 08:38:27 +1300 Subject: AW: AW: SNI and certs. In-Reply-To: References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> Message-ID: <3ee4e299-2004-64dc-8ede-cc648bd2cde9@greengecko.co.nz> On 11/29/2016 09:28 PM, Lukas Tribus wrote: > > What I don't see is why and how that would be a problem, even when HTTPS > is not properly setup for that particular domain. > > Does it cause warnings in the webmaster tools? Who cares? > Does it affect your ranking? I doubt it. > Does it index pages or error pages from the default website and assign to > your website? I doubt that even more. > > > Does it upset my customer? YES. That's all the justification I need. Feel free to disagree, but I really did put up a request for suggestions on how people solve this problem, not to have a philosophical debate on the matter. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From luky-37 at hotmail.com Tue Nov 29 20:17:14 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 29 Nov 2016 20:17:14 +0000 Subject: AW: AW: AW: SNI and certs. In-Reply-To: <3ee4e299-2004-64dc-8ede-cc648bd2cde9@greengecko.co.nz> References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> , <3ee4e299-2004-64dc-8ede-cc648bd2cde9@greengecko.co.nz> Message-ID: > > Does it cause warnings in the webmaster tools? Who cares? > > Does it affect your ranking? I doubt it. > > Does it index pages or error pages from the default website and assign to > > your website? I doubt that even more. > > Does it upset my customer? YES. > > That's all the justification I need. That's fine, then why not just say that? Instead you pretended to know about a huge problem with (a) crawler(s) that would probably have affected every third website. That would have been a huge deal, that everyone wanted to know about, if real. If you come on this mailing list claiming you can remotely crash every nginx instance, most likely people would like to clarify specifics and fix the problem, don't you think? > Feel free to disagree but I really did put up a request for suggestions > on how people solve this problem, not to have a philosophical debate on > the matter. What I wanted to know is if there is a major bug in one of the crawlers, which is more or less what you suggested. Now we know its not, and that's great, because that means SEO is not fucked up for millions of websites out there in a very common configuration. Besides, I did provide suggestions about the only way to handle this in nginx (return specific error codes or certificates from the default server block) and what would be ideal instead (aborting the TLS handshake like haproxy does with strict-sni enabled). 
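To sketch the other variant (the 421 Misdirected Request idea mentioned earlier in the thread), the default block can answer with an explicit status instead of dropping the connection; the certificate paths below are placeholders for whatever generic certificate the box already has for its own hostname:

server {
    listen 443 ssl default_server;
    server_name _;

    # placeholder: the server's own / generic certificate
    ssl_certificate     /etc/nginx/ssl/generic-host.crt;
    ssl_certificate_key /etc/nginx/ssl/generic-host.key;

    return 421;    # Misdirected Request
}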
lukas From steve at greengecko.co.nz Tue Nov 29 21:25:43 2016 From: steve at greengecko.co.nz (steve) Date: Wed, 30 Nov 2016 10:25:43 +1300 Subject: AW: AW: AW: SNI and certs. In-Reply-To: References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> <3ee4e299-2004-64dc-8ede-cc648bd2cde9@greengecko.co.nz> Message-ID: <5f006372-7966-97f7-b4ef-c7052c11b7f6@greengecko.co.nz> On 11/30/2016 09:17 AM, Lukas Tribus wrote: >>> Does it cause warnings in the webmaster tools? Who cares? >>> Does it affect your ranking? I doubt it. >>> Does it index pages or error pages from the default website and assign to >>> your website? I doubt that even more. >> Does it upset my customer? YES. >> >> That's all the justification I need. > That's fine, then why not just say that? Why should I? I clearly defined the problem/misconfiguration. I don't really see the need to justify why I want to fix it. > > Instead you pretended to know about a huge problem with (a) crawler(s) that > would probably have affected every third website. That would have been a huge > deal, that everyone wanted to know about, if real. No. I said this would affect anyone using a mixed http/https setup over SNI. I also said it was something I hadn't thought of, and as such was a cock up in my configuration. > > If you come on this mailing list claiming you can remotely crash every nginx > instance, most likely people would like to clarify specifics and fix the problem, > don't you think? If I did make that claim, I'd describe exactly how I just crashed nginx.org. Of course I'd do that privately... Interestingly, there are many posts on this subject; try googling them. > > > >> Feel free to disagree but I really did put up a request for suggestions >> on how people solve this problem, not to have a philosophical debate on >> the matter. > What I wanted to know is if there is a major bug in one of the crawlers, which > is more or less what you suggested. Now we know its not, and that's great, > because that means SEO is not fucked up for millions of websites out there > in a very common configuration. Well, you told me it doesn't happen... WTF? IT WILL CRAWL THE DEFAULT HTTPS: TARGET IF ALLOWED. I'll leave you to do your own research if you don't believe me. Ass-u-me. > > Besides, I did provide suggestions about the only way to handle this in nginx > (return specific error codes or certificates from the default server block) and > what would be ideal instead (aborting the TLS handshake like haproxy does > with strict-sni enabled). And what cert would you use in this default block that matches, so the crawler receives a meaningful response, rather than an incorrect cert ( which they don't like )? I'm plenty old enough to realise I'll never know everything, and if my knowledge is deficient in this field, please show me where, or point me to where I can research further. > > > lukas > Time to stop feeding the troll I think. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From daniel at linux-nerd.de Tue Nov 29 21:51:08 2016 From: daniel at linux-nerd.de (Daniel) Date: Tue, 29 Nov 2016 22:51:08 +0100 Subject: Rewrite rules Message-ID: <1A48301D-27B2-4423-9F3C-22A05F2B8E2B@linux-nerd.de> Hi there, I'm trying to set up some rewrite rules, and I have two rules which conflict: rewrite ^/(.*?)/(.*?)/(.*)$ /$3; #rewrite ^/$ /a/b permanent; The first rule is needed by our developer; the second rule is for a request.
The goal is to redirect all requests on any domain.com to anydomain.com/a/b. Maybe someone has a hint for me. Cheers Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Nov 29 22:26:24 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 29 Nov 2016 22:26:24 +0000 Subject: AW: AW: AW: AW: SNI and certs. In-Reply-To: <5f006372-7966-97f7-b4ef-c7052c11b7f6@greengecko.co.nz> References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <77e1acff-98d4-df88-b6f1-0274a5fdfd0d@greengecko.co.nz> <3ee4e299-2004-64dc-8ede-cc648bd2cde9@greengecko.co.nz> , <5f006372-7966-97f7-b4ef-c7052c11b7f6@greengecko.co.nz> Message-ID: > Why should I? I clearly defined the problem/misconfiguration. I don't > really see the need to justify why I want to fix it. To help others, myself included, to comprehend a possible problem in similar configurations and learn more about it. After all, this is a community. > Well, you told me it doesn't happen... WTF? I said I doubt it, and asked for more details. Not sure why that would offend you. > And what cert would you use in this default block that matches, so the > crawler receives a meaningful response, rather than an incorrect cert ( > which they don't like )? Obviously there is no correct/matching certificate for this domain name in the first place, otherwise we wouldn't be in this situation. As I said, the best way would be to drop the TLS handshake, but nginx doesn't support this afaik. Lukas From nginx-forum at forum.nginx.org Wed Nov 30 16:45:45 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Wed, 30 Nov 2016 11:45:45 -0500 Subject: Reverse Proxy - Both Servers Returned in Logs? Message-ID: <3257ad3d2b1a691c8e03dd3f43d2c795.NginxMailingListEnglish@forum.nginx.org> We are experiencing an issue where we have Nginx configured as a reverse proxy. SSL terminates with Nginx also. On the back end are two Wildfly servers. If a session is bound to server 2 (via Nginx ip_hash), after 30 minutes the user is redirected back to server 1 and the following is logged in Nginx. We can also recreate this by stopping server1 and forcing all requests to server2, then restarting server1. Sessions are immediately sent back to server1. [30/Nov/2016:11:34:55 -0500] 172.72.208.12 - - - mysite.mydomaincom to: 10.0.0.106:8080, 10.0.0.107:8080: GET /blank.gif HTTP/1.1 upstream_response_time 0.000, 0.007 msec 1480523695.444 request_time 0.007 What would cause both back end servers to be logged? A normal request only shows a single IP for the backend server. Here is the relevant nginx.conf: upstream prodtemp { ip_hash; server 10.0.0.106:8080 max_fails=1 fail_timeout=5s; server 10.0.0.107:8080 max_fails=1 fail_timeout=5s; keepalive 50; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271247,271247#msg-271247 From nginx-forum at forum.nginx.org Wed Nov 30 18:07:08 2016 From: nginx-forum at forum.nginx.org (mevans336) Date: Wed, 30 Nov 2016 13:07:08 -0500 Subject: Reverse Proxy - Both Servers Returned in Logs? In-Reply-To: <3257ad3d2b1a691c8e03dd3f43d2c795.NginxMailingListEnglish@forum.nginx.org> References: <3257ad3d2b1a691c8e03dd3f43d2c795.NginxMailingListEnglish@forum.nginx.org> Message-ID: <35c170761546861d82a6b85743615de9.NginxMailingListEnglish@forum.nginx.org> We've noticed that if we flip the order of the backend servers, the server the user is directed to flips.
upstream prodtemp { ip_hash; server 10.0.0.107:8080 max_fails=1 fail_timeout=5s; server 10.0.0.106:8080 max_fails=1 fail_timeout=5s; keepalive 50; } That results in the user being sent to server2. This is Nginx 1.10.2 FYI. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271247,271249#msg-271249 From nginx-forum at forum.nginx.org Wed Nov 30 18:39:20 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 30 Nov 2016 13:39:20 -0500 Subject: AW: AW: AW: AW: SNI and certs. In-Reply-To: References: Message-ID: <2a9758fa91272f4d3e014eaa9ec634f0.NginxMailingListEnglish@forum.nginx.org> Lukas Tribus Wrote: ------------------------------------------------------- > As I said, the best way would be to drop the TLS handshake, but nginx > doesn't support this afaik. If you mind the overhead, ssl_preread_server_name could be used for this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271204,271250#msg-271250 From nginx at 2xlp.com Wed Nov 30 19:30:23 2016 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Wed, 30 Nov 2016 14:30:23 -0500 Subject: SNI and certs. In-Reply-To: References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> Message-ID: <1BABB4AE-15D5-45C1-B09D-073157DEE1F1@2xlp.com> On Nov 28, 2016, at 4:07 PM, Jeff Dyke wrote: > And you do get a small SEO boost for being https forward. Not necessarily -- some SEO engines are now doing the opposite, and penalizing non-https sites. Google announced plans to start labeling non-https sites as "insecure" in 2017 too. It's incredibly simple (and free) to set up SSL via LetsEncrypt on all domains - so I would do that. On Nov 28, 2016, at 2:37 PM, steve wrote: > It seems that search engines are probing https: even for sites that don't offer it, just because it's available for others, with the end result that pages are being attributed to the wrong site. In terms of your current situation with SEO and attribution -- can the original poster share any examples of the search engines and domains/results? I'd honestly love to see some of what is going on, what you interpreted pretty much never happens. A search engine might probe for data via https; but it won't attribute a resource to a domain/protocol it didn't actually load it from. This alleged Search Engine behavior is something that I've never seen with Google, Bing (or other "standard" engines) and I've managed SEO for a handful of top publishers. From my experience and a lack of evidence, I have no reason to believe this is the actual problem. OTOMH, there are a lot of possible issues that could cause this. The most likely issue is that there is a misconfiguration on nginx and 3 things are happening: 1. there exists a link to the "wrong domain" for the content somewhere on the internet 2. nginx is serving a file on the "wrong domain" 3. the pages do not list a "canonical url" If you have a thoroughly broken nginx installation and are serving the content on a wrong domain, almost every search engine will transfer the resource's link equity to the canonical URL. They're only going to show the data on the wrong domain/scheme if you allowed it to be served on the wrong domain/scheme, and failed to include a canonical. If you are dealing with a broken search engine/spider for random service, there are lots of those, and you want to address it.... 
The problem could be because the client doesn't process SSL or SNI correctly, so you might be able to do: A) single certificate HTTPS on IP#1 + (SNI HTTPS & plain-http on IP#2) B) single certificate HTTPS on IP#1 + SNI HTTPS on IP#2 + Plain HTTP on IP#3 You could also just isolate the given spiders by their browser id, and handle them with custom content or redirects. None of the major search engines work in the manner you suggest though. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arwin at petabi.com Wed Nov 30 20:15:46 2016 From: arwin at petabi.com (Ya-wen Lin) Date: Wed, 30 Nov 2016 12:15:46 -0800 Subject: How to config the path for the dynamic module of nginx to access? Message-ID: Hi, I've read the related documents and tried googling, but the results are all about how to assign the path to the module's implementation C code so that nginx can compile the module. My module will read/write files upon an end-user's request, and my question is how to set the path so that my module will read/write directly in that directory. I found that without extra settings, my module reads/writes in the folder from which I start nginx. For example, if I run sudo nginx where pwd is /Users/me/html/data, then the dynamic module will read/write files from /Users/me/html/data. What would be the best practice to set the path to access data for my module? Thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Wed Nov 30 22:09:13 2016 From: steve at greengecko.co.nz (steve) Date: Thu, 1 Dec 2016 11:09:13 +1300 Subject: SNI and certs. In-Reply-To: <1BABB4AE-15D5-45C1-B09D-073157DEE1F1@2xlp.com> References: <2e6db66a-7241-006a-9d2a-15ae7da003bc@greengecko.co.nz> <1BABB4AE-15D5-45C1-B09D-073157DEE1F1@2xlp.com> Message-ID: <05a0fe3d-325f-6e35-2725-6d6cd5a1c9a2@greengecko.co.nz> On 12/01/2016 08:30 AM, Jonathan Vanasco wrote: > > On Nov 28, 2016, at 4:07 PM, Jeff Dyke wrote: > >> And you do get a small SEO boost for being https forward. > > Not necessarily -- some SEO engines are now doing the opposite, and > penalizing non-https sites. Google announced plans to start labeling > non-https sites as "insecure" in 2017 too. > > It's incredibly simple (and free) to set up SSL via LetsEncrypt on all > domains - so I would do that. The LetsEncrypt concept was corrupted from the start by its use by hackers / malware sites. If you're serious about security then an oldschool $10 cert from Comodo is far better. Sure LE is a solution, but using multiple SSL cert providers is getting a bit complex really. ( plus LE have already been hacked themselves ) > > > On Nov 28, 2016, at 2:37 PM, steve wrote: > >> It seems that search engines are probing https: even for sites that >> don't offer it, just because it's available for others, with the end >> result that pages are being attributed to the wrong site. > > In terms of your current situation with SEO and attribution -- can the > original poster share any examples of the search engines and > domains/results? I'd honestly love to see some of what is going on, > what you interpreted pretty much never happens. A search engine might > probe for data via https; but it won't attribute a resource to a > domain/protocol it didn't actually load it from. This alleged > Search Engine behavior is something that I've never seen with Google, > Bing (or other "standard" engines) and I've managed SEO for a handful > of top publishers.
From my experience and a lack of evidence, I have > no reason to believe this is the actual problem. Well, no, as I've fixed this. However, if you have a probe for site x on https: and it doesn't exist, then the default https site for that IP address will be returned. Depending on configuration, it may still be attributed to the original search domain. I don't understand why people keep trying to shoot me down on this! > > OTOMH, there are a lot of possible issues that could cause this. > > The most likely issue is that there is a misconfiguration on nginx and > 3 things are happening: > 1. there exists a link to the "wrong domain" for the content somewhere > on the internet > 2. nginx is serving a file on the "wrong domain" > 3. the pages do not list a "canonical url" > > If you have a thoroughly broken nginx installation and are serving the > content on a wrong domain, almost every search engine will transfer > the resource's link equity to the canonical URL. They're only going > to show the data on the wrong domain/scheme if you allowed it to be > served on the wrong domain/scheme, and failed to include a canonical. Note: I host these sites, I do not write the sites in question. Addition of canonical headers is beyond my remit, although I suppose nginx could be coerced into adding one. Interestingly, neither of the CMSes I primarily work with ( Magento and WordPress ) seems to add canonical headers either. I must research this further. > > If you are dealing with a broken search engine/spider for random > service, there are lots of those, and you want to address it.... The > problem could be because the client doesn't process SSL or SNI > correctly, so you might be able to do: > > A) single certificate HTTPS on IP#1 + (SNI HTTPS & plain-http on IP#2) > B) single certificate HTTPS on IP#1 + SNI HTTPS on IP#2 + Plain HTTP > on IP#3 > > You could also just isolate the given spiders by their browser id, and > handle them with custom content or redirects. > > None of the major search engines work in the manner you suggest though. The problem was with Google... -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 30 23:31:51 2016 From: nginx-forum at forum.nginx.org (badtzhou) Date: Wed, 30 Nov 2016 18:31:51 -0500 Subject: proxy_ignore_client_abort not working on linux Message-ID: <2851681161d8f01648a225ec9feae946.NginxMailingListEnglish@forum.nginx.org> I can't get proxy_ignore_client_abort to work correctly on linux. The default option is off. But when I proxy a large cacheable file, nginx doesn't close the backend connection right away when the client aborts the request. The backend connection is not closed until the entire file has been buffered and cached. Any idea why? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271255,271255#msg-271255
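For reference, the setup described above can be reproduced with a minimal proxy-cache configuration along these lines; the upstream address, cache path and zone name are placeholders, not values from the original post:

proxy_cache_path /var/cache/nginx/test keys_zone=testcache:10m;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache testcache;
        proxy_cache_valid 200 10m;

        # off is the default: the upstream connection is expected to be
        # closed as soon as the client goes away
        proxy_ignore_client_abort off;
    }
}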