Datasheet
Error occurs during the upload ->
File handle is not returned when upload() is called with the HTML form name:
$handle = upload($form);
FastCGI sent in stderr: "Use of uninitialized value $handle in at /alex/bin/diag/upload line 275. at /lxi/pil/www/bin/diag/upload line 275. main::UploadFile("datasheet", "/lxi/pil/www/html/dataman", "datasheet.pdf") called at /alex/bin/diag/upload line 137" while reading response header from upstream, client: 192.168.2.130, server: localhost, request: "POST /bin/diag/upload HTTP/1.1", upstream: "fastcgi://unix:/tmp/cgi.sock:", host: "192.168.2.145", referrer: "http://192.168.2.145/bin/diag/stage2"
Do I have to add any more changes in the NGINX config so that the form-data file handle can be retrieved by the Perl CGI upload function?
Any help is much appreciated!
Thanks,
Alex.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mdounin at mdounin.ru Mon Apr 1 17:04:10 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Apr 2019 20:04:10 +0300
Subject: Keepalived Connections Reset after reloading the configuration
(HUP Signal)
In-Reply-To: <56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org>
References:
<56774ba55034a4075b78189191a29a62.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190401170410.GT1877@mdounin.ru>
Hello!
On Thu, Mar 28, 2019 at 08:49:48PM -0400, darthhexx wrote:
> Hi,
>
> We are seeing some fallout from this behaviour on keep-alive connections
> when proxying traffic from remote POPs back to an Origin DC that, due to
> latency, brings about a race condition in the socket shutdown sequence. The
> result being the fateful "upstream prematurely closed connection while
> reading response header from upstream" in the Remote POP.
>
> A walk through of what we are seeing:
>
> 1. Config reload happens on the Origin DC.
> 2. Socket shutdowns are sent to all open, but not transacting, keep-alive
> connections.
> 3. Remote POP sends data on a cached connection at around the same time as
> #2, because at this point it has not received the disconnect yet.
> 4. Remote POP then receives the disconnect and errors with "upstream
> prematurely..".
>
> Ideally we should be able to have the Origin honour the
> `worker_shutdown_timeout` (or some other setting) for keep-alive
> connections. That way we would be able to use the `keepalive_timeout`
> setting for upstreams to ensure the upstream's cached connections always
> time out before a worker is shutdown. Would that be possible or is there
> another way to mitigate this scenario?
As per the HTTP RFC, clients are expected to be prepared for such close
events (https://tools.ietf.org/html/rfc2616#section-8.1.4). In
nginx, if an error happens when nginx tries to use a cached
connection, it automatically tries again as long as it is
permitted by "proxy_next_upstream"
(http://nginx.org/r/proxy_next_upstream).
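[Editorial note: the downstream (remote POP) side of such a keep-alive proxy setup might look roughly like the following minimal sketch; the upstream name, origin address, and connection count are illustrative placeholders, not taken from the thread.]

```nginx
upstream backend {
    server origin.example.com:443;  # placeholder origin address
    keepalive 16;                   # keep up to 16 idle connections cached
}

server {
    location / {
        proxy_pass https://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # required for upstream keep-alive
        # Retry on another (or the same) server when a cached
        # connection turns out to have been closed by the origin.
        proxy_next_upstream error timeout;
    }
}
```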
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Tue Apr 2 09:00:19 2019
From: nginx-forum at forum.nginx.org (DieterK)
Date: Tue, 02 Apr 2019 05:00:19 -0400
Subject: map with preserving the URL
Message-ID: <41d1bae4261f22741a81e790e1820502.NginxMailingListEnglish@forum.nginx.org>
Hello,
I'm trying to make the URLs on my site more "friendly" with map, but I don't
know the right way to do it.
My config looks like this:
=========
map $uri_lowercase $new {
include /foobar/rewriterules.map;
}
server {
listen 443 ssl;
[...]
location / {
if ($new) {
rewrite ^ $new redirect;
}
location ~ \.php$ {
fastcgi_pass unix:/var/run/foobar;
include global/php.conf;
}
    }
}
The rewriterules.map looks like this:
/product/foo /product.php?id=100;
/product/bar /product.php?id=200;
This also works so far, but I want to preserve the URL in the address bar.
I already tried it with break instead of redirect; unfortunately, that doesn't
seem to work (error 500).
What's the right way to do this?
Thanks
Dieter
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283588,283588#msg-283588
From nginx-forum at forum.nginx.org Wed Apr 3 08:17:49 2019
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Wed, 03 Apr 2019 04:17:49 -0400
Subject: Caching OPTIONS Response
Message-ID: <5c641c05607fd320febd6f185f880c13.NginxMailingListEnglish@forum.nginx.org>
We are using Nginx to deliver Widevine streaming over the Web.
The website sends an OPTIONS request as a preflight check with every fragment
request for streaming.
Since Nginx by default caches GET and HEAD, we tried adding the OPTIONS method
to the cacheable methods:
proxy_cache_methods GET HEAD OPTIONS;
This gives an "invalid value" error message.
The link below says OPTIONS cannot be cached:
https://forum.nginx.org/read.php?2,253403,253408
This causes every preflight request from the browser to hit the
origin server running Nginx.
Please suggest a way to handle OPTIONS requests.
Regards,
Anish
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283592,283592#msg-283592
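[Editorial note: since proxy_cache cannot store OPTIONS responses, a common workaround is to answer CORS preflights directly in nginx so they never reach the origin; a sketch, with the location path, upstream name, cache zone, and header values as placeholders:]

```nginx
location /stream/ {
    # Answer CORS preflights locally instead of proxying them,
    # since OPTIONS responses cannot be stored by proxy_cache.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin  "$http_origin" always;
        add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Range, Origin, Content-Type" always;
        add_header Access-Control-Max-Age       86400 always;
        return 204;
    }
    proxy_pass  http://origin_upstream;  # placeholder upstream
    proxy_cache fragments;               # placeholder cache zone
}
```

The Access-Control-Max-Age header additionally lets the browser itself cache the preflight result, reducing how many OPTIONS requests it sends at all.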
From hellobhaskar at yahoo.co.in Wed Apr 3 13:37:05 2019
From: hellobhaskar at yahoo.co.in (Ramachandra Bhaskar)
Date: Wed, 3 Apr 2019 13:37:05 +0000 (UTC)
Subject: Secondary auth - guidance and configuration help
References: <126319133.16072614.1554298625482.ref@mail.yahoo.com>
Message-ID: <126319133.16072614.1554298625482@mail.yahoo.com>
Hello
We have a legacy system (say a.b.c.d) which uses HTTP basic authentication
(username/password). Currently we are using the nginx ingress controller to pass
all requests coming to the webserver (say 1.2.3.4) to the legacy system, using the
kubernetes "auth-url" annotation, and if that succeeds we forward to our
application server (say w.s.x.c).
We want to do a few things:
We want a consolidated nginx server (/container) which can do secondary
authentication with the legacy system and also cache successful requests. Is that
possible? We want to reduce the number of hits going to the legacy system for
authentication; that's our end goal.
https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/
https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/
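[Editorial note: the two guides linked above can be combined: auth_request (which requires the ngx_http_auth_request_module) sends a subrequest to the legacy system, and a proxy_cache on that internal location caches successful results. A rough sketch, with all upstream names, addresses, paths, and timings as placeholders:]

```nginx
proxy_cache_path /var/cache/nginx/auth keys_zone=auth_cache:10m;

upstream legacy_auth { server a.b.c.d:80; }   # legacy basic-auth system
upstream app_backend { server w.s.x.c:80; }   # application server

server {
    listen 80;

    location / {
        auth_request /_auth;            # secondary authentication
        proxy_pass http://app_backend;  # forwarded only if auth succeeds
    }

    location = /_auth {
        internal;
        proxy_pass http://legacy_auth/;
        proxy_pass_request_body off;          # auth needs headers only
        proxy_set_header Content-Length "";
        # cache successful auth results per credential
        proxy_cache       auth_cache;
        proxy_cache_key   $http_authorization;
        proxy_cache_valid 200 5m;
    }
}
```

Caching keyed on the Authorization header means each credential pair hits the legacy system at most once per cache period.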
Regards,
Bhaskar
From valery+nginxen at grid.net.ru Wed Apr 3 14:06:44 2019
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Wed, 3 Apr 2019 16:06:44 +0200
Subject: Secondary auth - guidance and configuration help
In-Reply-To: <126319133.16072614.1554298625482@mail.yahoo.com>
References: <126319133.16072614.1554298625482.ref@mail.yahoo.com>
<126319133.16072614.1554298625482@mail.yahoo.com>
Message-ID:
Yes, it's achievable with scripting/separate fixture. Requires shared
storage, like memcached/redis/etc.
On 03-04-19 15:37, Ramachandra Bhaskar via nginx wrote:
> Hello
>
> We are having a legacy system (say a.b.c.d) which uses http basic
> authentication (username/password)
> Currently we are using nginx ingress controller to pass all the requests
> coming to webserver (say 1.2.3.4) using kubernetes "auth-url" annotation
> to the legacy system and if successful we are forwarding to our
> application server.(say w.s.x.c)
>
> We want to do few things
>
> we want a consolidated nginx server(/container) which can use do
> secondary authentication with legacy system and also cache successful
> requests.
> is that possible ? We want to reduce number of hits going to legacy
> system for authentication thats our end goal
--
Valery Kholodkov
Coldrift Technologies B.V.
http://coldrift.com/
Tel.: +31611223927
From hellobhaskar at yahoo.co.in Wed Apr 3 14:08:33 2019
From: hellobhaskar at yahoo.co.in (Ramachandra Bhaskar)
Date: Wed, 3 Apr 2019 14:08:33 +0000 (UTC)
Subject: Secondary auth - guidance and configuration help
In-Reply-To:
References: <126319133.16072614.1554298625482.ref@mail.yahoo.com>
<126319133.16072614.1554298625482@mail.yahoo.com>
Message-ID: <2087361569.321982.1554300513714@mail.yahoo.com>
OK, any rough configuration suggestion using redis? I haven't dirtied my
hands with lua yet.
Regards,
Bhaskar
On Wednesday, 3 April, 2019, 7:37:10 pm IST, Valery Kholodkov wrote:
Yes, it's achievable with scripting/separate fixture. Requires shared
storage, like memcached/redis/etc.
On 03-04-19 15:37, Ramachandra Bhaskar via nginx wrote:
> Hello
>
> We are having a legacy system (say a.b.c.d) which uses http basic
> authentication (username/password)
> Currently we are using nginx ingress controller to pass all the requests
> coming to webserver (say 1.2.3.4) using kubernetes "auth-url" annotation
> to the legacy system and if successful we are forwarding to our
> application server.(say w.s.x.c)
>
> We want to do few things
>
> we want a consolidated nginx server(/container) which can use do
> secondary authentication with legacy system and also cache successful
> requests.
> is that possible ? We want to reduce number of hits going to legacy
> system for authentication thats our end goal
--
Valery Kholodkov
Coldrift Technologies B.V.
http://coldrift.com/
Tel.: +31611223927
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
From valery+nginxen at grid.net.ru Wed Apr 3 14:19:51 2019
From: valery+nginxen at grid.net.ru (Valery Kholodkov)
Date: Wed, 3 Apr 2019 16:19:51 +0200
Subject: Secondary auth - guidance and configuration help
In-Reply-To: <2087361569.321982.1554300513714@mail.yahoo.com>
References: <126319133.16072614.1554298625482.ref@mail.yahoo.com>
<126319133.16072614.1554298625482@mail.yahoo.com>
<2087361569.321982.1554300513714@mail.yahoo.com>
Message-ID:
Totally depends on your setup. Send a pm, we'll think it through!
On 03-04-19 16:08, Ramachandra Bhaskar via nginx wrote:
> ok any rough configuration suggestion using redis?
> I havent dirtied yet into lua.
>
> Regards
> Bhaskar
>
>
> On Wednesday, 3 April, 2019, 7:37:10 pm IST, Valery Kholodkov
> wrote:
>
>
> Yes, it's achievable with scripting/separate fixture. Requires shared
> storage, like memcached/redis/etc.
>
> On 03-04-19 15:37, Ramachandra Bhaskar via nginx wrote:
> > Hello
> >
> > We are having a legacy system (say a.b.c.d) which uses http basic
> > authentication (username/password)
> > Currently we are using nginx ingress controller to pass all the requests
> > coming to webserver (say 1.2.3.4) using kubernetes "auth-url" annotation
> > to the legacy system and if successful we are forwarding to our
> > application server.(say w.s.x.c)
> >
> > We want to do few things
> >
> > we want a consolidated nginx server(/container) which can use do
> > secondary authentication with legacy system and also cache successful
> > requests.
> > is that possible ? We want to reduce number of hits going to legacy
> > system for authentication thats our end goal
--
Valery Kholodkov
Coldrift Technologies B.V.
http://coldrift.com/
Tel.: +31611223927
From nginx-forum at forum.nginx.org Thu Apr 4 18:10:16 2019
From: nginx-forum at forum.nginx.org (krionz)
Date: Thu, 04 Apr 2019 14:10:16 -0400
Subject: Only receiving an empty response with 200 status code when using
CORS on Nginx 1.14
In-Reply-To: <2f1665c9876bd58dfc22346ee8e37a00.NginxMailingListEnglish@forum.nginx.org>
References: <2f1665c9876bd58dfc22346ee8e37a00.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
I have been trying to do this without success since December 20, 2018, and I
still haven't gotten any answer. Am I missing something in my question? How
could I improve it?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282474,283622#msg-283622
From francis at daoine.org Thu Apr 4 23:03:53 2019
From: francis at daoine.org (Francis Daly)
Date: Fri, 5 Apr 2019 00:03:53 +0100
Subject: Nginx FCGIWrap Perl: File Upload not working.
In-Reply-To: <990382769.13578415.1554111246068@mail.yahoo.com>
References: <990382769.13578415.1554111246068.ref@mail.yahoo.com>
<990382769.13578415.1554111246068@mail.yahoo.com>
Message-ID: <20190404230353.fq3h5cmyztzp6n43@daoine.org>
On Mon, Apr 01, 2019 at 09:34:06AM +0000, Alexander Jaiboy via nginx wrote:
Hi there,
> I am trying to host my websites using the NGINX webserver. The backend is
> Perl CGI scripts (I am using fcgiwrap for handling the CGI requests). I
> am finding difficulty with file upload.
> location ~ /bin/diag/upload {
> # Pass altered request body to this location
> upload_pass @fastcgi;
The "upload" module is not part of stock nginx -- it is a third party
module. It does some useful things, including making it easier for you
to write your own CGI-like script to handle file uploads.
But it looks like you are using a "standard" file upload CGI script,
which expects things the "normal" way, rather than the way that the
upload module presents things.
I think that you can probably do without the upload module at all.
So if you remove the whole "location ~ /bin/diag/upload {}" block, and
then change your current "location @fastcgi" to instead be "location
= /bin/diag/upload" (so that it handles only this request directly),
then I think that things might Just Work.
Note: only do this as a test. If you use @fastcgi anywhere else in your
config, this change will break that. In that case, if the test succeeds,
you can change things so that everything still works.
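[Editorial note: as a hedged illustration of that test, the replacement block might look like this; the socket path is taken from the error log earlier in the thread, while the fastcgi parameters are typical fcgiwrap boilerplate, not the poster's actual config:]

```nginx
# Replaces both the "location ~ /bin/diag/upload" block and
# "location @fastcgi": fcgiwrap receives the unmodified request body,
# so the CGI script's upload() sees a normal multipart/form-data POST.
location = /bin/diag/upload {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  unix:/tmp/cgi.sock;  # fcgiwrap socket from the error log
}
```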
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Thu Apr 4 23:17:16 2019
From: francis at daoine.org (Francis Daly)
Date: Fri, 5 Apr 2019 00:17:16 +0100
Subject: Only receiving an empty response with 200 status code when using
CORS on Nginx 1.14
In-Reply-To:
References: <2f1665c9876bd58dfc22346ee8e37a00.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190404231716.yhnp46q4j5rgwews@daoine.org>
On Thu, Apr 04, 2019 at 02:10:16PM -0400, krionz wrote:
Hi there,
> I have been trying to do this without success since December 20, 2018. And i
> still haven't got any answer. Am i missing something on my question? How
> could i improve it ?
Can you show the request you make and the response you get? "curl -v"
is probably good to use.
When I try something similar, just using an "echo" fastcgi script,
I get the expected http response and content for GET, POST, and OPTIONS.
Tested against nginx/1.14.0.
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Fri Apr 5 04:08:51 2019
From: nginx-forum at forum.nginx.org (elkie)
Date: Fri, 05 Apr 2019 00:08:51 -0400
Subject: How fast and resource-consuming is sub_filter?
Message-ID: <0d571947eebf7420389262afefa2698d.NginxMailingListEnglish@forum.nginx.org>
Hello friends!
How bad is the idea of having tens or even hundreds of sub_filter directives
in nginx configuration?
Are the replacements performed by the http_sub_module time- and
resource-consuming operations, or are they light enough not to worry about?
Denis
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283628,283628#msg-283628
From truthhonesty369 at gmail.com Sun Apr 7 11:59:26 2019
From: truthhonesty369 at gmail.com (Kool Kid)
Date: Sun, 7 Apr 2019 07:59:26 -0400
Subject: Mail list
Message-ID:
Truthhonesty369 at gmail.com
From francis at daoine.org Mon Apr 8 21:40:13 2019
From: francis at daoine.org (Francis Daly)
Date: Mon, 8 Apr 2019 22:40:13 +0100
Subject: map with preserving the URL
In-Reply-To: <41d1bae4261f22741a81e790e1820502.NginxMailingListEnglish@forum.nginx.org>
References: <41d1bae4261f22741a81e790e1820502.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190408214013.mhd352ozixpsr5va@daoine.org>
On Tue, Apr 02, 2019 at 05:00:19AM -0400, DieterK wrote:
Hi there,
> I'm trying to make the URLs on my site more "friendly" with map, but I don't
> understand the right way.
> location / {
> if ($new) {
> rewrite ^ $new redirect;
"redirect" there means "send a http redirect to the client, so that
the client will make a fresh request using the new url". Which means
"the browser url bar will change", as you see.
> The rewriterules.map looks like this:
> /product/foo /product.php?id=100;
> /product/bar /product.php?id=200;
> I already tried it with break instead of redirect, unfortunately it doesn't
> seem to work. (error 500)
>
> What's the right way to do this?
In principle, replacing your "redirect" with "last" should Just Work. But
there is a separate issue, that when nginx does an internal rewrite,
only an explicit ? in the rewritten string causes the request to be
parsed as "url with query string".
Merely having a ? in the expanded variable is not enough.
So, one option could be to "proxy_pass http://127.0.0.1$new;" (where the
scheme://host:port refer to this nginx server -- change as necessary)
instead of "rewrite" -- that is sort-of like an "internal" version of
the redirect. But it is probably overkill, if you can use the next
option instead:
Another option could be to change your map so that you have one variable
that is the query string (id=100; id=200; etc) and another variable that
is the url (/product.php); then you can "rewrite ^ $new?$newquery
last;", and it should do what you want.
(If all of your rewrites will go to /product.php, then you can hardcode
that bit and stick with the single variable.)
With this second option, you could also choose to move the "if($new)"
stanza outside of "location /", and have it at server level. I don't
think it is necessary; but it might be more efficient, depending on the
fraction of requests that set the variable.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Tue Apr 9 11:45:14 2019
From: nginx-forum at forum.nginx.org (mchtech)
Date: Tue, 09 Apr 2019 07:45:14 -0400
Subject: nginx log : $upstream_response_time is larger than $request_time
Message-ID:
I use nginx (1.15.3) as a reverse proxy, and encounter a problem where
$upstream_response_time is larger than $request_time in log files.
According to nginx documentation,
$upstream_response_time
keeps time spent on receiving the response from the upstream server; the
time is kept in seconds with millisecond resolution. Times of several
responses are separated by commas and colons like addresses in the
$upstream_addr variable.
$request_time
request processing time in seconds with a milliseconds resolution; time
elapsed between the first bytes were read from the client and the log write
after the last bytes were sent to the client
So, $request_time should include $upstream_response_time.
I analyzed the total count of log records whose response code is
200:
$upstream_response_time < $request_time : 35812
$upstream_response_time = $request_time : 157043
$upstream_response_time > $request_time : 32783 {
$upstream_response_time - $request_time = 0.001 : 32558
$upstream_response_time - $request_time = 0.002 : 225
}
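[Editorial note: for comparing the two variables, logging them side by side is straightforward; a minimal sketch of a custom log format, with the log path as a placeholder:]

```nginx
# Log both timings (millisecond resolution) for later analysis.
log_format timing '$remote_addr "$request" status=$status '
                  'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;
```

Given that the observed excess is only 1-2 ms, rounding at the millisecond granularity of the two timers is one plausible explanation.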
What's the reason?
Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283679,283679#msg-283679
From mdounin at mdounin.ru Tue Apr 9 13:13:59 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 9 Apr 2019 16:13:59 +0300
Subject: nginx-1.15.11
Message-ID: <20190409131358.GP1877@mdounin.ru>
Changes with nginx 1.15.11 09 Apr 2019
*) Bugfix: in the "ssl_stapling_file" directive on Windows.
--
Maxim Dounin
http://nginx.org/
From nginx-forum at forum.nginx.org Tue Apr 9 13:17:47 2019
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 09 Apr 2019 09:17:47 -0400
Subject: https://hg.nginx.org certificate error ?
Message-ID: <4ae1965bf89295cf8f8d0b1e938145ad.NginxMailingListEnglish@forum.nginx.org>
Hi, when I try to clone the njs repo I get the error below:
hg clone https://hg.nginx.org/njs/
abort: hg.nginx.org certificate error: certificate is for *.nginx.com,
nginx.com
(configure hostfingerprint
bd:90:5e:95:b4:51:d8:0b:b0:36:41:6f:99:a7:80:01:4e:cf:ee:c2 or use
--insecure to connect insecurely)
and
echo -n | openssl s_client -connect hg.nginx.org:443
CONNECTED(00000003)
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = *.nginx.com
verify return:1
---
Certificate chain
0 s:/CN=*.nginx.com
i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---
but in the browser the hg.nginx.org ssl cert has SAN entries for
DNS Name=hg.nginx.org
DNS Name=mailman.nginx.com
DNS Name=mailman.nginx.org
DNS Name=trac.nginx.org
dig A hg.nginx.org +short
206.251.255.64
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283686,283686#msg-283686
From mdounin at mdounin.ru Tue Apr 9 13:34:59 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 9 Apr 2019 16:34:59 +0300
Subject: https://hg.nginx.org certificate error ?
In-Reply-To: <4ae1965bf89295cf8f8d0b1e938145ad.NginxMailingListEnglish@forum.nginx.org>
References: <4ae1965bf89295cf8f8d0b1e938145ad.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190409133459.GT1877@mdounin.ru>
Hello!
On Tue, Apr 09, 2019 at 09:17:47AM -0400, George wrote:
> Hi when I try to clone njs repo I am getting the error below
>
> hg clone https://hg.nginx.org/njs/
> abort: hg.nginx.org certificate error: certificate is for *.nginx.com,
> nginx.com
> (configure hostfingerprint
> bd:90:5e:95:b4:51:d8:0b:b0:36:41:6f:99:a7:80:01:4e:cf:ee:c2 or use
> --insecure to connect insecurely)
Looks like you are using an outdated hg without SNI support.
Either upgrade, or use http / --insecure / whatever.
> and
>
> echo -n | openssl s_client -connect hg.nginx.org:443
> CONNECTED(00000003)
> depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
> verify return:1
> depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
> verify return:1
> depth=0 CN = *.nginx.com
> verify return:1
> ---
> Certificate chain
> 0 s:/CN=*.nginx.com
> i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
> 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
> i:/O=Digital Signature Trust Co./CN=DST Root CA X3
> ---
That's fine, try
openssl s_client -connect hg.nginx.org:443 -servername hg.nginx.org
instead.
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Tue Apr 9 14:32:14 2019
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 09 Apr 2019 10:32:14 -0400
Subject: https://hg.nginx.org certificate error ?
In-Reply-To: <20190409133459.GT1877@mdounin.ru>
References: <20190409133459.GT1877@mdounin.ru>
Message-ID: <21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
For that I get:
echo -n | openssl s_client -connect hg.nginx.org:443 -servername
hg.nginx.org
CONNECTED(00000003)
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = mailman.nginx.org
verify return:1
---
Certificate chain
0 s:/CN=mailman.nginx.org
i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---
and it's still a problem for hg clone command
hg clone https://hg.nginx.org/njs/
abort: hg.nginx.org certificate error: certificate is for *.nginx.com,
nginx.com
(configure hostfingerprint
bd:90:5e:95:b4:51:d8:0b:b0:36:41:6f:99:a7:80:01:4e:cf:ee:c2 or use
--insecure to connect insecurely)
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283686,283690#msg-283690
From nginx-forum at forum.nginx.org Tue Apr 9 14:42:18 2019
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 09 Apr 2019 10:42:18 -0400
Subject: https://hg.nginx.org certificate error ?
In-Reply-To: <21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
References: <20190409133459.GT1877@mdounin.ru>
<21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4dc9eee28adb3523620d6204f3a04040.NginxMailingListEnglish@forum.nginx.org>
testssl 3.0rc4 output for
testssl hg.nginx.org:443
Testing server defaults (Server Hello)
TLS extensions (standard) "server name/#0" "renegotiation info/#65281"
"EC point formats/#11" "session ticket/#35" "heartbeat/#15" "next
protocol/#13172" "application layer protocol negotiation/#16"
Session Ticket RFC 5077 hint 14400 seconds, session tickets keys seems to
be rotated < daily
SSL Session ID support yes
Session Resumption Tickets: yes, ID: yes
TLS clock skew Random values, no fingerprinting possible
Signature Algorithm SHA256 with RSA
Server key size RSA 2048 bits
Server key usage Digital Signature, Key Encipherment
Server extended key usage TLS Web Server Authentication, TLS Web Client
Authentication
Serial / Fingerprints 030D311281F9B8198440D9E1F99E6DCBEA36 / SHA1
FCFED1288228D3D056CD63018F453AF21F2520E7
SHA256
237EE7B9E1FD73D9462D1730F6C706E4636CE2D85B2372E4936B61EFE58C0111
Common Name (CN) mailman.nginx.org (CN in response to request
w/o SNI: *.nginx.com)
subjectAltName (SAN) hg.nginx.org mailman.nginx.com
mailman.nginx.org trac.nginx.org
Issuer Let's Encrypt Authority X3 (Let's Encrypt from
US)
Trust (hostname) Ok via SAN (SNI mandatory)
Chain of trust Ok
EV cert (experimental) no
"eTLS" (visibility info) not present
Certificate Validity (UTC) 36 >= 30 days (2019-02-14 15:18 --> 2019-05-15
15:18)
# of certificates provided 2
Certificate Revocation List --
OCSP URI http://ocsp.int-x3.letsencrypt.org
OCSP stapling not offered
OCSP must staple extension --
DNS CAA RR (experimental) not offered
Certificate Transparency yes (certificate extension)
Of note:
Common Name (CN) mailman.nginx.org (CN in response to request w/o SNI:
*.nginx.com)
subjectAltName (SAN) hg.nginx.org mailman.nginx.com mailman.nginx.org
trac.nginx.org
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283686,283691#msg-283691
From nginx-forum at forum.nginx.org Tue Apr 9 14:47:03 2019
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 09 Apr 2019 10:47:03 -0400
Subject: https://hg.nginx.org certificate error ?
In-Reply-To: <4dc9eee28adb3523620d6204f3a04040.NginxMailingListEnglish@forum.nginx.org>
References: <20190409133459.GT1877@mdounin.ru>
<21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
<4dc9eee28adb3523620d6204f3a04040.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Looks like hg clone makes a non-SNI request, so it picked up pubserv.nginx.com's
SSL cert with the *.nginx.com common name. So maybe it would be best to add
*.nginx.org as well to pubserv.nginx.com's SSL cert?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283686,283692#msg-283692
From mdounin at mdounin.ru Tue Apr 9 14:59:56 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 9 Apr 2019 17:59:56 +0300
Subject: https://hg.nginx.org certificate error ?
In-Reply-To: <21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
References: <20190409133459.GT1877@mdounin.ru>
<21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190409145956.GV1877@mdounin.ru>
Hello!
On Tue, Apr 09, 2019 at 10:32:14AM -0400, George wrote:
> for that i get
>
> echo -n | openssl s_client -connect hg.nginx.org:443 -servername
> hg.nginx.org
> CONNECTED(00000003)
> depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
> verify return:1
> depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
> verify return:1
> depth=0 CN = mailman.nginx.org
> verify return:1
> ---
> Certificate chain
> 0 s:/CN=mailman.nginx.org
> i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
> 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
> i:/O=Digital Signature Trust Co./CN=DST Root CA X3
> ---
That's the correct certificate; it has hg.nginx.org in the subjectAltName
extension and will work correctly.
> and it's still a problem for hg clone command
>
> hg clone https://hg.nginx.org/njs/
> abort: hg.nginx.org certificate error: certificate is for *.nginx.com,
> nginx.com
> (configure hostfingerprint
> bd:90:5e:95:b4:51:d8:0b:b0:36:41:6f:99:a7:80:01:4e:cf:ee:c2 or use
> --insecure to connect insecurely)
As previously suggested, it looks like your hg cannot use SNI.
Upgrade your hg or use http/--insecure/whatever. Trying to re-run
the same command without upgrading hg to a recent version won't
help.
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Tue Apr 9 15:02:58 2019
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 09 Apr 2019 11:02:58 -0400
Subject: https://hg.nginx.org certificate error ?
In-Reply-To:
References: <20190409133459.GT1877@mdounin.ru>
<21f919d438af259f90c1fd9c91d8eebc.NginxMailingListEnglish@forum.nginx.org>
<4dc9eee28adb3523620d6204f3a04040.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2d63c543e365504b28f188473626c962.NginxMailingListEnglish@forum.nginx.org>
Okay, part of the problem is that CentOS 7 ships mercurial 2.6.2, and the fix
is to update to mercurial > 2.7.9 for SNI support.
hg --version
Mercurial Distributed SCM (version 2.6.2)
(see http://mercurial.selenic.com for more information)
Copyright (C) 2005-2012 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Use mercurial's own CentOS 7 yum repo:
/etc/yum.repos.d/mercurial.selenic.com.repo
[mercurial.selenic.com]
name=mercurial.selenic.com
baseurl=https://www.mercurial-scm.org/release/centos7
enabled=1
# Temporary until we get a serious signing scheme in place,
# check https://www.mercurial-scm.org/wiki/Download again
gpgcheck=0
yum -y update mercurial
hg --version
Mercurial Distributed SCM (version 4.0-rc)
(see https://mercurial-scm.org for more information)
Copyright (C) 2005-2016 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Now mercurial 4.0 with SNI support works:
hg clone https://hg.nginx.org/njs/
destination directory: njs
requesting all changes
adding changesets
adding manifests
adding file changes
added 874 changesets with 3131 changes to 187 files
updating to branch default
162 files updated, 0 files merged, 0 files removed, 0 files unresolved
But would it still be best to add *.nginx.org alongside the *.nginx.com common
name in pubserv.nginx.com server's SSL cert?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283686,283694#msg-283694
From nginx-forum at forum.nginx.org Tue Apr 9 15:06:13 2019
From: nginx-forum at forum.nginx.org (George)
Date: Tue, 09 Apr 2019 11:06:13 -0400
Subject: https://hg.nginx.org certificate error ?
In-Reply-To: <20190409145956.GV1877@mdounin.ru>
References: <20190409145956.GV1877@mdounin.ru>
Message-ID: <637c04ba28bee5946875cbd3ec4bae83.NginxMailingListEnglish@forum.nginx.org>
Yeah, updated mercurial works:
https://forum.nginx.org/read.php?2,283686,283694#msg-283694 . However, CentOS 7
will still use the non-SNI mercurial 2.6.2 by default, so folks doing hg clone
of the njs repo will keep hitting this issue.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283686,283695#msg-283695
From nginx-forum at forum.nginx.org Wed Apr 10 06:11:14 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Wed, 10 Apr 2019 02:11:14 -0400
Subject: Worker other than the master is listening to the socket
Message-ID: <53502237a5a4a619e32852253d7fc271.NginxMailingListEnglish@forum.nginx.org>
Hi,
I understand that it is the master process that listens on the bound socket,
since that's what I see in the netstat output most of the time:
tcp 0 0 0.0.0.0:28002 0.0.0.0:* LISTEN
12990/nginx: master
But sometimes I find a worker process also listening:
tcp 0 0 0.0.0.0:28886 0.0.0.0:* LISTEN
12987/nginx: worker
So is this expected, or had nginx just run into a bad state?
BR,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283700,283700#msg-283700
From nginx-forum at forum.nginx.org Wed Apr 10 06:21:51 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Wed, 10 Apr 2019 02:21:51 -0400
Subject: Nginx didn't try the next available backend server
Message-ID: <6324968ab3056c6430189f8844f3083b.NginxMailingListEnglish@forum.nginx.org>
Hi,
My Nginx is configured with:
proxy_next_upstream error timeout http_429 http_503;
But I find it won't try the next available upstream server; the
following error is returned:
2019/04/05 20:11:41 [error] 85#85: *4903418 recv() failed (104: Connection
reset by peer) while reading response header from upstream....
The "error" part for the proxy_next_upstream states:
error
an error occurred while establishing a connection with the server, passing a
request to it, or reading the response header;
I understand the error above would fall under "or reading the response
header", but why didn't it work?
BR,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283701,283701#msg-283701
From jfs.world at gmail.com Wed Apr 10 08:13:30 2019
From: jfs.world at gmail.com (Jeffrey 'jf' Lim)
Date: Wed, 10 Apr 2019 16:13:30 +0800
Subject: Nginx didn't try the next available backend server
In-Reply-To: <6324968ab3056c6430189f8844f3083b.NginxMailingListEnglish@forum.nginx.org>
References: <6324968ab3056c6430189f8844f3083b.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
On Wed, Apr 10, 2019 at 2:21 PM allenhe wrote:
> Hi,
>
> My Nginx is configured with:
> proxy_next_upstream error timeout http_429 http_503;
>
> But I find it won't try the next available upstream server with the
> following error returned:
>
> 2019/04/05 20:11:41 [error] 85#85: *4903418 recv() failed (104: Connection
> reset by peer) while reading response header from upstream....
>
>
Do you also see any "upstream server temporarily disabled while reading
response header from upstream" messages above this message for the other
upstreams?
The "error" part for the proxy_next_upstream states:
> error
> an error occurred while establishing a connection with the server, passing
> a
> request to it, or reading the response header;
>
> I understand the error above would fall into the "or reading the response
> header;", but why it didn't work?
>
>
Theoretically this should work, but of course, this will only work if there
are other upstreams to try.
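A minimal sketch of a setup where `proxy_next_upstream` can actually retry (names and addresses here are hypothetical, not from the original post):

```nginx
upstream backend {
    server 10.0.0.1:8080;   # if this one resets the connection...
    server 10.0.0.2:8080;   # ...the request can be retried here
}

server {
    listen 80;
    location / {
        proxy_next_upstream error timeout http_429 http_503;
        proxy_pass http://backend;
    }
}
```

With only a single server in the upstream block (or with all the others already marked down), there is nowhere left to retry, and the error is returned to the client.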
-jf
--
He who settles on the idea of the intelligent man as a static entity only
shows himself to be a fool.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From yunbinliu at outlook.com Wed Apr 10 19:56:26 2019
From: yunbinliu at outlook.com (liu yunbin)
Date: Wed, 10 Apr 2019 19:56:26 +0000
Subject: Fix bug of n in function of ngx_utf8_length
Message-ID:
# HG changeset patch
# User Yunbin Liu yunbinliu at outlook.com
# Date 1554925873 -28800
# Thu Apr 11 03:51:13 2019 +0800
# Node ID 228b945cf5f8c30356fc5760f696e49545075f00
# Parent a6e23e343081b79eb924da985a414909310aa7a3
Fix bug of n in function of ngx_utf8_length
diff -r a6e23e343081 -r 228b945cf5f8 src/core/ngx_string.c
--- a/src/core/ngx_string.c Tue Apr 09 16:00:30 2019 +0300
+++ b/src/core/ngx_string.c Thu Apr 11 03:51:13 2019 +0800
@@ -1369,6 +1369,7 @@
{
u_char c, *last;
size_t len;
+ u_char *current_point;
last = p + n;
@@ -1378,13 +1379,16 @@
if (c < 0x80) {
p++;
+ n--;
continue;
}
+ current_point = p;
if (ngx_utf8_decode(&p, n) > 0x10ffff) {
/* invalid UTF-8 */
return n;
}
+ n -= p - current_point;
}
return len;
From auchecorne at spsies.eu Thu Apr 11 11:07:53 2019
From: auchecorne at spsies.eu (=?UTF-8?Q?Andr=EF=BF=BD=20Auchecorne?=)
Date: Thu, 11 Apr 2019 13:07:53 +0200 (CEST)
Subject: Nginx reverse proxy redirection combinations
Message-ID: <1554980873.18565@spsies.eu>
Hi Every one!
Could you please help me? I'm trying to figure out whether, with an Nginx reverse proxy, it is possible to redirect traffic as in the following cases:
1- IP==>IP
2- IP==>Port
3- IP==>URL
4- IP==>Domain Name
5- IP==>URI
6- Port==>IP
7- Port==>Port
8- Port==>URL
9- Port==>Domain Name
10- Port==>URI
11- URL==>IP
12- URL==>Port
13- URL==>URL
14- URL==>Domain Name
15- URL==>URI
16- Domain Name==>IP
17- Domain Name==>Port
18- Domain Name==>URL
19- Domain Name==>Domain Name
20- Domain Name==>URI
21- URI==>IP
22- URI==>Port
23- URI==>URL
24- URI==>Domain Name
25- URI==>URI
Thank you very much for any assistance!
Kind regards!
André
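In nginx terms, most of these combinations reduce to a `server` block matched by address, port, and/or name, plus a `location` matched by URI, plus a `proxy_pass` target; a hedged sketch with made-up addresses:

```nginx
server {
    listen 192.0.2.10:8080;       # match incoming traffic by IP and/or port
    server_name example.com;      # match by domain name

    location /app/ {              # match by URI prefix
        # the target may be an IP, IP:port, domain name, or include a URI:
        proxy_pass http://203.0.113.5:9000/backend/;
    }
}
```

So "X==>Y" is generally a matter of combining one of the matching mechanisms on the left with one of the `proxy_pass` target forms on the right.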
From nginx-forum at forum.nginx.org Thu Apr 11 12:45:46 2019
From: nginx-forum at forum.nginx.org (guy1976)
Date: Thu, 11 Apr 2019 08:45:46 -0400
Subject: Error code 500 when serving from cache
Message-ID:
Hi
I'm getting random 500 errors when serving from cache; here is an
example:
1554735637.102 10.210.44.205 57635 "GET
kCache/.../seg-101341803-s32-v1-a1.ts HTTP/1.1" 830 "" "Mozilla/5.0 (Windows
NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" 200 897748 0.052 HIT -
https
1554735766.426 10.210.184.156 57413 "GET
kCache/.../seg-101341803-s32-v1-a1.ts HTTP/1.1" 602 "" "Mozilla/5.0 (Windows
NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" 500 0 0.000 HIT - https
Unfortunately nginx error log doesn't show anything, any idea why?
Thanks
Guy.
I'm running nginx/1.8; here is the config:
proxy_cache_path /cache/live levels=1:2 keys_zone=one:20m inactive=1d
max_size=5512M ;
location ~ ^/kCache/(?.*(?(?<=/kCache/).*).*)$ {
proxy_cache_key $cache_key;
proxy_pass http://proxy_backend/$scheme://$url$is_args$args;
}
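For reference, a minimal `proxy_cache` setup along the same lines; the regex in the post appears to have been mangled by the archive (the named captures were stripped), so the names below are hypothetical:

```nginx
proxy_cache_path /cache/live levels=1:2 keys_zone=one:20m inactive=1d
                 max_size=5512M;

server {
    location /kCache/ {
        proxy_cache one;
        proxy_cache_key $scheme$request_uri;   # hypothetical cache key
        proxy_pass http://proxy_backend;
    }
}
```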
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283716,283716#msg-283716
From kworthington at gmail.com Thu Apr 11 14:04:54 2019
From: kworthington at gmail.com (Kevin Worthington)
Date: Thu, 11 Apr 2019 10:04:54 -0400
Subject: nginx-1.15.11
In-Reply-To: <20190409131358.GP1877@mdounin.ru>
References: <20190409131358.GP1877@mdounin.ru>
Message-ID:
Hello Nginx users,
Now available: Nginx 1.15.11 for Windows
https://kevinworthington.com/nginxwin11511 (32-bit and 64-bit versions)
These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available here:
Twitter http://twitter.com/kworthington
Thank you,
Kevin
On Tue, Apr 9, 2019 at 9:14 AM Maxim Dounin wrote:
> Changes with nginx 1.15.11 09 Apr
> 2019
>
> *) Bugfix: in the "ssl_stapling_file" directive on Windows.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From mdounin at mdounin.ru Thu Apr 11 14:41:28 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 11 Apr 2019 17:41:28 +0300
Subject: Error code 500 when serving from cache
In-Reply-To:
References:
Message-ID: <20190411144127.GD1877@mdounin.ru>
Hello!
On Thu, Apr 11, 2019 at 08:45:46AM -0400, guy1976 wrote:
> I'm getting random error code 500 when serving from cache, here is an
> example:
>
> 1554735637.102 10.210.44.205 57635 "GET
> kCache/.../seg-101341803-s32-v1-a1.ts HTTP/1.1" 830 "" "Mozilla/5.0 (Windows
> NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" 200 897748 0.052 HIT -
> https
> 1554735766.426 10.210.184.156 57413 "GET
> kCache/.../seg-101341803-s32-v1-a1.ts HTTP/1.1" 602 "" "Mozilla/5.0 (Windows
> NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" 500 0 0.000 HIT - https
>
> Unfortunately nginx error log doesn't show anything, any idea why?
From the logs provided I would assume that the 500 error is caused
by a memory allocation failure at some early stage when sending a
cached response. These are expected to be logged at the "emerg"
level though, and are likely to appear in the error log.
If you don't see anything in the error log, first of all I would
recommend making sure you have error_log properly configured.
Unfortunately, people often configure nginx to log errors to
/dev/null, or configure a very high logging level. Consider the "info"
level to see everything nginx writes to its logs.
Also, make sure there is no interference between different
error_log's. As nginx allows configuring error_log on a
per-top-level-module, per-server, and even per-location basis, it
is easy to overwrite things at different configuration
levels. Usually it is a good idea to configure error_log at the
global level and comment out everything else.
If this still doesn't work, consider configuring debugging log to
see what happens. See http://nginx.org/en/docs/debugging_log.html
for details.
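The advice above can be sketched as follows (the paths are common defaults, not taken from the original post):

```nginx
# Configure error_log once, at the global (main) level, and comment out
# any per-server / per-location error_log directives while debugging.
error_log /var/log/nginx/error.log info;

# If that still shows nothing, a debug-level log (requires a binary
# built with --with-debug) records everything nginx does:
# error_log /var/log/nginx/debug.log debug;
```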
Also, you may want to upgrade (nginx 1.8.x has not been supported for
several years) and make sure you are not using 3rd-party modules
(many of these can cause arbitrary problems).
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Thu Apr 11 14:56:21 2019
From: nginx-forum at forum.nginx.org (guy1976)
Date: Thu, 11 Apr 2019 10:56:21 -0400
Subject: Error code 500 when serving from cache
In-Reply-To: <20190411144127.GD1877@mdounin.ru>
References: <20190411144127.GD1877@mdounin.ru>
Message-ID:
Hi
We do see error logs for other real failures...
here is the config for error_log
error_log /var/log/nginx/error.log warn;
Is there a known bug like this that could have been fixed in newer versions?
Upgrading is not easy, as this is an old installation our customer deployed
for over 100 sites.
Thanks
Guy
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283716,283722#msg-283722
From pratheepg619 at yahoo.co.in Thu Apr 11 21:24:38 2019
From: pratheepg619 at yahoo.co.in (pratheep g)
Date: Thu, 11 Apr 2019 21:24:38 +0000 (UTC)
Subject: Static files do not load
References: <2010526659.1395796.1555017878873.ref@mail.yahoo.com>
Message-ID: <2010526659.1395796.1555017878873@mail.yahoo.com>
Hi,
I need to rewrite my URL like
http://www.mysite.com/abc    >>>  /abc
                             >>>  /abc/styles/sample.css
                             >>>  /abc/styles/sample2.css
                             >>>  /abc/styles/sample3.css
Here is my rewrite rule:
set $upstream_abc https://internal-abc.demo.com:444;
location ^~ /abc/ {
    proxy_set_header x-forwaded-for $proxy_add_x_forwarded_for;
    rewrite ^/abc/(.*) /$1 break;
    proxy_pass $upstream_abc$uri$is_args$args;
}
The page does load; however, the static CSS files are not loading. Any help on this will be much appreciated.
Thanks,
Pratheep
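A simpler variant worth trying: when `proxy_pass` is given a URI part (here the trailing `/`), nginx itself replaces the matched `/abc/` prefix, so the `rewrite` can be dropped. A sketch using the same backend name from the post:

```nginx
location ^~ /abc/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # trailing slash: /abc/styles/sample.css -> /styles/sample.css upstream
    proxy_pass https://internal-abc.demo.com:444/;
}
```

If the CSS still fails, it is worth checking whether the returned HTML references the stylesheets without the `/abc/` prefix (e.g. `/styles/sample.css`), in which case the browser requests them outside this location.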
From nginx-forum at forum.nginx.org Thu Apr 11 21:51:50 2019
From: nginx-forum at forum.nginx.org (samemiki)
Date: Thu, 11 Apr 2019 17:51:50 -0400
Subject: Running video
Message-ID:
Hello,
I'm not sure if this is the right forum to ask, but hopefully it is. I've
tried searching Google for this issue with no helpful results.
An admin at my work has set up a VM for a webserver running nginx. I got a
sudo account and managed to set up the website we need. We want to start
adding videos, and this is where the trouble starts. After adding the videos to the
right directory, I'm able to access them from a web browser (Firefox, Chrome,
IE), but I have to be connected to the office network (via VPN or by being
physically in the office). When I try to access the videos from outside the
network, the browser seems to just get stuck and never returns anything.
Note that accessing HTML pages and JavaScript from inside and outside of
the network works fine.
The server setting in nginx.conf is:
server {
listen 80;
server_name orgName.org;
root /var/www/html/;
index index.php;
autoindex off;
location / {
if (!-e $request_filename) {
rewrite ^/(.+)$ /index.php?page=$1 last;
}
}
location ~ database.js {
return 403;
}
location ~ \.php(/|$) {
include fastcgi.conf;
fastcgi_pass unix:/run/php/php7-fpm.sock;
}
}
Checking the error.log does not show any error.
Any help would be greatly appreciated. Thanks!
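A couple of hedged things worth trying on the nginx side; the directives below are standard nginx, but whether they apply depends on the actual build and setup:

```nginx
server {
    listen 80;
    root /var/www/html;

    location ~ \.mp4$ {
        mp4;             # ngx_http_mp4_module (requires --with-http_mp4_module)
        sendfile on;     # efficient large-file delivery
        tcp_nopush on;
    }
}
```

That said, "works on the LAN, hangs from outside" can also point at an MTU/path-MTU issue between the office edge and the outside, since large static responses behave differently from small HTML pages.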
Miki
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283731,283731#msg-283731
From nginx-forum at forum.nginx.org Fri Apr 12 01:13:42 2019
From: nginx-forum at forum.nginx.org (hadals)
Date: Thu, 11 Apr 2019 21:13:42 -0400
Subject: gRPC reverse proxy closed tcp connection after 1000 rpc calls
In-Reply-To: <383D3037-42DC-44DC-9626-7AC718E60D4B@nginx.com>
References: <383D3037-42DC-44DC-9626-7AC718E60D4B@nginx.com>
Message-ID:
server {
listen 8080 http2 reuseport;
server_name dev-status-service-svc-protocol-50051;
access_log off;
http2_max_requests 10000000;
http2_max_concurrent_streams 512;
grpc_socket_keepalive on;
location / {
grpc_pass grpc://test-status;
}
}
upstream test-status {
server 172.28.254.165:50051;
server 172.28.254.32:50051;
keepalive 32;
keepalive_requests 1000000;
}
I found that nginx keeps the connection to the client open, but there are
still a lot of connections to the backend: many in ESTAB and TIME-WAIT.
When I use the client to connect directly to the backend service, it
maintains a single connection without disconnecting.
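One knob that may matter here: idle upstream keepalive connections are closed after `keepalive_timeout` (valid inside `upstream` since nginx 1.15.3, default 60s). A hedged sketch extending the block above:

```nginx
upstream test-status {
    server 172.28.254.165:50051;
    server 172.28.254.32:50051;
    keepalive 32;
    keepalive_requests 1000000;
    keepalive_timeout 300s;   # keep idle backend connections around longer
}
```

Note also that `keepalive 32` caps the number of idle connections cached per worker; connections beyond that cap are closed normally, which shows up as TIME-WAIT.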
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283297,283732#msg-283732
From nginx-forum at forum.nginx.org Fri Apr 12 10:01:14 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Fri, 12 Apr 2019 06:01:14 -0400
Subject: nginx has problem sending request body to upstream
Message-ID:
Hi,
Nginx hangs while proxying the request body to the upstream server, with no
error indicating what's happening until the client closes the front-end
connection. Can somebody here give me a clue? The following is the debug
log snippet:
2019/04/12 14:49:38 [debug] 92#92: *405 epoll add connection: fd:29
ev:80002005
2019/04/12 14:49:38 [debug] 92#92: *405 connect to 202.111.0.40:1084, fd:29
#406
2019/04/12 14:49:38 [debug] 92#92: *405 http upstream connect: -2
2019/04/12 14:49:38 [debug] 92#92: *405 posix_memalign: 0000000001CCA480:128
@16
2019/04/12 14:49:38 [debug] 92#92: *405 event timer add: 29:
2000:1555051780835
2019/04/12 14:49:38 [debug] 92#92: *405 http finalize request: -4,
"/inetmanager/v1/configinfo?" a:1, c:2
2019/04/12 14:49:38 [debug] 92#92: *405 http request count:2 blk:0
2019/04/12 14:49:38 [debug] 92#92: *405 http run request:
"/inetmanager/v1/configinfo?"
2019/04/12 14:49:38 [debug] 92#92: *405 http upstream check client, write
event:1, "/inetmanager/v1/configinfo"
2019/04/12 14:49:38 [debug] 92#92: *405 http upstream request:
"/inetmanager/v1/configinfo?"
2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request handler
2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request
2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request body
2019/04/12 14:49:38 [debug] 92#92: *405 chain writer buf fl:1 s:323
2019/04/12 14:49:38 [debug] 92#92: *405 chain writer in: 0000000001CB5498
2019/04/12 14:49:38 [debug] 92#92: *405 writev: 323 of 323
2019/04/12 14:49:38 [debug] 92#92: *405 chain writer out: 0000000000000000
2019/04/12 14:49:38 [debug] 92#92: *405 event timer del: 29: 1555051780835
2019/04/12 14:49:38 [debug] 92#92: *405 event timer add: 29:
600000:1555052378841
2019/04/12 14:50:17 [debug] 92#92: *405 http run request:
"/inetmanager/v1/configinfo?"
2019/04/12 14:50:17 [debug] 92#92: *405 http upstream check client, write
event:0, "/inetmanager/v1/configinfo"
2019/04/12 14:50:17 [info] 92#92: *405 epoll_wait() reported that client
prematurely closed connection, so upstream connection is closed too while
sending request to upstream, client: 202.111.0.51, server: , request: "GET
/inetmanager/v1/configinfo HTTP/1.1", upstream:
"http://202.111.0.40:1084/inetmanager/v1/configinfo", host:
"202.111.0.37:1102"
2019/04/12 14:50:17 [debug] 92#92: *405 finalize http upstream request: 499
Thanks,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283735,283735#msg-283735
From nginx-forum at forum.nginx.org Fri Apr 12 10:10:58 2019
From: nginx-forum at forum.nginx.org (sharvadze)
Date: Fri, 12 Apr 2019 06:10:58 -0400
Subject: Nginx POST requests are slow
Message-ID:
For some reason all the POST requests are delayed for about 1 minute. Here is
my configuration:
/etc/nginx/nginx.conf
sendfile on;
tcp_nopush on;
tcp_nodelay off;
keepalive_timeout 65;
types_hash_max_size 2048;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection "";
/etc/nginx/sites-available/default
client_max_body_size 0;
send_timeout 300;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php?$query_string;
}
/etc/php/7.2/fpm/pool.d/www.conf
pm = ondemand
pm.max_children = 60
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 60
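For completeness, the snippets above don't show the PHP handler location that try_files ultimately hands /index.php to; a sketch of the usual one, where the include and socket path are assumptions matching a typical Debian/Ubuntu layout:

```nginx
location ~ \.php$ {
    include snippets/fastcgi-php.conf;           # assumed Debian/Ubuntu layout
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;  # assumed socket path
    fastcgi_read_timeout 300;
}
```

Note also that with `pm = ondemand`, I believe `pm.start_servers` and the spare-server settings are ignored; only `pm.max_children` and `pm.process_idle_timeout` apply, so slow on-demand forking is one possible source of the delay.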
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283736,283736#msg-283736
From anoopalias01 at gmail.com Fri Apr 12 10:43:11 2019
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Fri, 12 Apr 2019 16:13:11 +0530
Subject: Nginx POST requests are slow
In-Reply-To:
References:
Message-ID:
Most likely this is an issue with your PHP application. Try a simple
PHP script or a ready-made app like WordPress and see if you can recreate the
problem.
On Fri, Apr 12, 2019 at 3:41 PM sharvadze
wrote:
> For some reason all the POST request are delayed for about 1 min. Here is
> my
> configuration:
>
> /etc/nginx/nginx.conf
>
> sendfile on;
> tcp_nopush on;
> tcp_nodelay off;
> keepalive_timeout 65;
> types_hash_max_size 2048;
> proxy_buffering off;
> proxy_http_version 1.1;
> proxy_set_header Connection "";
>
> /etc/nginx/sites-available/default
>
> client_max_body_size 0;
> send_timeout 300;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header Host $http_host;
>
> location / {
> # First attempt to serve request as file, then
> # as directory, then fall back to displaying a 404.
> try_files $uri $uri/ /index.php?$query_string;
> }
>
> /etc/php/7.2/fpm/pool.d/www.conf
>
> pm = ondemand
> pm.max_children = 60
> pm.start_servers = 20
> pm.min_spare_servers = 20
> pm.max_spare_servers = 60
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283736,283736#msg-283736
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
--
*Anoop P Alias*
From arut at nginx.com Fri Apr 12 12:15:13 2019
From: arut at nginx.com (Roman Arutyunyan)
Date: Fri, 12 Apr 2019 15:15:13 +0300
Subject: nginx has problem sending request body to upstream
In-Reply-To:
References:
Message-ID: <20190412121513.GB20152@Romans-MacBook-Air.local>
Hello Allen,
On Fri, Apr 12, 2019 at 06:01:14AM -0400, allenhe wrote:
> Hi,
>
> The Nginx hangs at proxying request body to the upstream server but with no
> error indicating what's happening until the client close the font-end
> connection. can somebody here help give me any clue? following is the debug
> log snippet:
>
> 2019/04/12 14:49:38 [debug] 92#92: *405 epoll add connection: fd:29
> ev:80002005
> 2019/04/12 14:49:38 [debug] 92#92: *405 connect to 202.111.0.40:1084, fd:29
> #406
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream connect: -2
> 2019/04/12 14:49:38 [debug] 92#92: *405 posix_memalign: 0000000001CCA480:128
> @16
> 2019/04/12 14:49:38 [debug] 92#92: *405 event timer add: 29:
> 2000:1555051780835
> 2019/04/12 14:49:38 [debug] 92#92: *405 http finalize request: -4,
> "/inetmanager/v1/configinfo?" a:1, c:2
> 2019/04/12 14:49:38 [debug] 92#92: *405 http request count:2 blk:0
> 2019/04/12 14:49:38 [debug] 92#92: *405 http run request:
> "/inetmanager/v1/configinfo?"
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream check client, write
> event:1, "/inetmanager/v1/configinfo"
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream request:
> "/inetmanager/v1/configinfo?"
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request handler
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request body
> 2019/04/12 14:49:38 [debug] 92#92: *405 chain writer buf fl:1 s:323
> 2019/04/12 14:49:38 [debug] 92#92: *405 chain writer in: 0000000001CB5498
> 2019/04/12 14:49:38 [debug] 92#92: *405 writev: 323 of 323
> 2019/04/12 14:49:38 [debug] 92#92: *405 chain writer out: 0000000000000000
> 2019/04/12 14:49:38 [debug] 92#92: *405 event timer del: 29: 1555051780835
> 2019/04/12 14:49:38 [debug] 92#92: *405 event timer add: 29:
> 600000:1555052378841
> 2019/04/12 14:50:17 [debug] 92#92: *405 http run request:
> "/inetmanager/v1/configinfo?"
> 2019/04/12 14:50:17 [debug] 92#92: *405 http upstream check client, write
> event:0, "/inetmanager/v1/configinfo"
> 2019/04/12 14:50:17 [info] 92#92: *405 epoll_wait() reported that client
> prematurely closed connection, so upstream connection is closed too while
> sending request to upstream, client: 202.111.0.51, server: , request: "GET
> /inetmanager/v1/configinfo HTTP/1.1", upstream:
> "http://202.111.0.40:1084/inetmanager/v1/configinfo", host:
> "202.111.0.37:1102"
> 2019/04/12 14:50:17 [debug] 92#92: *405 finalize http upstream request: 499
According to the log, the request (323 bytes) is sent to the upstream server,
but no response is sent back within the given time period.
--
Roman Arutyunyan
From nginx-forum at forum.nginx.org Fri Apr 12 12:51:05 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Fri, 12 Apr 2019 08:51:05 -0400
Subject: nginx has problem sending request body to upstream
In-Reply-To: <20190412121513.GB20152@Romans-MacBook-Air.local>
References: <20190412121513.GB20152@Romans-MacBook-Air.local>
Message-ID: <565372f3c639afcaa94de066ca6136f2.NginxMailingListEnglish@forum.nginx.org>
But it looks to me like the timer was set for the write, not the read. Also,
isn't the subsequent message saying that nginx was interrupted while sending
the request?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283735,283739#msg-283739
From arut at nginx.com Fri Apr 12 13:59:47 2019
From: arut at nginx.com (Roman Arutyunyan)
Date: Fri, 12 Apr 2019 16:59:47 +0300
Subject: nginx has problem sending request body to upstream
In-Reply-To: <565372f3c639afcaa94de066ca6136f2.NginxMailingListEnglish@forum.nginx.org>
References: <20190412121513.GB20152@Romans-MacBook-Air.local>
<565372f3c639afcaa94de066ca6136f2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190412135947.GC20152@Romans-MacBook-Air.local>
On Fri, Apr 12, 2019 at 08:51:05AM -0400, allenhe wrote:
> but it looks to me the timer was set for the write not the read,
If the request is sent and the response is not yet received, nginx schedules a
timer for proxy_read_timeout.
> also the
> subsequent message isn't telling the nginx was interrupted while sending the
> request?
It is true this message is a bit misleading in this case. Sending the request
was the last thing that nginx did on the upstream connection. If there was any
activity on the read side after that, the message would be different.
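For reference, the timers involved can be set explicitly; a sketch with the documented default values:

```nginx
location / {
    proxy_connect_timeout 60s;  # establishing the upstream connection
    proxy_send_timeout    60s;  # between two successive writes of the request
    proxy_read_timeout    60s;  # between two successive reads of the response
    proxy_pass http://backend;
}
```

In the debug log above, the 600000 ms timer presumably corresponds to a configured proxy_read_timeout of 600s.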
--
Roman Arutyunyan
From mdounin at mdounin.ru Fri Apr 12 16:42:53 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 12 Apr 2019 19:42:53 +0300
Subject: Fix bug of n in function of ngx_utf8_length
In-Reply-To:
References:
Message-ID: <20190412164252.GM1877@mdounin.ru>
Hello!
On Wed, Apr 10, 2019 at 07:56:26PM +0000, liu yunbin wrote:
> # HG changeset patch
> # User Yunbin Liu yunbinliu at outlook.com
> # Date 1554925873 -28800
> # Thu Apr 11 03:51:13 2019 +0800
> # Node ID 228b945cf5f8c30356fc5760f696e49545075f00
> # Parent a6e23e343081b79eb924da985a414909310aa7a3
> Fix bug of n in function of ngx_utf8_length
>
> diff -r a6e23e343081 -r 228b945cf5f8 src/core/ngx_string.c
> --- a/src/core/ngx_string.c Tue Apr 09 16:00:30 2019 +0300
> +++ b/src/core/ngx_string.c Thu Apr 11 03:51:13 2019 +0800
> @@ -1369,6 +1369,7 @@
> {
> u_char c, *last;
> size_t len;
> + u_char *current_point;
>
> last = p + n;
>
> @@ -1378,13 +1379,16 @@
>
> if (c < 0x80) {
> p++;
> + n--;
> continue;
> }
>
> + current_point = p;
> if (ngx_utf8_decode(&p, n) > 0x10ffff) {
> /* invalid UTF-8 */
> return n;
> }
> + n -= p - current_point;
> }
>
> return len;
Thanks for the report, this looks like a valid bug (though never
triggered by the current code). A simpler patch would be
something like this:
# HG changeset patch
# User Maxim Dounin
# Date 1555087201 -10800
# Fri Apr 12 19:40:01 2019 +0300
# Node ID 7c02edae85e317346d5cef2d9d10d6ce23ed432c
# Parent a6e23e343081b79eb924da985a414909310aa7a3
Fixed incorrect length handling in ngx_utf8_length().
Previously, ngx_utf8_decode() was called from ngx_utf8_length() with
incorrect length, potentially resulting in out-of-bounds read when
handling invalid UTF-8 strings.
In practice out-of-bounds reads are not possible though, as autoindex, the
only user of ngx_utf8_length(), provides null-terminated strings, and
ngx_utf8_decode() anyway returns an error when it sees a null in the
middle of a UTF-8 sequence.
Reported by Yunbin Liu.
diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c
--- a/src/core/ngx_string.c
+++ b/src/core/ngx_string.c
@@ -1381,7 +1381,7 @@ ngx_utf8_length(u_char *p, size_t n)
continue;
}
- if (ngx_utf8_decode(&p, n) > 0x10ffff) {
+ if (ngx_utf8_decode(&p, last - p) > 0x10ffff) {
/* invalid UTF-8 */
return n;
}
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Sat Apr 13 01:21:07 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Fri, 12 Apr 2019 21:21:07 -0400
Subject: nginx has problem sending request body to upstream
In-Reply-To: <20190412135947.GC20152@Romans-MacBook-Air.local>
References: <20190412135947.GC20152@Romans-MacBook-Air.local>
Message-ID:
I understand the connection-establish timer, write timer, and read timer
should be set up and removed in order, but where is the write timer? Are
there lines in the logs saying "I'm going to send the bytes", "the sending is
ongoing", and "the bytes have been sent out"?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283735,283758#msg-283758
From satcse88 at gmail.com Sat Apr 13 01:48:42 2019
From: satcse88 at gmail.com (Sathish Kumar)
Date: Sat, 13 Apr 2019 09:48:42 +0800
Subject: Multiple Domain CORS
In-Reply-To:
References:
Message-ID:
Hi Andrey,
Thanks a lot for the solution; it's working great in our production. You saved
my day!
On Fri, Aug 10, 2018, 8:46 PM Andrey Oktyabrskiy wrote:
> On 10.08.2018 15:17, Andrey Oktyabrskiy wrote:
> > ### /etc/nginx/inc/cors_options.inc
> > if ($request_method = 'OPTIONS') {
> > add_header Access-Control-Allow-Credentials true;
> > add_header Access-Control-Allow-Origin $cors_origin;
> > add_header Access-Control-Allow-Methods OPTIONS;
>
> - add_header Access-Control-Allow-Methods OPTIONS;
> + add_header Access-Control-Allow-Methods $cors_method;
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From softwareinfojam at gmail.com Sat Apr 13 02:24:01 2019
From: softwareinfojam at gmail.com (Software Info)
Date: Fri, 12 Apr 2019 21:24:01 -0500
Subject: No subject
Message-ID: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
Hi All
I have implemented GeoIP blocking, which is working just fine. I have the settings you see below.
    map $geoip_country_code $country_access {
        "US"    0;
        default 1;
    }
    server {
        if ($country_access = '1') {
            return 403;
        }
I notice, though, that in the logs the internal IP addresses are not tagged with a country code, so internal subnets are getting blocked. Would the correct solution be to enter the subnets manually, as in the config below? Or is there a better solution? By the way, I did try the config below and it didn't work. I'm trying to keep the geographical blocking but allow some IP ranges. Any ideas on how to do this? Any help would be appreciated.
    map $geoip_country_code $country_access {
        "US"    0;
        192.168.1.0/24  0;
        default 1;
    }
Regards
SI
From lists at lazygranch.com Sat Apr 13 02:57:54 2019
From: lists at lazygranch.com (lists)
Date: Fri, 12 Apr 2019 19:57:54 -0700
Subject: [no subject]
In-Reply-To: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
Message-ID: <35gf7omev4kmr70fsklubh87.1555124274137@lazygranch.com>
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Sat Apr 13 07:16:47 2019
From: nginx-forum at forum.nginx.org (itplayer)
Date: Sat, 13 Apr 2019 03:16:47 -0400
Subject: Client Certificate OCSP validate
Message-ID: <55bbe943484f81976633e8ca2b2134c5.NginxMailingListEnglish@forum.nginx.org>
Hi,
I'm wondering whether NGINX (I use 1.14.1) currently supports client
certificate OCSP validation.
The use case: when a client tries to log in to our web application, with NGINX
sitting in front of the application as a reverse proxy, can NGINX verify the
client cert to make sure it hasn't been revoked by the authority?
If yes, is my configuration below correct?
ssl_stapling on;
resolver 8.8.8.8;
ssl_stapling_responder http://10.10.10.10:2560;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/test/ca_chains.pem;
Thanks in advance.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283763,283763#msg-283763
From gfrankliu at gmail.com Sat Apr 13 07:43:15 2019
From: gfrankliu at gmail.com (Frank Liu)
Date: Sat, 13 Apr 2019 00:43:15 -0700
Subject: Client Certificate OCSP validate
In-Reply-To: <55bbe943484f81976633e8ca2b2134c5.NginxMailingListEnglish@forum.nginx.org>
References: <55bbe943484f81976633e8ca2b2134c5.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
https://trac.nginx.org/nginx/ticket/1534
> On Apr 13, 2019, at 12:16 AM, itplayer wrote:
>
> Hi,
> I'm wondering that if NGINX currently(I use 1.14.1) support client
> certificate OCSP validation?
> The use case is when client try to login our web application, NGINX sit in
> front of the application as reverse-proxy, does NGINX can verify the client
> cert to make sure the client cert doesn't revoked by authority?
>
> If yes, my configuration below is correct?
>
> ssl_stapling on;
> resolver 8.8.8.8;
> ssl_stapling_responder http://10.10.10.10:2560;
> ssl_stapling_verify on;
> ssl_trusted_certificate /etc/nginx/test/ca_chains.pem;
>
>
> Thanks in advanced.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283763,283763#msg-283763
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at forum.nginx.org Sat Apr 13 09:12:10 2019
From: nginx-forum at forum.nginx.org (itplayer)
Date: Sat, 13 Apr 2019 05:12:10 -0400
Subject: Client Certificate OCSP validate
In-Reply-To:
References:
Message-ID: <25d4259c5f3bd0bffb1475d5734b8c10.NginxMailingListEnglish@forum.nginx.org>
Hi Frank,
Yes, I have seen this ticket. So does it mean that NGINX still doesn't support
this feature?
Is there any alternative way to do the same thing?
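The usual alternative for client-certificate revocation checks in nginx is a CRL; a minimal sketch, where the file paths are assumptions:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/test/server.pem;   # assumed path
    ssl_certificate_key /etc/nginx/test/server.key;   # assumed path

    ssl_verify_client on;
    ssl_client_certificate /etc/nginx/test/ca_chains.pem;
    ssl_crl /etc/nginx/test/ca.crl;  # revocation list, refreshed out of band
}
```

Note that the `ssl_stapling*` directives in the config quoted above relate to OCSP stapling of the *server* certificate, not to validating client certificates.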
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283763,283765#msg-283765
From nginx-forum at forum.nginx.org Sat Apr 13 09:16:29 2019
From: nginx-forum at forum.nginx.org (itplayer)
Date: Sat, 13 Apr 2019 05:16:29 -0400
Subject: OCSP stapling for client certificates
In-Reply-To: <20150705234326.GA1656@mdounin.ru>
References: <20150705234326.GA1656@mdounin.ru>
Message-ID:
Other than a CRL, is there any alternative way we can do OCSP validation in the
pipeline?
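For context, the CRL approach mentioned above would be configured roughly like this in stock nginx (file paths here are made up, and the CRL file has to be refreshed out of band, e.g. via cron plus a reload, since nginx does not fetch it automatically):

```nginx
server {
    listen 443 ssl;

    # Require and verify client certificates against the trusted CA chain.
    ssl_verify_client      on;
    ssl_client_certificate /etc/nginx/test/ca_chains.pem;

    # PEM-encoded CRL published by the CA; a client presenting a
    # revoked certificate is rejected during the TLS handshake.
    ssl_crl /etc/nginx/test/ca.crl;
}
```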
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,252893,283766#msg-283766
From bee.lists at gmail.com Sat Apr 13 12:09:49 2019
From: bee.lists at gmail.com (Bee.Lists)
Date: Sat, 13 Apr 2019 08:09:49 -0400
Subject: [no subject]
In-Reply-To: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
References: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
Message-ID: <2F2D6613-B051-4A1D-A385-649320B6071E@gmail.com>
>
> On Apr 12, 2019, at 10:24 PM, Software Info wrote:
>
> Any ideas on how to do this? Any help would be appreciated.
How about a subject?
From softwareinfojam at gmail.com Sat Apr 13 12:53:45 2019
From: softwareinfojam at gmail.com (Software Info)
Date: Sat, 13 Apr 2019 07:53:45 -0500
Subject: [no subject]
In-Reply-To: <35gf7omev4kmr70fsklubh87.1555124274137@lazygranch.com>
References: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
<35gf7omev4kmr70fsklubh87.1555124274137@lazygranch.com>
Message-ID: <5cb1dbd9.1c69fb81.24982.1ed2@mx.google.com>
Oops, I just noticed I don't have a Subject. Sorry about that. The firewall that we use is really cumbersome when it comes to geo IP blocking, in my opinion, so I decided to do it in nginx. I forgot to mention too that when I add the IP address that I don't want blocked, I still get the 403. So I can't seem to find a way to allow the 192.168.1.0/24 network while keeping geo blocking.
    map $geoip_country_code $country_access {
        "US"             0;
        "192.168.1.0/24" 0;
        default          1;
    }
Sent from Mail for Windows 10
From: lists
Sent: Friday, April 12, 2019 9:58 PM
To: Nginx
Subject: Re: [no subject]
Perhaps a dumb question, but if all you are going to do is return a 403, why not just do this filtering in the firewall by blocking the offending IP space? Yeah, I know a server should always have some response, but it isn't like you would be the first person to just block entire countries. (I don't do this on 80/443, but I do block most email ports outside the US.)
The only reason I mention this is that nginx blocking is more CPU intensive than the firewall. On a small VPS, you might notice the difference in loading.
From: softwareinfojam at gmail.com
Sent: April 12, 2019 7:24 PM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject:
Hi All
I have implemented geo IP blocking, which is working just fine. I have the
settings you see below.

    map $geoip_country_code $country_access {
        "US"    0;
        default 1;
    }

    server {
        if ($country_access = '1') {
            return 403;
        }

I notice though that in the logs, the internal IP addresses are not tagged
with a country code, so internal subnets are getting blocked. Would the
correct solution be to enter the subnets manually, as in the config below?
Or is there a better solution? Oh, by the way, I did try the config below
and it didn't work. I'm trying to keep the geographical blocking but allow
some IP ranges. Any ideas on how to do this? Any help would be appreciated.

    map $geoip_country_code $country_access {
        "US"             0;
        "192.168.1.0/24" 0;
        default          1;
    }

Regards
SI
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From peter_booth at me.com Sat Apr 13 16:36:27 2019
From: peter_booth at me.com (Peter Booth)
Date: Sat, 13 Apr 2019 12:36:27 -0400
Subject: [no subject]
In-Reply-To: <35gf7omev4kmr70fsklubh87.1555124274137@lazygranch.com>
References: <35gf7omev4kmr70fsklubh87.1555124274137@lazygranch.com>
Message-ID: <87E4EFDE-EFC2-40FA-8F24-229AACE1AAD4@me.com>
I don't think it's a dumb question at all. It's a very astute question.
My experience of protecting a high-traffic retail website from a foreign state-sponsored DDoS was that doing IP blocking on a hardware load balancer in front of the nginx tier was the difference between the site being available and the site being down on an unusually busy day. Having both nginx and the load balancer working in concert saved millions of dollars of revenue in one busy day. The load balancer (well, it was the WAF module in an F5 BigIP) was doing what could equally have been done in a firewall. With F5's acquisition of nginx we might see innovative ways of combining the best hardware and software ADC solutions to build rock-solid websites.
Anything you can do to protect your backend helps your website stay alive, whether it's browser caching, a CDN, a firewall, or a hardware load balancer in front of nginx. Then, if nginx has intelligent caching rules, you can build a site that sustains enormous bursts of traffic and stays up. Nginx is like a Swiss Army knife of HTTP that can do many different things, but that doesn't mean it's right to expect it to do everything.
Peter
Sent from my iPhone
> On Apr 12, 2019, at 10:57 PM, lists wrote:
>
> Perhaps a dumb question, but if all you are going to do is return a 403, why not just do this filtering in the firewall by blocking the offending IP space. Yeah I know a server should always have some response, but it isn't like you would be the first person to just block entire countries. (I don't do this on 80/443, but I do block most email ports outside the US.)
>
> The only reason I mention this is that nginx blocking is more CPU intensive than the firewall. On a small VPS, you might notice the difference in loading.
>
>
> From: softwareinfojam at gmail.com
> Sent: April 12, 2019 7:24 PM
> To: nginx at nginx.org
> Reply-to: nginx at nginx.org
> Subject:
>
> Hi All
> I have implemented GEO IP blocking which is working just fine. I have the settings you see below.
>
> map $geoip_country_code $country_access {
> "US" 0;
> default 1;
> }
>
> server {
> if ($country_access = '1') {
> return 403;
> }
>
> I notice though that in the logs, the internal IP Addresses are not tagged with a country code so internal subnets are getting blocked. Would the correct solution be to enter the subnets manually such as this config below? Or is there a better solution? Oh by the way, I did try this below and it didn't work. Trying to keep the Geographical blocking but allow some IP ranges. Any ideas on how to do this? Any help would be appreciated.
>
> map $geoip_country_code $country_access {
> "US" 0;
> "192.168.1.0/24" 0;
> default 1;
> }
>
>
> Regards
> SI
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lists at lazygranch.com Sat Apr 13 18:31:37 2019
From: lists at lazygranch.com (lists)
Date: Sat, 13 Apr 2019 11:31:37 -0700
Subject: [no subject]
In-Reply-To: <87E4EFDE-EFC2-40FA-8F24-229AACE1AAD4@me.com>
Message-ID: <2ihpeutbj7si3dogcvqne9s9.1555180297438@lazygranch.com>
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Sun Apr 14 15:36:29 2019
From: nginx-forum at forum.nginx.org (krionz)
Date: Sun, 14 Apr 2019 11:36:29 -0400
Subject: Only receiving an empty response with 200 status code when using
CORS on Nginx 1.14
In-Reply-To: <20190404231716.yhnp46q4j5rgwews@daoine.org>
References: <20190404231716.yhnp46q4j5rgwews@daoine.org>
Message-ID: <16dd7be58785fbc0f26a20c586830b57.NginxMailingListEnglish@forum.nginx.org>
The problem actually was with my understanding of the CORS policy. When I
make a PATCH request, the browser first makes an OPTIONS preflight to check
whether the PATCH method is allowed. Even if I answer that OPTIONS request
saying that I accept the PATCH method, the response to the actual PATCH
request must again say that I allow it; otherwise the browser will not let
the page use the answer given by my server.
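In nginx terms, a rough sketch of that would be something like the following (the origin, path, and upstream name are invented; this is my reading of the requirement, not a verified config):

```nginx
location /api/ {
    # Answer the OPTIONS preflight directly.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin  "https://app.example.com" always;
        add_header Access-Control-Allow-Methods "GET, POST, PATCH, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Content-Type" always;
        return 204;
    }

    # The response to the actual PATCH must repeat Allow-Origin,
    # or the browser discards it even though the preflight passed.
    add_header Access-Control-Allow-Origin "https://app.example.com" always;
    proxy_pass http://backend;
}
```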
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282474,283775#msg-283775
From nginx-forum at forum.nginx.org Sun Apr 14 15:37:03 2019
From: nginx-forum at forum.nginx.org (krionz)
Date: Sun, 14 Apr 2019 11:37:03 -0400
Subject: Only receiving an empty response with 200 status code when using
CORS on Nginx 1.14
In-Reply-To: <16dd7be58785fbc0f26a20c586830b57.NginxMailingListEnglish@forum.nginx.org>
References: <20190404231716.yhnp46q4j5rgwews@daoine.org>
<16dd7be58785fbc0f26a20c586830b57.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <43ed129d52b9ec7fae7ff579c47cec5b.NginxMailingListEnglish@forum.nginx.org>
I'm sorry to make you waste your time on a non-nginx problem.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282474,283776#msg-283776
From ayma at us.ibm.com Mon Apr 15 22:14:02 2019
From: ayma at us.ibm.com (Andrea Y Ma)
Date: Mon, 15 Apr 2019 22:14:02 +0000
Subject: websocket connection seems to cause nginx to not reload
Message-ID:
An HTML attachment was scrubbed...
URL:
From crenatovb at gmail.com Mon Apr 15 22:22:03 2019
From: crenatovb at gmail.com (Carlos Renato)
Date: Mon, 15 Apr 2019 19:22:03 -0300
Subject: SecRequestBodyAccess Modsecurity
Message-ID:
Hello everyone, can anyone help me?
I recently compiled NGINX with ModSecurity and noticed that if I keep
the SecRequestBodyAccess parameter set to "On", I have problems with the
POST method.
If I keep the parameter Off, access works normally.
Does anyone run NGINX with ModSecurity and the SecRequestBodyAccess
parameter set to "On"?
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From felipe at zimmerle.org Mon Apr 15 23:22:05 2019
From: felipe at zimmerle.org (Felipe Zimmerle)
Date: Mon, 15 Apr 2019 20:22:05 -0300
Subject: SecRequestBodyAccess Modsecurity
In-Reply-To:
References:
Message-ID:
Hi Carlos
There is a known problem in ModSecurity 2.x which leads to behaviour
somewhat similar to what you have reported.
If you are running 2.x, I would recommend moving forward to 3.0.3:
- https://github.com/SpiderLabs/ModSecurity
- https://github.com/SpiderLabs/ModSecurity-nginx
If that is not the case, please, tell us more about your setup.
Br.,
Z.
On Mon, Apr 15, 2019 at 7:22 PM Carlos Renato wrote:
> Hello everyone, can anyone help me?
>
> I recently compiled NGINX with Modsecurity and noticed that if I keep "On", the SecRequestBodyAccess parameter, I have problems with the POST method.
>
> If I keep the parameter as Off, access normally occurs.
>
> Does anyone have NGINX with Modsecurity and the SecRequestBodyAccess parameter as "On"?
>
>
> --
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
--
Br.,
Felipe Zimmerle
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From me at nanaya.pro Tue Apr 16 03:14:42 2019
From: me at nanaya.pro (nanaya)
Date: Mon, 15 Apr 2019 23:14:42 -0400
Subject: websocket connection seems to cause nginx to not reload
In-Reply-To:
References:
Message-ID: <299cf7ff-2ba5-4aea-8e04-5c4554cf7fc0@www.fastmail.com>
Hi,
On Tue, Apr 16, 2019, at 07:14, Andrea Y Ma wrote:
> Seeing nginx not picking up changes to its configuration. Running
> `nginx -t` does not reveal any errors and no errors generated when
> running `nginx -s reload`. However nginx access logs show that the old
> IPs from the previous configuration are still being served. Upon closer
> inspection seeing that nginx defunct process are occurring each time a
> `nginx -s reload` is run.
> Also seeing that it seems related to a websocket application as when
> that websocket application is stopped, then the nginx -s reload
> completes successfully.
> Andrea Ma
>
As websocket connections are persistent, the old worker processes will stay up until the last connection disconnects. The same applies to normal HTTP (for example, when someone is downloading/uploading a large file), but it's much more visible with websockets.
(or at least that's how I understand it)
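If it helps, since 1.11.11 nginx also has a directive to bound how long those old workers linger after a reload (the value below is arbitrary):

```nginx
# main context of nginx.conf
# After a reload, give draining workers at most 30s; when the timer
# expires, nginx closes their remaining connections (including
# websockets) so the old workers can exit.
worker_shutdown_timeout 30s;
```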
From crenatovb at gmail.com Tue Apr 16 09:35:59 2019
From: crenatovb at gmail.com (Carlos Renato)
Date: Tue, 16 Apr 2019 06:35:59 -0300
Subject: SecRequestBodyAccess Modsecurity
In-Reply-To:
References:
Message-ID:
Thank you, my friend. I downloaded it from the URL
(https://github.com/SpiderLabs/ModSecurity.git) using the nginx_refactoring branch:
git clone -b nginx_refactoring https://github.com/SpiderLabs/ModSecurity.git
On Mon, Apr 15, 2019 at 20:22, Felipe Zimmerle
wrote:
> Hi Carlos
>
> There is a known problem on ModSercurity 2.x which leads to a problem
> somewhat similar to what you have reported.
>
> If you are running 2.x I would recommend you to move forward to 3.0.3:
>
> - https://github.com/SpiderLabs/ModSecurity
> - https://github.com/SpiderLabs/ModSecurity-nginx
>
> If that is not the case, please, tell us more about your setup.
>
> Br.,
> Z.
>
>
>
>
> On Mon, Apr 15, 2019 at 7:22 PM Carlos Renato wrote:
>
>> Hello everyone, can anyone help me?
>>
>> I recently compiled NGINX with Modsecurity and noticed that if I keep "On", the SecRequestBodyAccess parameter, I have problems with the POST method.
>>
>> If I keep the parameter as Off, access normally occurs.
>>
>> Does anyone have NGINX with Modsecurity and the SecRequestBodyAccess parameter as "On"?
>>
>>
>> --
>>
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
>
> --
> Br.,
> Felipe Zimmerle
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From AbsonChihota at goldmedialab.com Tue Apr 16 13:10:18 2019
From: AbsonChihota at goldmedialab.com (Abson Chihota)
Date: Tue, 16 Apr 2019 13:10:18 +0000
Subject: Upstream Backend Servers Questions
Message-ID: <6AA287967247E54AB8F1693939F6E341035A627DB4@srvmail2>
Good day,
I wish to find out how many upstream servers nginx can load-balance across.
My last question is about the possibility of running these upstream backend servers on dynamic IP addresses.
Regards;
Abson
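With open-source nginx, upstream host names are resolved once when the configuration is loaded, so a common workaround for backends on dynamic addresses is a variable plus a resolver, which forces re-resolution at request time (the DNS server address and host name below are invented; nginx Plus additionally offers a `resolve` parameter on upstream `server` entries):

```nginx
resolver 10.0.0.2 valid=30s;   # hypothetical internal DNS server

server {
    listen 80;

    location / {
        # Referencing the host name through a variable makes nginx
        # re-resolve it at request time (honouring "valid="), instead
        # of once at configuration load.
        set $backend "app.internal.example";
        proxy_pass http://$backend:8080;
    }
}
```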
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mdounin at mdounin.ru Tue Apr 16 15:10:04 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 16 Apr 2019 18:10:04 +0300
Subject: nginx-1.15.12
Message-ID: <20190416151004.GT1877@mdounin.ru>
Changes with nginx 1.15.12 16 Apr 2019
*) Bugfix: a segmentation fault might occur in a worker process if
variables were used in the "ssl_certificate" or "ssl_certificate_key"
directives and OCSP stapling was enabled.
--
Maxim Dounin
http://nginx.org/
From xeioex at nginx.com Tue Apr 16 15:50:45 2019
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 16 Apr 2019 18:50:45 +0300
Subject: njs-0.3.1
Message-ID: <4107745a-7883-f6e8-4d04-217feb98919e@nginx.com>
Hello,
I'm glad to announce a new release of the NGINX JavaScript module (njs).
This release continues to extend coverage of the ECMAScript
specification and module functionality.
- Added ES6 arrow functions support:
: > var materials = ['Hydrogen', 'Helium', 'Lithium']
: undefined
: > materials.map(material => material.length)
: [
: 8,
: 6,
: 7
: ]
: r.subrequest('/foo', rep => r.return(rep.status, rep.responseBody))
You can learn more about njs:
- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs
Feel free to try it and give us feedback on:
- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel
Changes with njs 0.3.1 16 Apr 2019
Core:
*) Feature: added arrow functions support.
Thanks to Hong Zhi Dao and Artem S. Povalyukhin.
*) Feature: added Object.getOwnPropertyNames().
Thanks to Artem S. Povalyukhin.
*) Feature: added Object.getOwnPropertyDescriptors().
Thanks to Artem S. Povalyukhin.
*) Feature: making __proto__ accessor descriptor of Object instances
mutable.
*) Feature: added shebang support in CLI.
*) Feature: added support for module mode execution in CLI. In module
mode global this is unavailable.
*) Bugfix: fixed editline detection.
*) Bugfix: fixed Function.prototype.bind().
Thanks to Hong Zhi Dao.
*) Bugfix: fixed checking of duplication of parameters for functions.
Thanks to Hong Zhi Dao.
*) Bugfix: fixed function declaration with the same name as a variable.
Thanks to Hong Zhi Dao.
*) Improvement: code related to parsing of objects, variables and
functions is refactored.
Thanks to Hong Zhi Dao.
*) Improvement: console.log() improved for outputting large values.
*) Improvement: console.log() improved for outputting strings in a
compliant way (without escaping and quotes).
*) Improvement: using ES6 version of ToInt32(), ToUint32(), ToLength().
From kworthington at gmail.com Tue Apr 16 15:54:32 2019
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 16 Apr 2019 11:54:32 -0400
Subject: [nginx-announce] nginx-1.15.12
In-Reply-To: <20190416151011.GU1877@mdounin.ru>
References: <20190416151011.GU1877@mdounin.ru>
Message-ID:
Hello Nginx users,
Now available: Nginx 1.15.12 for Windows
https://kevinworthington.com/nginxwin11512 (32-bit and 64-bit versions)
These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available here:
Twitter http://twitter.com/kworthington
Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/
On Tue, Apr 16, 2019 at 11:10 AM Maxim Dounin wrote:
> Changes with nginx 1.15.12 16 Apr
> 2019
>
> *) Bugfix: a segmentation fault might occur in a worker process if
> variables were used in the "ssl_certificate" or
> "ssl_certificate_key"
> directives and OCSP stapling was enabled.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Wed Apr 17 11:17:02 2019
From: nginx-forum at forum.nginx.org (sumanthjoel)
Date: Wed, 17 Apr 2019 07:17:02 -0400
Subject: Authrequest with nginx-ingress annotations
Message-ID: <1b0253588bcee20c1f0ea6d753049deb.NginxMailingListEnglish@forum.nginx.org>
Hi,
I am using nginx as the ingress controller in my cluster. My auth-request
setup looks like below:
CLIENT ---> nginx ---> auth_server ----(200)---> original request
auth_server ---- (401)---> login page
I am using nginx-ingress annotations to achieve this.
But, I have a special requirement:
When proxying the request to the authentication server, nginx has to add a
custom header towards the auth_server, i.e., I must be able to read the
header inside the auth-server service.
OR
When authentication inside the auth_server fails, the auth_server will add a
custom header to the response back to nginx-ingress. I want to read this
header and set its value as a parameter for the request towards the
login page.
Is this possible to achieve using nginx-ingress annotations?
Thanks.
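For what it's worth, the plain-nginx shape of both requirements would be roughly the following (the header name X-Auth-Reason, the /auth and /login paths, and the upstream names are all invented; whether the standard ingress annotations can generate exactly this, I can't say):

```nginx
location /protected/ {
    auth_request     /auth;
    # Copy a header from the auth subrequest's response into a variable;
    # it is available even when the subrequest returned 401.
    auth_request_set $auth_reason $upstream_http_x_auth_reason;
    error_page 401 = @login;
    proxy_pass http://backend;
}

location = /auth {
    internal;
    proxy_pass              http://auth_server;
    # Custom header towards the auth server.
    proxy_set_header        X-Original-URI $request_uri;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
}

location @login {
    # Use the captured header as a parameter on the login redirect.
    return 302 /login?reason=$auth_reason;
}
```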
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283824,283824#msg-283824
From francis at daoine.org Wed Apr 17 12:07:37 2019
From: francis at daoine.org (Francis Daly)
Date: Wed, 17 Apr 2019 13:07:37 +0100
Subject: How fast and resource-consuming is sub_filter?
In-Reply-To: <0d571947eebf7420389262afefa2698d.NginxMailingListEnglish@forum.nginx.org>
References: <0d571947eebf7420389262afefa2698d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190417120737.oczbvhrkkuuxrsfx@daoine.org>
On Fri, Apr 05, 2019 at 12:08:51AM -0400, elkie wrote:
Hi there,
> How bad is the idea of having tens or even hundreds of sub_filter directives
> in nginx configuration?
I suspect that the answer is "it depends".
It sounds to me like it should be inefficient -- but the question is not
"is it bad", but "is it better than another way to achieve the desired
end result".
> Are the replacements by http_sub_module module time- and resource-consuming
> operations, or they are light enough not to care about it?
If you do not measure it being too slow on your system, then it is not
too slow for your use case.
There is a particular result you want to achieve. One way is to use
sub_filter. Maybe another way is to pass content through an external
system which will make the same changes. Maybe another way is to make
a copy of the source content with the changes already made, and serve
the copy directly.
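For concreteness, the sub_filter route looks roughly like this (the patterns and upstream name are invented):

```nginx
location / {
    proxy_pass http://backend;

    # Each sub_filter directive adds another string to scan for in
    # every response body; the cost scales with response size and the
    # number of patterns.
    sub_filter 'old-host.example' 'new-host.example';
    sub_filter 'Old Brand'        'New Brand';

    sub_filter_once  off;                 # replace all occurrences
    sub_filter_types text/html text/css;  # limit which MIME types are filtered
}
```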
Each possible way will have an initial cost (in writing and testing),
and an ongoing cost (in processing and serving the output).
Maybe a low initial cost (of "adding a bunch of directives to nginx.conf")
is good for you, and the extra processing load of nginx running those
directives each time, is not important to you. In that case, using
sub_filter is probably the right answer for you.
But: I think that only you can measure the load on your hardware with
your input/transform/output content.
If you measure and don't see a problem, there is not a problem that you
looked for.
If you don't measure, then there is not a problem that you care about.
Good luck with it!
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed Apr 17 12:13:24 2019
From: francis at daoine.org (Francis Daly)
Date: Wed, 17 Apr 2019 13:13:24 +0100
Subject: Nginx reverse proxy redirection combinations
In-Reply-To: <1554980873.18565@spsies.eu>
References: <1554980873.18565@spsies.eu>
Message-ID: <20190417121324.ran7jd5bbwxuke5l@daoine.org>
On Thu, Apr 11, 2019 at 01:07:53PM +0200, André Auchecorne wrote:
Hi there,
> Could you please help me? I'm trying to figure out if with Nginx Reverse Proxy it is possible to redirect trafic as in the following cases?
>
> 1- IP==>IP
...
For each of these, I do not understand what you are trying to achieve.
Could you rephrase your question, perhaps giving an example of the
intended input and the desired output?
In general: for http, nginx acts as web server (e.g. issuing http
redirects) or as a reverse proxy (e.g., presenting the content from one
web server as if it came from this web server); while for tcp or udp
("stream"), nginx acts as a proxy (packet forwarder, with rewritten
source/destination).
That summary is incorrect, but hopefully close enough to allow you to
gauge what might be doable.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Wed Apr 17 12:38:53 2019
From: francis at daoine.org (Francis Daly)
Date: Wed, 17 Apr 2019 13:38:53 +0100
Subject: your mail (GEO IP blocking)
In-Reply-To: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
References: <5cb14841.1c69fb81.24a1.01d6@mx.google.com>
Message-ID: <20190417123853.dmvzzv3ffhir2f7s@daoine.org>
On Fri, Apr 12, 2019 at 09:24:01PM -0500, Software Info wrote:
Hi there,
> I notice though that in the logs, the internal IP Addresses are not
> tagged with a country code so internal subnets are getting blocked. Would
> the correct solution be to enter the subnets manually such as this config
> below? Or is there a better solution?
You use something to set $geoip_country_code, which compares the source
IP address with its database of locations.
You want to allow certain $geoip_country_code values, and also to allow
certain IP addresses.
One possibility:
* can you see the $geoip_country_code that is set for the addresses you
want to allow (probably blank)?
* are you happy to allow every address that gets that same value?
If so, use
map $geoip_country_code $country_access {
    "US"    0;
    ""      0;
    default 1;
}
Another possibility:
* change the database that your tool uses, so that the addresses you
care about (192.168.1.0/24, but not 192.168.2.0/24, for example) set
$geoip_country_code to a value such as "LAN" or something else that is
not otherwise used.
* Then - same as above, but allow "LAN" instead of "".
And another way could be to make your own variable, based on a combination
of the things that you care about. Conceptually (but this does not work),
you want
# Do not use this
geo $my_country {
192.168.1.0/24 "LAN";
default $geoip_country_code;
}
and then use $my_country to check validity. In practice instead, you
would want something like (untested by me!):
geo $lan_ip {
192.168.1.0/24 "LAN";
default "";
}
map $geoip_country_code$lan_ip $country_access {
    "US"    0;
    "LAN"   0;
    default 1;
}
which does assume that anything that has $lan_ip set, will have
$geoip_country_code blank (or will get the default value). I think that
for your case of private rfc1918 addresses, this is ok. It is not a
general solution. (It could be adapted to become one, if necessary.)
Do be aware that, depending on your config, the thing that sets
$geoip_country_code and the thing that sets $lan_ip may not be reading
from the same value. So you'll probably want to make sure that they do,
for consistency.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From andre at rodier.me Thu Apr 18 09:55:24 2019
From: andre at rodier.me (André Rodier)
Date: Thu, 18 Apr 2019 10:55:24 +0100
Subject: nginx as an IMAP proxy
Message-ID: <91b0c083d6ecf3f2c2b6175a36208a75@rodier.me>
Hello,
I have a question for nginx masters.
I am using Debian Stretch, with Dovecot as a mail server, and two
webmails, Roundcube and SOGo. Roundcube is configured to pass the ID
command with the original IP address 'ID ( "x-originating-ip"
xx.xx.xx.xx )', so Dovecot logs the remote IP correctly, and not
127.0.0.1. So, I know my set-up is working fine.
However, there is a bug in SOGo, because the command is sent only after
authentication, and Dovecot logs the remote IP as 127.0.0.1.
Is there any way, by using nginx as an IMAP proxy, to inject the remote
IP address, from an environment variable, passed by the nginx instance
that serves the web frontend?
                     rip=12.34.56.78           ID ("x-originating-ip" 12.34.56.78)
[nginx web frontend] => [SOGo backend] => [nginx IMAP proxy] => [Dovecot IMAP server]
I have raised the bug on the SOGo web site, but I think it will take a long
time to fix, and I prefer to avoid recompiling the SOGo source code.
Thanks for your help and your ideas,
André
From arut at nginx.com Thu Apr 18 15:59:56 2019
From: arut at nginx.com (Roman Arutyunyan)
Date: Thu, 18 Apr 2019 18:59:56 +0300
Subject: nginx has problem sending request body to upstream
In-Reply-To:
References: <20190412135947.GC20152@Romans-MacBook-Air.local>
Message-ID: <20190418155956.GL1271@Romans-MacBook-Air.local>
On Fri, Apr 12, 2019 at 09:21:07PM -0400, allenhe wrote:
> I understand the connection establish timer, write timer and read timer
> should be setup and removed in order, but where is the write timer?
It's not that simple. Writing and reading are done in parallel.
In fact, if nginx receives the response even before sending out the full request,
it still counts as successful proxying.
> are
> there lines in the logs saying I'm going to send the bytes, the sending is
> on-going, and the bytes has been sent out?
I'll give a short explanation of what's going on:
> 2019/04/12 14:49:38 [debug] 92#92: *405 epoll add connection: fd:29 ev:80002005
> 2019/04/12 14:49:38 [debug] 92#92: *405 connect to 202.111.0.40:1084, fd:29 #406
Calling connect() on the upstream socket fd=29
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream connect: -2
The connect() call returned EINPROGRESS (converted to NGX_AGAIN=-2).
Now we wait until the socket is connected; this will be reported on the write event.
> 2019/04/12 14:49:38 [debug] 92#92: *405 posix_memalign: 0000000001CCA480:128 @16
> 2019/04/12 14:49:38 [debug] 92#92: *405 event timer add: 29: 2000:1555051780835
Schedule a timer for proxy_connect_timeout to wait until the upstream socket is
connected. This is very much like the write timeout, and it is scheduled on the
write event.
> 2019/04/12 14:49:38 [debug] 92#92: *405 http finalize request: -4, "/inetmanager/v1/configinfo?" a:1, c:2
> 2019/04/12 14:49:38 [debug] 92#92: *405 http request count:2 blk:0
> 2019/04/12 14:49:38 [debug] 92#92: *405 http run request: "/inetmanager/v1/configinfo?"
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream check client, write event:1, "/inetmanager/v1/configinfo"
This call is irrelevant to the main topic.
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream request: "/inetmanager/v1/configinfo?"
Upstream socket write event. The socket is now connected. Code below tries to
send the request.
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request handler
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request
> 2019/04/12 14:49:38 [debug] 92#92: *405 http upstream send request body
> 2019/04/12 14:49:38 [debug] 92#92: *405 chain writer buf fl:1 s:323
> 2019/04/12 14:49:38 [debug] 92#92: *405 chain writer in: 0000000001CB5498
> 2019/04/12 14:49:38 [debug] 92#92: *405 writev: 323 of 323
> 2019/04/12 14:49:38 [debug] 92#92: *405 chain writer out: 0000000000000000
All 323 bytes of the request are sent. No surprise we could send 323 bytes
with a single writev().
> 2019/04/12 14:49:38 [debug] 92#92: *405 event timer del: 29: 1555051780835
Unschedule the write timer, we have nothing to write now.
> 2019/04/12 14:49:38 [debug] 92#92: *405 event timer add: 29: 600000:1555052378841
We still expect to read something from upstream. Now scheduling
proxy_read_timeout for this on the read event. Read and write can run in
parallel in nginx, but in this particular case write was fast.
> 2019/04/12 14:50:17 [debug] 92#92: *405 http run request: "/inetmanager/v1/configinfo?"
> 2019/04/12 14:50:17 [debug] 92#92: *405 http upstream check client, write event:0, "/inetmanager/v1/configinfo"
> 2019/04/12 14:50:17 [info] 92#92: *405 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 202.111.0.51, server: , request: "GET /inetmanager/v1/configinfo HTTP/1.1", upstream: "http://202.111.0.40:1084/inetmanager/v1/configinfo", host: "202.111.0.37:1102"
> 2019/04/12 14:50:17 [debug] 92#92: *405 finalize http upstream request: 499
39 seconds later client disconnected. No events from upstream for this period.
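For reference, the two timers visible in the log above correspond to these proxy directives; the values are inferred from the log (2000 ms and 600000 ms), and the location path mirrors the request:

```nginx
location /inetmanager/ {
    proxy_pass            http://202.111.0.40:1084;
    proxy_connect_timeout 2s;    # the 2000 ms timer scheduled at connect()
    proxy_read_timeout    600s;  # the 600000 ms timer scheduled after the request was sent
}
```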
--
Roman Arutyunyan
From nginx-forum at forum.nginx.org Fri Apr 19 00:44:19 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Thu, 18 Apr 2019 20:44:19 -0400
Subject: nginx has problem sending request body to upstream
In-Reply-To: <20190418155956.GL1271@Romans-MacBook-Air.local>
References: <20190418155956.GL1271@Romans-MacBook-Air.local>
Message-ID:
I see. So in this case the request was completely sent in a single write
without blocking, so there was no need to schedule a write timer; otherwise
it would be necessary.
Thanks for the explanations!
By the way, have you ever seen a worker process (rather than the master)
listening on the socket?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283735,283855#msg-283855
From nginx-forum at forum.nginx.org Fri Apr 19 07:16:14 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Fri, 19 Apr 2019 03:16:14 -0400
Subject: Worker other than the master is listening to the socket
In-Reply-To: <53502237a5a4a619e32852253d7fc271.NginxMailingListEnglish@forum.nginx.org>
References: <53502237a5a4a619e32852253d7fc271.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4aa120637d882a0071e8e7d9c0b879d5.NginxMailingListEnglish@forum.nginx.org>
Nginx version: 1.13.6.1
1) In our use case, nginx is reloaded constantly. You will see lots of
worker processes hanging at "nginx: worker process is shutting down" after a
couple of days:
58 root 0:00 nginx: master process ./openresty/nginx/sbin/nginx
-p /opt/applicatio
1029 nobody 0:22 nginx: worker process is shutting down
1030 nobody 0:27 nginx: worker process is shutting down
1041 nobody 0:54 nginx: worker process is shutting down
1054 nobody 0:37 nginx: worker process is shutting down
1131 nobody 0:02 nginx: worker process is shutting down
1132 nobody 0:02 nginx: worker process is shutting down
1215 nobody 0:50 nginx: worker process is shutting down
1216 nobody 0:53 nginx: worker process is shutting down
1515 nobody 0:23 nginx: worker process is shutting down
1516 nobody 0:47 nginx: worker process is shutting down
1533 nobody 0:03 nginx: worker process is shutting down
1534 nobody 0:16 nginx: worker process is shutting down
1598 nobody 0:00 nginx: worker process
1599 nobody 0:00 nginx: worker process
2) And if you now check some listen port on the host using netstat, you will
see it is owned by a worker process:
[root at paas-controller-177-1-1-137:~]$ netstat -anp |grep 10080
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN
6288/nginx: worker
tcp 0 0 10.47.205.136:10080 10.47.205.137:50827
ESTABLISHED 6296/nginx: worker
tcp 0 0 10.47.205.136:10080 10.47.205.137:50833
ESTABLISHED 6300/nginx: worker
tcp 0 0 10.47.205.136:10080 10.47.205.137:49411
ESTABLISHED 6296/nginx: worker
tcp 0 0 10.47.205.136:10080 10.40.157.154:54074
ESTABLISHED 6296/nginx: worker
tcp 0 0 10.47.205.136:10080 10.47.205.137:49715
ESTABLISHED 6299/nginx: worker
tcp6 0 0 :::10080 :::* LISTEN
6288/nginx: worker
3) So far you might say this is not a big deal as long as the worker can
serve requests correctly, but it does NOT. Suppose the listen port 10080
above initially proxied requests to upstream server A (and at that point the
port was being listened on by the master process). Days later it was changed
to server B, and of course nginx was reloaded at the same time (and perhaps
some worker was left in the "is shutting down" state at that time). Now I
see this port is being listened on by a worker process, and it proxies
requests to the old server A.
I suspected this could be caused by the "shutting down" worker processes, so
I ran "kill -9" on all of them and tried again, but nothing changed. Even
reloading nginx did not help. I shut it down and reran the binary; finally
this worked and requests could be proxied to server B.
I guess some cache in nginx is misbehaving - any clue?
Thanks,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283700,283856#msg-283856
From nginx-forum at forum.nginx.org Fri Apr 19 09:24:14 2019
From: nginx-forum at forum.nginx.org (allenhe)
Date: Fri, 19 Apr 2019 05:24:14 -0400
Subject: what does "deny 0.0.0.1;" do
Message-ID: <3868a393f7d5e77f33674b0ac1bfb8d9.NginxMailingListEnglish@forum.nginx.org>
Hi,
I found this is valid, and want to know what scenario it's used for.
deny 0.0.0.1;
Thanks,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283857,283857#msg-283857
From xserverlinux at gmail.com Fri Apr 19 22:37:47 2019
From: xserverlinux at gmail.com (Rick Gutierrez)
Date: Fri, 19 Apr 2019 16:37:47 -0600
Subject: Optimize SO for nginx
Message-ID:
Hi, I am looking for recommendations to optimize a server with 4 GB RAM and
4 cores as a reverse proxy, with SSL enabled, Brotli, and CentOS 7 as the
OS. By optimizing I mean modifying sysctl.conf.
I need to support 12k connections across a couple of applications with PHP
and ASP.NET.
Any advice from experience?
--
rickygm
http://gnuforever.homelinux.com
From peter_booth at me.com Sat Apr 20 06:05:39 2019
From: peter_booth at me.com (Peter Booth)
Date: Sat, 20 Apr 2019 02:05:39 -0400
Subject: Optimize SO for nginx
In-Reply-To:
References:
Message-ID: <20694262-5A15-4EDB-B90C-7494572FABD2@me.com>
Where is your upstream? Where is your PHP executing? Do you have a CDN?
There are three parts to this:
1 fix the bad OS defaults:
If you are using RHEL 6 this would mean:
Enabling tuned
Disabling THP
Increasing vm.min_free_kbytes
Reducing swappiness
2 generic web server specific configuration
Increasing the number of ephemeral ports
Adjusting ulimit
3 tuning specific to your workload.
UDP and TCP buffer sizes - should match the BDP
NIC tuning - IRQ coalescing, 10G-specific tuning; see the CDN, Mellanox, Red Hat, and HP suggestions for low-latency tuning.
That's a start.
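[Editor's note: the OS-level items above can be sketched as sysctl settings. The values below are illustrative starting points only, not tuned recommendations; they are not from the original post.]

```conf
# /etc/sysctl.conf - illustrative starting points, tune for your workload
vm.swappiness = 10                          # reduce swappiness
vm.min_free_kbytes = 131072                 # increase reserved free memory
net.ipv4.ip_local_port_range = 1024 65535   # widen the ephemeral port range
net.core.somaxconn = 4096                   # larger accept backlog
net.ipv4.tcp_rmem = 4096 87380 6291456      # TCP read buffers (match BDP)
net.ipv4.tcp_wmem = 4096 65536 6291456      # TCP write buffers (match BDP)
```

Apply with `sysctl -p` and verify under load before committing to any value.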
Peter
Sent from my iPhone
> On Apr 19, 2019, at 6:37 PM, Rick Gutierrez wrote:
>
> Hi, some recommendation to optimize a server of 4gb ram and 4 core for reverse proxy, with ssl activated, brotli and centos 7 SO, when I speak of optimizing is to modify the sysctl.conf
>
> I need to support 12k in a couple of applications with php and asp.net
>
> some experience advice?
> --
> rickygm
>
> http://gnuforever.homelinux.com
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at forum.nginx.org Sun Apr 21 07:58:19 2019
From: nginx-forum at forum.nginx.org (saggir)
Date: Sun, 21 Apr 2019 03:58:19 -0400
Subject: Ingress gets into CrashLoopBackOff state
Message-ID:
Hi
We are trying to create ~4000 namespaces with ingress rules, and we cannot
get above ~2000 namespaces.
Our problematic use case is:
1. We create k8s namespace with 1 pod (3 containers) and 1 ingress (3 rules
per ingress).
2. As part of the creation of the namespace we run a few https queries and
sync some data to the containers (GET & POST)
3. After creating ~2050 namespaces, the NGINX ingress controller gets into a
CrashLoopBackOff state and cannot restart anymore.
We use the NGINX ingress controller with image 0.24.1 in an AWS environment
(mapped to an NLB); we run 4 replica pods, each with 4 CPUs and 12 Gi of memory.
We tried a few configuration changes but it seems we cannot find the root
cause of the limitation.
In the logs of the nginx controller the problem starts in the dynamic
configuration, with reason "no memory":
"[error] 50#50: *165 [lua] configuration.lua:182: call():
dynamic-configuration: error updating configuration: no memory
controller.go:220] Dynamic reconfiguration failed: unexpected error code:
400
controller.go:224] Unexpected failure reconfiguring NGINX"
It seems like we are missing some key configuration limit that causes k8s to
kill the pod?
Appreciate your assistance!
Saggi Raiter.
Saggi.raiter at sap.com
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283864,283864#msg-283864
From nginx-forum at forum.nginx.org Mon Apr 22 09:21:49 2019
From: nginx-forum at forum.nginx.org (cocofuc88)
Date: Mon, 22 Apr 2019 05:21:49 -0400
Subject: Reverse Proxy with multiple IP address
Message-ID: <40c81c2240ba214934d2410f09d303ab.NginxMailingListEnglish@forum.nginx.org>
Currently in my nginx.conf (development) I am using the interface IP of the
server, which is 192.168.1.2 (the server has only one interface), and
forwarding to 172.16.98.10; this works fine. Now I want to listen on another
IP, for example 192.168.1.3, but still forward to the same server
172.16.98.10. I tried changing the IP in nginx.conf from 192.168.1.2 to
192.168.1.3, but when I restart the service it gives me an error like the
following:
[99] cannot assign requested address
Does that problem appear because the 192.168.1.3 IP is not assigned to any
interface on the server?
I need your advice about this.
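[Editor's note: errno 99 (EADDRNOTAVAIL) usually means exactly that - the address is not configured on any local interface. A sketch of the two usual fixes follows; the interface name eth0 is a hypothetical example, not from the original post.]

```shell
# Option 1: add the address to an interface (hypothetical interface name)
ip addr add 192.168.1.3/24 dev eth0

# Option 2: allow binding to an address that is not (yet) configured locally
sysctl -w net.ipv4.ip_nonlocal_bind=1
```

Option 1 is the conventional fix; option 2 is typically used in failover setups where the address migrates between hosts.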
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283865,283865#msg-283865
From mdounin at mdounin.ru Mon Apr 22 14:30:52 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 22 Apr 2019 17:30:52 +0300
Subject: Worker other than the master is listening to the socket
In-Reply-To: <4aa120637d882a0071e8e7d9c0b879d5.NginxMailingListEnglish@forum.nginx.org>
References: <53502237a5a4a619e32852253d7fc271.NginxMailingListEnglish@forum.nginx.org>
<4aa120637d882a0071e8e7d9c0b879d5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190422143052.GE1877@mdounin.ru>
Hello!
On Fri, Apr 19, 2019 at 03:16:14AM -0400, allenhe wrote:
> Nginx version: 1.13.6.1
>
> 1) In our use case, the Nginx is reloaded constantly. you will see lots
> worker process hanging at "nginx: worker process is shutting down" after
> couple days:
>
> 58 root 0:00 nginx: master process ./openresty/nginx/sbin/nginx
> -p /opt/applicatio
> 1029 nobody 0:22 nginx: worker process is shutting down
> 1030 nobody 0:27 nginx: worker process is shutting down
> 1041 nobody 0:54 nginx: worker process is shutting down
> 1054 nobody 0:37 nginx: worker process is shutting down
> 1131 nobody 0:02 nginx: worker process is shutting down
> 1132 nobody 0:02 nginx: worker process is shutting down
> 1215 nobody 0:50 nginx: worker process is shutting down
> 1216 nobody 0:53 nginx: worker process is shutting down
> 1515 nobody 0:23 nginx: worker process is shutting down
> 1516 nobody 0:47 nginx: worker process is shutting down
> 1533 nobody 0:03 nginx: worker process is shutting down
> 1534 nobody 0:16 nginx: worker process is shutting down
> 1598 nobody 0:00 nginx: worker process
> 1599 nobody 0:00 nginx: worker process
>
>
> 2) And if you now check some listen port on the host using netstat, you will
> see it is owned by the worker process:
>
> [root at paas-controller-177-1-1-137:~]$ netstat -anp |grep 10080
> tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN
> 6288/nginx: worker
> tcp 0 0 10.47.205.136:10080 10.47.205.137:50827
> ESTABLISHED 6296/nginx: worker
> tcp 0 0 10.47.205.136:10080 10.47.205.137:50833
> ESTABLISHED 6300/nginx: worker
> tcp 0 0 10.47.205.136:10080 10.47.205.137:49411
> ESTABLISHED 6296/nginx: worker
> tcp 0 0 10.47.205.136:10080 10.40.157.154:54074
> ESTABLISHED 6296/nginx: worker
> tcp 0 0 10.47.205.136:10080 10.47.205.137:49715
> ESTABLISHED 6299/nginx: worker
> tcp6 0 0 :::10080 :::* LISTEN
> 6288/nginx: worker
The "netstat -p" command prints only one process - the one with the
lowest PID - which has the socket open. But in fact listening
sockets are open in the nginx master process and all worker processes.
After a number of reloads a worker process may get a PID
lower than the PID of the master, and "netstat -p" will start to
report this worker as the process the socket belongs to. This is
not something to be afraid of; it is just a result of "netstat -p"'s
limited interface.
To see all processes which have the socket open, consider using
"ss -nltp" instead.
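[Editor's note: an illustrative session showing the difference. The PIDs are taken from the netstat output earlier in the thread, and the exact column layout of ss varies between versions.]

```shell
# netstat attributes the socket to a single PID:
netstat -anp | grep 'LISTEN.*:10080'
# ss lists every process sharing the listening socket:
ss -nltp 'sport = :10080'
# LISTEN 0 511 0.0.0.0:10080 0.0.0.0:*
#   users:(("nginx",pid=6300,fd=6),("nginx",pid=6296,fd=6),
#          ("nginx",pid=6288,fd=6),...)
```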
> 3) So far, you would say it is not a big deal as long as the worker could
> serve the request correctly. but it does NOT. suppose the listen port 10080
> above would proxy the request to the upstream server A initially (and for
> sure this port was listening by the master process), days later, it has been
> changed to the server B and of course the Nginx was been reloaded at the
> same time (and perhaps some worker was left in " is shutting down" state at
> that time). well now, I see this port is listening by a worker process, and
> it would proxy the request to the old server A.
This is not something that can happen, since old worker
processes, which are in the "shutting down" state, no longer
accept new connections. Such processes close all listening
sockets as soon as they are asked to exit.
--
Maxim Dounin
http://mdounin.ru/
From xserverlinux at gmail.com Mon Apr 22 21:47:24 2019
From: xserverlinux at gmail.com (Rick Gutierrez)
Date: Mon, 22 Apr 2019 15:47:24 -0600
Subject: Optimize SO for nginx
In-Reply-To: <20694262-5A15-4EDB-B90C-7494572FABD2@me.com>
References:
<20694262-5A15-4EDB-B90C-7494572FABD2@me.com>
Message-ID:
On Sat, 20 Apr 2019 at 00:05, Peter Booth via nginx
() wrote:
>
> Where is your upstream? Where is your PHP executing? Do you have a CDN?
Hi Peter, both my upstream and PHP application are in the same datacenter,
in different virtual machines on KVM with a 1 Gb network. I do not have a
CDN.
>
> There are three parts to this:
> 1 fix the bad OS defaults:
> If you are using RHEL 6 this would mean:
> Enabling tuned
> Disabling THP
> Increasing vm.min_free_kbytes
> Reducing swappiness
OK, here I understand; the difference is that I'm using CentOS 7.
> 2 generic web server specific configuration
I do not understand - do you need the specification of each server? Each
server runs on a virtual machine with KVM, 4 Xeon cores at 2.1 GHz, 4 GB
RAM, and 15k disks.
> Increasing # ephemeral ports
I do not understand; could you please explain in more detail.
> Adjusting ulimit
ok
> 3 tuning specific to your workload.
> UDP and TCP buffer sizes - should match the BDP
please specify a little more.
> NIC tuning - IRQ coalescing, 10G-specific tuning; see the CDN, Mellanox, Red Hat, and HP suggestions for low-latency tuning.
>
> That?s a start.
Ok .
--
rickygm
http://gnuforever.homelinux.com
From mdounin at mdounin.ru Tue Apr 23 14:08:52 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 23 Apr 2019 17:08:52 +0300
Subject: nginx-1.16.0
Message-ID: <20190423140852.GH1877@mdounin.ru>
Changes with nginx 1.16.0 23 Apr 2019
*) 1.16.x stable branch.
--
Maxim Dounin
http://nginx.org/
From kworthington at gmail.com Tue Apr 23 14:34:35 2019
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 23 Apr 2019 10:34:35 -0400
Subject: [nginx-announce] nginx-1.16.0
In-Reply-To: <20190423140901.GI1877@mdounin.ru>
References: <20190423140901.GI1877@mdounin.ru>
Message-ID:
Hello Nginx users,
Now available: Nginx 1.16.0 for Windows
https://kevinworthington.com/nginxwin1160 (32-bit and 64-bit versions)
These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available here:
Twitter http://twitter.com/kworthington
Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/
On Tue, Apr 23, 2019 at 10:09 AM Maxim Dounin wrote:
> Changes with nginx 1.16.0 23 Apr
> 2019
>
> *) 1.16.x stable branch.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>
From nginx-forum at forum.nginx.org Wed Apr 24 07:14:08 2019
From: nginx-forum at forum.nginx.org (simanto604)
Date: Wed, 24 Apr 2019 03:14:08 -0400
Subject: Expect: 100-continue with the nginx proxy_pass
Message-ID: <1bce17ac3fc39cfc374a6bbe52650f0c.NginxMailingListEnglish@forum.nginx.org>
We are integrating a 3rd-party payment gateway which, on a successful
transaction, sends an [IPN][1] request. So we have built an API endpoint to
handle the request, which works fine in the dev environment.
Now the issue is that when we put nginx in front of the API server, the
request gets a 301 redirect. When we use a proxy tunnel like [ngrok][2],
which proxies the HTTP request via ngrok's servers, the request passes
through nginx to our API server successfully, but when the IPN POST request
is served directly to our nginx it gets a 301 redirect. To dig deeper we
intercepted the request with a Python [SimpleHttpServer][3], which led us to
a difference in the HTTP headers:
- without ngrok (direct to nginx)
> ['Host: 35.154.216.72:4040\r\n', 'Accept: */*\r\n', 'Content-Length:
> 1030\r\n', 'Content-Type: application/x-www-form-urlencoded\r\n',
> '**Expect: 100-continue**\r\n']
- With ngrok
> ['Host: 543fdf1c.ngrok.io\r\n', 'Accept: */*\r\n', 'Content-Length:
> 1013\r\n', 'Content-Type: application/x-www-form-urlencoded\r\n',
> 'X-Forwarded-For: 2a01:4f8:192:22ec::2\r\n']
We are assuming the issue lies with the `Expect: 100-continue` header. So
the question is: is there any way to handle this with nginx? Or what is the
industry-standard solution for such a scenario?
FYI: Nginx conf:
location /api/ {
proxy_set_header Expect $http_expect;
client_body_in_file_only on;
proxy_pass_request_headers on;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Upgrade $http_upgrade;
#proxy_set_header Connection 'upgrade';
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_pass_header X-CSRFToken;
#proxy_pass_header csrftoken;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://kk-api;
}
IPN endpoint is `/api/v1/ipn`
[1]: https://en.wikipedia.org/wiki/Instant_payment_notification
[2]: http://www.ngrok.com
[3]: https://docs.python.org/2/library/simplehttpserver.html
For a better preview, the question is also posted on Stack Overflow:
https://stackoverflow.com/questions/55810543/nginx-handle-expect-100-continue-header
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283902,283902#msg-283902
From mdounin at mdounin.ru Wed Apr 24 13:07:48 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 24 Apr 2019 16:07:48 +0300
Subject: Expect: 100-continue with the nginx proxy_pass
In-Reply-To: <1bce17ac3fc39cfc374a6bbe52650f0c.NginxMailingListEnglish@forum.nginx.org>
References: <1bce17ac3fc39cfc374a6bbe52650f0c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190424130748.GO1877@mdounin.ru>
Hello!
On Wed, Apr 24, 2019 at 03:14:08AM -0400, simanto604 wrote:
> We are integrating a 3rd party payment gateway which on a successful
> transaction sends a [IPN][1] request. So we have built a API endpoint to
> handle the request which works fine in dev env.
>
> Now the issue is when we use nginx infront of the API Server it gets 301
> redirects. When we are using a proxy tunnel like [ngrok][2], which proxies
> the http request via ngrok servers, the request passes through nginx to our
> API server successfully but when the IPN POST request is directly served to
> our nginx it gets a 301 redirect. To dig deeper we intercept the request
> with a python [SimpleHttpServer][3] leading us to find the difference of
> http headers:
>
> - without ngrok (direct to nginx)
>
> > ['Host: 35.154.216.72:4040\r\n', 'Accept: */*\r\n', 'Content-Length:
> > 1030\r\n', 'Content-Type: application/x-www-form-urlencoded\r\n',
> > '**Expect: 100-continue**\r\n']
>
> - With ngrok
>
> > ['Host: 543fdf1c.ngrok.io\r\n', 'Accept: */*\r\n', 'Content-Length:
> > 1013\r\n', 'Content-Type: application/x-www-form-urlencoded\r\n',
> > 'X-Forwarded-For: 2a01:4f8:192:22ec::2\r\n']
>
> We are assuming the issue lies with the `Expect: 100-continue` headers. So
> the question is there any way to handle this with Nginx? or what is the
> industry standard solution for such scenerio?
Most likely, the reason is not "Expect: 100-continue", but the
Host header, which is also different.
> FYI: Nginx conf:
>
> location /api/ {
>
> proxy_set_header Expect $http_expect;
Note that this is simply wrong and likely to break things
completely whenever there is an Expect header in the request.
[...]
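[Editor's note: a minimal sketch of that location block with the problematic directive removed - nginx answers "Expect: 100-continue" to the client itself before forwarding the body, so the header should not be forced upstream. The upstream name kk-api and the header directives are taken from the original config.]

```nginx
location /api/ {
    # do not forward the client's Expect header; nginx handles the
    # 100-continue handshake with the client on its own
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://kk-api;
}
```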
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Thu Apr 25 13:16:09 2019
From: nginx-forum at forum.nginx.org (mrodrigom)
Date: Thu, 25 Apr 2019 09:16:09 -0400
Subject: Reverse proxy one domain
Message-ID:
Hi.
I have one URL for all my websites, applications, and so on.
Let's say it's system.mydomain.com. Most of my websites are Apache + PHP and
the applications are Tomcat. So far so good, no problem there.
My nginx config, which works for everything else:
server {
listen 80;
server_name system.mydomain.com;
....................
location /app1 {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name system.mydomain.com;
....................
location /app1 {
proxy_pass http://app1.mydomain.com;
}
}
upstream app1.mydomain.com {
server SERVER_IP_HERE:8081 fail_timeout=60;
}
OK, my problem is when I have, for example, Alfresco on the backend.
Alfresco responds on (fictional IP) "http://192.168.0.10/share".
I need to access my nginx reverse proxy (another machine in front of
Alfresco) at the URL "http://system.mydomain.com/alfresco", and it should
consume "http://192.168.0.10/share".
With my configuration above, the error is that when I access
"http://system.mydomain.com/alfresco" it consumes
"http://192.168.0.10/alfresco".
How can I configure this to work? Rewrite?
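[Editor's note: one common approach, sketched here and untested against the poster's setup - when proxy_pass is given a URI part, nginx replaces the matched location prefix with that URI, so no rewrite is needed.]

```nginx
location /alfresco/ {
    # /alfresco/foo is proxied as http://192.168.0.10/share/foo
    proxy_pass http://192.168.0.10/share/;
}
```

Note the trailing slashes on both the location and the proxy_pass URI; without the URI part, nginx would pass the original /alfresco path through unchanged, which is exactly the behavior described above.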
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283919,283919#msg-283919
From julian at jlbprof.com Thu Apr 25 19:27:36 2019
From: julian at jlbprof.com (Julian Brown)
Date: Thu, 25 Apr 2019 14:27:36 -0500
Subject: Weird problem cannot standup nginx on 443 ipv4
Message-ID:
Sorry this is a bit long:
On Debian Stretch 9.8, fresh install. I want to set up nginx as a load
balancer to just one node at this time, just to play with it and understand
it.
I installed the apt package nginx-full, which I assume includes everything.
So I slightly modified nginx.conf: I removed the part about sites-available
and only included the one loadbalance.conf.
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json
application/javascript text/xml application/xml application/xm
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/loadbalance.conf;
}
Here is
/etc/nginx/conf.d/loadbalance.conf
upstream learngigs {
server 192.168.1.250;
}
server {
server_name learngigs.com www.learngigs.com
listen 443;
listen [::]:443;
ssl on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
ssl_certificate /etc/letsencrypt/live/learngigs.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/learngigs.com/privkey.pem;
access_log /var/log/nginx/loadbalance.access.log;
error_log /var/log/nginx/loadbalance.error.log debug;
location / {
proxy_pass http://learngigs/;
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://learngigs.com;
}
root at loadbalance01:/etc/nginx# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
As you can see there are no syntax errors.
root at loadbalance01:/etc/nginx# netstat -anop | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
1322/nginx: master off (0.00/0/0)
tcp6 0 0 :::80 :::* LISTEN
1322/nginx: master off (0.00/0/0)
tcp6 0 0 :::443 :::* LISTEN
1322/nginx: master off (0.00/0/0)
unix 3 [ ] STREAM CONNECTED 20723 1322/nginx:
master
unix 3 [ ] STREAM CONNECTED 20724 1322/nginx:
master
From this you can see it will not bind to 0.0.0.0:443; it was able to and
did bind for 80, and for 443 on IPv6, but not IPv4.
There is nothing sitting on 443:
root at loadbalance01:/etc/nginx# netstat -anop | grep 443
tcp6 0 0 :::443 :::* LISTEN
1322/nginx: master off (0.00/0/0)
So there is no bind error.
Looking at the logs:
root at loadbalance01:/var/log/nginx# ls -ld *
-rw-r--r-- 1 root root 0 Apr 25 14:20 access.log
-rw-r--r-- 1 root root 265 Apr 25 14:20 error.log
-rw-r--r-- 1 root root 0 Apr 25 14:20 loadbalance.access.log
-rw-r--r-- 1 root root 78 Apr 25 14:20 loadbalance.error.log
As you can see it created loadbalance.error.log, so it understood my config
for that.
root at loadbalance01:/var/log/nginx# cat loadbalance.error.log
2019/04/25 14:20:09 [debug] 1368#1368: epoll add event: fd:8 op:1
ev:00002001
root at loadbalance01:/var/log/nginx# cat error.log
2019/04/25 14:20:09 [info] 1363#1363: Using 32768KiB of shared memory for
nchan in /etc/nginx/nginx.conf:63
2019/04/25 14:20:09 [debug] 1368#1368: epoll add event: fd:9 op:1
ev:00002001
2019/04/25 14:20:09 [debug] 1368#1368: epoll add event: fd:10 op:1
ev:00002001
And there is nothing interesting in the logs.
I put this on Server Fault and someone suggested that listening on a port on
IPv6 would also work for IPv4, but if I do a telnet myip 443 from another
server the connection is refused.
There is nothing of note in syslog:
Apr 25 14:20:03 loadbalance01 systemd[1]: Stopping A high performance web
server and a reverse proxy server...
Apr 25 14:20:03 loadbalance01 systemd[1]: Stopped A high performance web
server and a reverse proxy server.
Apr 25 14:20:09 loadbalance01 systemd[1]: Starting A high performance web
server and a reverse proxy server...
Apr 25 14:20:09 loadbalance01 systemd[1]: nginx.service: Failed to read PID
from file /run/nginx.pid: Invalid argument
Apr 25 14:20:09 loadbalance01 systemd[1]: Started A high performance web
server and a reverse proxy server.
I tried to strace it, and it does not even try to bind to 443 on IPv4; it
is almost as if it were compiled to ignore port 443 on IPv4.
Can someone help me?
Thank you
Julian Brown
From rainer at ultra-secure.de Thu Apr 25 19:37:14 2019
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Thu, 25 Apr 2019 21:37:14 +0200
Subject: Weird problem cannot standup nginx on 443 ipv4
In-Reply-To:
References:
Message-ID:
> On 25.04.2019 at 21:27, Julian Brown wrote:
>
> listen 443;
> listen [::]:443;
You most certainly want
listen 443 ssl
or
listen 443 ssl http2
Not sure if it solves your problem.
From julian at jlbprof.com Thu Apr 25 20:24:48 2019
From: julian at jlbprof.com (Julian Brown)
Date: Thu, 25 Apr 2019 15:24:48 -0500
Subject: Weird problem cannot standup nginx on 443 ipv4
In-Reply-To:
References:
Message-ID:
I just tried both (removed the IPv6 entry and tried both of the above).
Now nothing is on 443, not even IPv6 (which is OK).
This is totally weird.
Thank you
Julian
On Thu, Apr 25, 2019 at 2:37 PM Rainer Duffner
wrote:
>
>
> Am 25.04.2019 um 21:27 schrieb Julian Brown :
>
> listen 443;
> listen [::]:443;
>
>
>
>
> You most certainly want
>
>
> listen 443 ssl
> or
> listen 443 ssl http2
>
>
>
> Not sure if it solves your problem.
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From nginx-forum at forum.nginx.org Thu Apr 25 23:21:59 2019
From: nginx-forum at forum.nginx.org (OiledAmoeba)
Date: Thu, 25 Apr 2019 19:21:59 -0400
Subject: Weird problem cannot standup nginx on 443 ipv4
In-Reply-To:
References:
Message-ID: <2463c88907d57e062ede28bd6bd92934.NginxMailingListEnglish@forum.nginx.org>
25. April 2019 21:27, "Julian Brown" wrote:
> listen 443;
> listen [::]:443;
I'm lazy, so I used "listen [::]:443 ssl http2 ipv6only=off" instead of two
listen directives. Maybe you want to try this; nginx must then use both IPv4
and IPv6 because of the ipv6only=off parameter.
If I try learngigs.com my browser takes me to an IPv4 SSL site (so it looks
like it works). If I try http://learngigs.com:80/ my browser searches
forever...
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283924,283928#msg-283928
From julian at jlbprof.com Fri Apr 26 01:08:47 2019
From: julian at jlbprof.com (Julian Brown)
Date: Thu, 25 Apr 2019 20:08:47 -0500
Subject: Weird problem cannot standup nginx on 443 ipv4
In-Reply-To: <2463c88907d57e062ede28bd6bd92934.NginxMailingListEnglish@forum.nginx.org>
References:
<2463c88907d57e062ede28bd6bd92934.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
I finally figured it out, and I cannot believe it passed a syntax checker.
This is what I had:
server {
server_name learngigs.com www.learngigs.com
listen 443;
listen [::]:443;
I did not have a semicolon after the server_name directive. The syntax
checker said it was fine, so I do not know what it was trying to do.
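[Editor's note: for completeness, the corrected fragment, with the missing semicolon restored and the ssl parameter that was suggested elsewhere in the thread:]

```nginx
server {
    server_name learngigs.com www.learngigs.com;

    listen 443 ssl;
    listen [::]:443 ssl;
```

Without the semicolon, "listen", "443;", "listen" and "[::]:443;" were parsed as four extra server_name arguments, which is syntactically valid, hence no error from nginx -t.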
Anyway thank you all.
Julian
On Thu, Apr 25, 2019 at 6:21 PM OiledAmoeba
wrote:
> 25. April 2019 21:27, "Julian Brown" wrote:
>
> > listen 443;
> > listen [::]:443;
>
> I'm lazy, so I used "listen [::]:443 ssl http2 ipv6only=off" instead of two
> listen-directives. Maybe you want to try this, so nginx must use IPv4 and
> IPv6 because of the ipv6only=off directive.
>
> If I try learngigs.com my browser points me to an IPv4-SSL-site (so it
> looks
> like it works). If I try http://learngigs.com:80/ my browser is searching
> forever...
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,283924,283928#msg-283928
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
From francis at daoine.org Fri Apr 26 15:11:59 2019
From: francis at daoine.org (Francis Daly)
Date: Fri, 26 Apr 2019 16:11:59 +0100
Subject: Weird problem cannot standup nginx on 443 ipv4
In-Reply-To:
References:
<2463c88907d57e062ede28bd6bd92934.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190426151159.jnqrgowfsey7vb32@daoine.org>
On Thu, Apr 25, 2019 at 08:08:47PM -0500, Julian Brown wrote:
Hi there,
Well done in spotting that -- it has come up before, but obviously wasn't
something that someone noticed quickly enough this time.
> I finally figured it out, and I cannot believe it passed a syntax checker.
It passed a syntax check because it is syntactically valid.
> server {
> server_name learngigs.com www.learngigs.com
>
> listen 443;
> listen [::]:443;
It is inconvenient in this case; but "server_name" accepts a list
of whitespace-separated arguments. Your "server_name" directive has
four arguments.
It's not what you intended, but the computer (in general) does not care
what you intended; it cares what you wrote.
> I did not have a semi colon after the server name directive. The syntax
> checker said it was fine, so I do not know what it was trying to do.
I suspect that this won't be "fixed", because the amount of special-casing
required in the code to handle it is probably not worth the effort to
anyone to write.
For example: a week ago, you would have liked if someone had previously
come up with a reliable and obviously-documented way that this specific
problem could be auto-avoided or -alerted. Today, you probably don't
need that work done, because you will remember to check for semi-colon
if you ever see the same problem again.
Great that you found and fixed the problem in the config!
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Fri Apr 26 15:26:36 2019
From: francis at daoine.org (Francis Daly)
Date: Fri, 26 Apr 2019 16:26:36 +0100
Subject: what does "deny 0.0.0.1;" do
In-Reply-To: <3868a393f7d5e77f33674b0ac1bfb8d9.NginxMailingListEnglish@forum.nginx.org>
References: <3868a393f7d5e77f33674b0ac1bfb8d9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190426152636.jwfzcakcjrqdntbc@daoine.org>
On Fri, Apr 19, 2019 at 05:24:14AM -0400, allenhe wrote:
Hi there,
> I found this is valid, and want to know what scenario it's used for.
>
> deny 0.0.0.1;
http://nginx.org/r/deny
Are you asking what the directive is for; or are you asking why an
argument is accepted; or are you asking why this particular argument
is accepted?
I suspect that the answer to the last is "it looks like an IP address;
and if the administrator wants to configure that, why should nginx try
to stop them?".
I also suspect that hoping that nginx will interpret that address in
accordance with an interpretation of RFC 6890 (or RFC 1122) is ambitious,
unless you can see that in the code.
There are a bunch of "reserved" IP addresses. nginx probably doesn't care
about them. You can set your network up any way you want, and nginx will
(in general) believe what you put in your config.
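For reference, the usual shape of the directive (hypothetical addresses; allow/deny rules are checked in order and the first match wins):

```nginx
location /admin/ {
    allow 192.0.2.0/24;  # hypothetical trusted network
    deny  all;           # everyone else gets 403
}
```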
Cheers,
f
--
Francis Daly francis at daoine.org
From julian at jlbprof.com Fri Apr 26 15:59:48 2019
From: julian at jlbprof.com (Julian Brown)
Date: Fri, 26 Apr 2019 10:59:48 -0500
Subject: Weird problem cannot standup nginx on 443 ipv4
In-Reply-To: <20190426151159.jnqrgowfsey7vb32@daoine.org>
References:
<2463c88907d57e062ede28bd6bd92934.NginxMailingListEnglish@forum.nginx.org>
<20190426151159.jnqrgowfsey7vb32@daoine.org>
Message-ID:
I guess that is an unfortunate accident in this case. But man, it was
frustrating; I even had some co-workers who are nginx experts, and they
missed it too.
It is certainly embarrassing.
Thanx for your reply.
Julian
On Fri, Apr 26, 2019 at 10:11 AM Francis Daly wrote:
> On Thu, Apr 25, 2019 at 08:08:47PM -0500, Julian Brown wrote:
>
> Hi there,
>
> Well done in spotting that -- it has come up before, but obviously wasn't
> something that someone noticed quickly enough this time.
>
> > I finally figured it out, and I cannot believe it passed a syntax
> checker.
>
> It passed a syntax check because it is syntactically valid.
>
> > server {
> > server_name learngigs.com www.learngigs.com
> >
> > listen 443;
> > listen [::]:443;
>
> It is inconvenient in this case; but "server_name" accepts a list
> of whitespace-separated arguments. Your "server_name" directive has
> four arguments.
>
> It's not what you intended, but the computer (in general) does not care
> what you intended; it cares what you wrote.
>
> > I did not have a semi colon after the server name directive. The syntax
> > checker said it was fine, so I do not know what it was trying to do.
>
> I suspect that this won't be "fixed", because the amount of special-casing
> required in the code to handle it is probably not worth the effort to
> anyone to write.
>
> For example: a week ago, you would have liked if someone had previously
> come up with a reliable and obviously-documented way that this specific
> problem could be auto-avoided or -alerted. Today, you probably don't
> need that work done, because you will remember to check for semi-colon
> if you ever see the same problem again.
>
> Great that you found and fixed the problem in the config!
>
> Cheers,
>
> f
> --
> Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Fri Apr 26 21:23:51 2019
From: nginx-forum at forum.nginx.org (ayma)
Date: Fri, 26 Apr 2019 17:23:51 -0400
Subject: etag missing when doing a proxy_pass
Message-ID: <50637279283b8b00f822b721eef35c17.NginxMailingListEnglish@forum.nginx.org>
Hi
I'm trying to use ETags with nginx when looking at html data (Content-Type:
text/html). I don't see an issue when the content-type comes back as
Content-Type: text/plain.
To reduce noise, I tried the following: I created an nginx conf file on my
local host that would serve up an html page and set the ETag header inside:
server {
    listen 3000;

    location / {
        etag off;
        proxy_http_version 1.1;
        root /var/www/nginx/default;
        more_set_headers "testEtag: adfaa-dfsdfasdf";
        more_set_headers "ETag: adfaa-dfsdfasdf";
    }
}
I tried with etag on and the resulting curl did not generate an ETag header
so I tried with etag off.
Next to simulate the proxy_pass scenario I created another nginx conf file
with the following:
server {
    listen 5000;

    location /echoheaders {
        etag off;
        #more_set_headers ETag:$upstream_http_etag;
        more_set_headers x-my-e-tag:$upstream_http_etag;
        more_set_headers "my-test-etag:adfasdfadfadsf";
        more_set_headers "ETag:234adfl-affai9f";
        proxy_pass http://127.0.0.1:3000/;
    }
}
Then I did the following curl cmds:
root at public-crd0edf9d103b74bc088058b9011bc6f59-alb1-6d5979ccf5-jn5qs:/etc/nginx/conf.d#
curl -I http://127.0.0.1:3000/
HTTP/1.1 200 OK
Date: Fri, 26 Apr 2019 21:18:29 GMT
Content-Type: text/html
Connection: keep-alive
testEtag: adfaa-dfsdfasdf
ETag: adfaa-dfsdfasdf
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
Which shows an ETag header
curl -I http://127.0.0.1:5000/echoheaders
HTTP/1.1 200 OK
Date: Fri, 26 Apr 2019 21:18:21 GMT
Content-Type: text/html
Connection: keep-alive
testEtag: adfaa-dfsdfasdf
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
x-my-e-tag: adfaa-dfsdfasdf
my-test-etag: adfasdfadfadsf
This request goes through a proxy_pass and does not show an ETag header.
Removing the etag directive, or setting it to "etag on;", in the config file
with the proxy_pass does not seem to have any effect.
Wondering if there is some known behavior with ETag and proxy_pass? Also
wondering what nginx is doing so that I can't just set the ETag header
manually. It seems like nginx is the one removing the header?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283945,283945#msg-283945
From al-nginx at none.at Sat Apr 27 16:22:42 2019
From: al-nginx at none.at (Aleksandar Lazic)
Date: Sat, 27 Apr 2019 18:22:42 +0200
Subject: Reverse proxy one domain
In-Reply-To:
References:
Message-ID:
Hi.
Am 25.04.2019 um 15:16 schrieb mrodrigom:
> Hi.
> I have one URL for all my websites, applications and so.
> Let's say it's system.mydomain.com. Most of my websites is Apache + PHP and
> the applications is Tomcat. So far so good, no problem there.
>
> My nginx config that's working for everything else:
>
> server {
> listen 80;
> server_name system.mydomain.com;
> ....................
> location /app1 {
> return 301 https://$host$request_uri;
> }
> }
> server {
> listen 443 ssl;
> server_name system.mydomain.com;
> ....................
> location /app1 {
> proxy_pass http://app1.mydomain.com;
> }
> }
> upstream app1.mydomain.com {
> server SERVER_IP_HERE:8081 fail_timeout=60;
> }
>
> Ok, my problem is when I have Alfresco for example on the backend. Alfresco
> responds on (fictional IP) "http://192.168.0.10/share".
> I need to acess my nginx reverse proxy (another machine in front of
> Alfresco) on the URL: "http://system.mydomain.com/alfresco" and it should
> consume "http://192.168.0.10/share".
> In my configuration above, the error is that when I acess
> "http://system.mydomain.com/alfresco" it will consume
> "http://192.168.0.10/alfresco".
> How can I configure this to work? Rewrite?
Looks like you would like to have ProxyPassReverse, which does not exist
according to this wiki. Maybe it has changed in the current nginx version.
https://www.nginx.com/resources/wiki/start/topics/examples/likeapache/
Why not configure Alfresco for a reverse proxy setup?
http://docs.alfresco.com/6.1/tasks/configure-ssl-prod.html
In this post are some answers which links to a possible nginx solution.
https://community.alfresco.com/thread/232617-configure-reverse-proxy-for-alfresco-community-52-windows
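An untested sketch based on the addresses in the question: when both the location prefix and the proxy_pass URI end in a slash, nginx replaces the matched prefix, so /alfresco/... is forwarded as /share/...:

```nginx
location /alfresco/ {
    # Because proxy_pass carries a URI part ("/share/"), nginx swaps the
    # matched "/alfresco/" prefix for "/share/" before forwarding.
    proxy_pass http://192.168.0.10/share/;
}
```

Note that Location headers in redirects coming back can be adjusted with proxy_redirect, but links inside HTML bodies cannot, which is one reason configuring Alfresco itself for the proxy is usually recommended.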
Regards
Aleks
From tom at sleepyvalley.net Sat Apr 27 20:22:09 2019
From: tom at sleepyvalley.net (Tom Strike)
Date: Sat, 27 Apr 2019 15:22:09 -0500
Subject: I need help with 403 forbidden
Message-ID: <89fd0501-8a8f-f468-fcbb-3ecffc7bc215@sleepyvalley.net>
I have just set up nginx on my server to stream video to it and retrieve the
stream in an http web page. I have tried everything that I can think of and all
the recommendations I could find by Googling, but nothing gives me http
access to the video stream that is active on the server. I keep adding
things I run into to my nginx.conf file, and now it is so bloated it's
ridiculous. The thing that concerns me with this issue is that I am
using Apache2 on this same server running php7 at the standard port 80.
I moved nginx to port 8088. I've tried running nginx as "user apache",
thinking that might help with a possibility that php is running under
the apache ownership, but that didn't help. The following is my config file:
#user  nginx;
user apache;
worker_processes  auto;
events {
    worker_connections  1024;
}
# We need to set up an RTMP server to stream video from client devices
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        ping 30s;
        notify_method get;
        allow play all;
        # RTMP handler our clients connect to for live streaming; it runs on port 1935.
        # It converts the stream to HLS and stores it on our server
        application pb1ag {
            live on;
            hls on;
            hls_path /var/www/nginx/pb1ag/live;
            hls_nested on;  # create a new folder for each stream
            record_notify off;
            record_path /var/www/nginx/pb1ag/videos;
            record off;
            record_unique off;
        }
        application vod {
            play /var/www/nginx/pb1ag/videos;
        }
    }
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    sendfile        on;
    tcp_nopush      on;
    keepalive_timeout  65;
    gzip  on;
    server_tokens off;
    server {
        listen 8088;
        server_name  www.pb1ag.us;
        root /var/www/nginx/pb1ag;
        index  index.m3u8;
        location / {
            root   html;
            index  index.m3u8 index.html index.htm;
            autoindex on;
            autoindex_exact_size off;
        }
        # the http end point our web based users connect to see the live stream
        location /live {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            index  index.m3u8;
            alias /var/www/nginx/pb1ag/live;
            add_header Cache-Control no-cache;
#           add_header 'Access-Control-Allow-Origin' '*';
        }
    }
}
I am working against a deadline and this is the last problem to solve,
and I am getting desperate. Please help, anyone?
Thanks.
From nginx-forum at forum.nginx.org Sat Apr 27 21:26:02 2019
From: nginx-forum at forum.nginx.org (OiledAmoeba)
Date: Sat, 27 Apr 2019 17:26:02 -0400
Subject: I need help with 403 forbidden
In-Reply-To: <89fd0501-8a8f-f468-fcbb-3ecffc7bc215@sleepyvalley.net>
References: <89fd0501-8a8f-f468-fcbb-3ecffc7bc215@sleepyvalley.net>
Message-ID: <17b59cae05d9c90dcb8f701419c238e5.NginxMailingListEnglish@forum.nginx.org>
Hell, I "like" Thunderbird for that... Sorry for the full quote, but my
first reply was sent directly to Tom...
Tom, I asked for the domain. "www" is a sub-domain of pb1ag.us.
(Don't be mad, that's my kind of humor..) So yes, I tried www.pb1ag.us
on port 8088. Please check the presence of your playlist and its location.
For the sake of completeness: Port 80 gives me an answer from Apache: This is not the Web Page you
are looking for!
Am 27.04.2019 um 23:03 schrieb Tom Strike:
> No, you need www.pb1ag.us for dns to work.
>
>
> On 4/27/19 4:02 PM, Florian Ruhnke wrote:
>> There is no 403-forbidden for me.
>>
>> Is pb1ag.us the domain we are talking about?
>>
>> # curl www.pb1ag.us:8088 -v
>> * Trying 144.217.11.151...
>> * TCP_NODELAY set
>> * Connected to www.pb1ag.us (144.217.11.151) port 8088 (#0)
>>> GET / HTTP/1.1
>>> Host: www.pb1ag.us:8088
>>> User-Agent: curl/7.64.1
>>> Accept: */*
>>>
>> < HTTP/1.1 200 OK
>> < Server: nginx
>> < Date: Sat, 27 Apr 2019 20:47:07 GMT
>> < Content-Type: text/html
>>
>> So nginx is delivering index.html instead of your specified m3u-file. Is
>> it present in /var/www/nginx/pb1ag? nginx is trying the index-files in
>> the order they were listed in the config. From left to right. The first
>> found file will be served.
>>
>> By the way: There are no php-workers in the part of the config that you
>> presented to us. There is no reason to run nginx as a user different to
>> nginx.
>>
>> Greetz
>> Florian
>>
>> Am 27.04.2019 um 22:22 schrieb Tom Strike:
>>> I have just set up nginx on my server to stream video to and retrieve
>>> it in an http web page. I have tried everything that I can think of
>>> and all the recommendations i could find by Googling but nothing gives
>>> me http access th the video stream that is active on the server. I
>>> keep adding things I run into to my nginx.conf file and now it is so
>>> bloated it's ridiculous. The thing that concerns me with this issue is
>>> that I am using Apache2 on this same server running php7 at the
>>> standard port 80. I moved Nginx to port 8088. I've tried running Nginx
>>> as "user apache" thinking that might help with a possibility that php
>>> is running under the apache ownership but that didn't help. The
>>> following is my config file:
>>>
>>> #user nginx;
>>> user apache;
>>> worker_processes auto;
>>> events {
>>> worker_connections 1024;
>>> }
>>> # We need to setup an rmtp server to stream video from client devices
>>> rtmp {
>>> server {
>>> listen 1935;
>>> chunk_size 4096;
>>> ping 30s;
>>> notify_method get;
>>> allow play all;
>>> # rmtp handler our clients connect to for live streaming, it
>>> runs on port 1935.
>>> # It converts the stream to HLS and stores it on our server
>>> application pb1ag {
>>> live on;
>>> hls on;
>>> hls_path /var/www/nginx/pb1ag/live;
>>> hls_nested on; # create a new folder for each stream
>>> record_notify off;
>>> record_path /var/www/nginx/pb1ag/videos;
>>> record off;
>>> record_unique off;
>>> }
>>>
>>> application vod {
>>> play /var/www/nginx/pb1ag/videos;
>>> }
>>> }
>>> }
>>>
>>> http {
>>> include mime.types;
>>> default_type application/octet-stream;
>>> log_format main '$remote_addr - $remote_user [$time_local]
>>> "$request" '
>>> '$status $body_bytes_sent "$http_referer" '
>>> '"$http_user_agent" "$http_x_forwarded_for"';
>>> sendfile on;
>>> tcp_nopush on;
>>> keepalive_timeout 65;
>>> gzip on;
>>> server_tokens off;
>>> server {
>>> listen 8088;
>>> server_name www.pb1ag.us;
>>> root /var/www/nginx/pb1ag;
>>> index index.m3u8;
>>> location / {
>>> root html;
>>> index index.m3u8 index.html index.htm;
>>> autoindex on;
>>> autoindex_exact_size off;
>>> }
>>> # the http end point our web based users connect to see the
>>> live stream
>>> location /live {
>>> types {
>>> application/vnd.apple.mpegurl m3u8;
>>> }
>>> index index.m3u8;
>>> alias /var/www/nginx/pb1ag/live;
>>> add_header Cache-Control no-cache;
>>> # add_header 'Access-Control-Allow-Origin' '*';
>>> }
>>> }
>>> }
>>>
>>> I am working against a deadline and this ius the last problem to solve
>>> and I am getting desperate. Please help, anyone?
>>>
>>> Thanks.
>>>
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283959,283960#msg-283960
From nginx-forum at forum.nginx.org Sat Apr 27 22:03:44 2019
From: nginx-forum at forum.nginx.org (OiledAmoeba)
Date: Sat, 27 Apr 2019 18:03:44 -0400
Subject: Mailman is giving me 554 5.7.1 because of using Mail-Relay
Message-ID:
Hi,
I have to use a mail relay because my mail server is behind a
dynamic IP address. I registered myself to this mailing list by mail.
But every mail I write to nginx at nginx.org is bounced with 554 5.7.1
because relaying changes the envelope-from header.
I think the mailing-list settings are too restrictive.
Admin: Any chance to change that?
Example from today:
> Reporting-MTA: smtp; s1-b0c7.socketlabs.email-od.com
> Original-Recipient: rfc822; nginx at nginx.org
> Action: failed
> Status: 5.7.1
> Diagnostic-Code: smtp; 554 5.7.1 : Recipient address
rejected: envelope-sender address is not listed as subscribed for the
mailing list. You are either not subscribed or From: address differ from
envelope address
> Last-Attempt-Date: Sat, 27 Apr 2019 17:23:43 -0400
Sent from forum at ruhnke.cloud, relayed via SocketLabs. The relay changed the
envelope-from to Return-Path @bounce.ruhnke.cloud to control
bounces.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,283961,283961#msg-283961
From francis at daoine.org Sun Apr 28 07:31:23 2019
From: francis at daoine.org (Francis Daly)
Date: Sun, 28 Apr 2019 08:31:23 +0100
Subject: I need help with 403 forbidden
In-Reply-To: <89fd0501-8a8f-f468-fcbb-3ecffc7bc215@sleepyvalley.net>
References: <89fd0501-8a8f-f468-fcbb-3ecffc7bc215@sleepyvalley.net>
Message-ID: <20190428073123.3gisl6nmcp6ndln3@daoine.org>
On Sat, Apr 27, 2019 at 03:22:09PM -0500, Tom Strike wrote:
Hi there,
> I have just set up nginx on my server to stream video to and retrieve it in
> an http web page.
> http {
> ??? server {
> ??????? location /live {
> ??????????? types {
> ??????????????? application/vnd.apple.mpegurl m3u8;
> ??????????? }
> ??????????? index? index.m3u8;
> ??????????? alias /var/www/nginx/pb1ag/live;
> ??????????? add_header Cache-Control no-cache;
When you request http://www.pb1ag.us:8088/live/ and get a http 403,
what does the nginx error log say?
And if you don't request /live/, what url do you request?
Is it useful to add "autoindex on;" here? That would reveal information
about your directory structure, if there is no index.m3u8 file.
Incidentally, the "index" and "alias" lines here do nothing new; their
effects are the same as what is shown in the "server"-level configuration.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From dung.trantien at axonactive.com Mon Apr 29 07:54:50 2019
From: dung.trantien at axonactive.com (Dung Tran Tien)
Date: Mon, 29 Apr 2019 07:54:50 +0000
Subject: Cannot enable CORS in nginx
Message-ID:
Hi everyone,
I'm using nginx 1.10.3 on Ubuntu 16.04. I tried to enable CORS exactly as in this configuration https://enable-cors.org/server_nginx.html, but I still got the error:
POST https://abc.com/api/21/job/8c1c66b9-a736-4d28-9868-c3f2c433b0f7/executions 403
(index):1 Access to XMLHttpRequest at 'https://abc.com/api/21/job/8c1c66b9-a736-4d28-9868-c3f2c433b0f7/executions' from origin 'https://bcd.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Please advise me how to fix the issue.
Thanks.
Dung Tran Tien
ICT Specialist
AXON ACTIVE VIETNAM Co. Ltd
www.axonactive.com
T +84.28.7109 1234, F +84.28.629 738 86, M +84 933 893 489
Ho Chi Minh Office:
Hai Au Building, 39B Truong Son, Ward 4, Tan Binh District, Ho Chi Minh City, Vietnam
106°39'51" East / 10°48'32" North
Da Nang Office:
PVcomBank Building, 30/4 Street, Hai Chau District, Da Nang, Vietnam
108°13'15" East / 16°2'27" North
Can Tho Office:
Toyota-NinhKieu Building, 57-59A Cach Mang Thang Tam Street, Can Tho, Vietnam
105°46'34" East / 10°2'57" North
San Francisco Office:
281 Ellis Str, San Francisco, CA 94102, United States
122°24'39" West / 37°47'6" North
Luzern Office:
Schlössli Schönegg, Wilhelmshöhe, Luzern 6003, Switzerland
8°17'52" East / 47°3'1" North
From mdounin at mdounin.ru Mon Apr 29 12:52:53 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 29 Apr 2019 15:52:53 +0300
Subject: Mailman is giving me 554 5.7.1 because of using Mail-Relay
In-Reply-To:
References:
Message-ID: <20190429125253.GT1877@mdounin.ru>
Hello!
On Sat, Apr 27, 2019 at 06:03:44PM -0400, OiledAmoeba wrote:
> I have to use a Mail-Relay because of the fact that my Mailserver ist behind
> a dynamic-IP-address. I registered myself to this mailinglist by mail.
> But every mail I'm writing to nginx at nginx.org is bounced with 554 5.7.1
> because of the fact that relaying is changing the envelope-from header.
> I think the mailinglist-settings are too restrictive.
> Admin: Any chance to change that?
I'm not an admin, but the answer is "highly unlikely". To post to
the mailing list, you have to be subscribed - because open mailing
lists are subject to lots of spam messages. And the only working
way to enforce this is to check envelope from addresses at SMTP
level. In the past, we've tried to use quarantine with manual
review of messages from non-subscribed addresses instead, but this
simply does not work.
Consider switching to a mail relay which does not try to
change your envelope from address.
--
Maxim Dounin
http://mdounin.ru/
From gfrankliu at gmail.com Mon Apr 29 17:08:33 2019
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 29 Apr 2019 10:08:33 -0700
Subject: client_max_body_size
Message-ID:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
says
Sets the maximum allowed size of the client request body, specified in the
"Content-Length" request header field.
Can I assume "client_max_body_size" will NOT have any effect if chunked
encoding is used?
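For reference, the directive itself is set like this (hedged: whether nginx also enforces the limit while reading a chunked body, rather than only via the Content-Length header, should be verified against your version's documentation and a quick test):

```nginx
server {
    # Requests whose declared body size exceeds this limit get a 413 response.
    client_max_body_size 8m;
}
```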
From dukedougal at gmail.com Tue Apr 30 23:14:11 2019
From: dukedougal at gmail.com (Duke Dougal)
Date: Wed, 1 May 2019 09:14:11 +1000
Subject: Cannot get secure link with expires to work
Message-ID:
Hello I've tried every possible way I can think of to make secure links
work with expires. I've tried different versions of nginx, I've tried on
Ubuntu, tried on Centos, tried generating the hash using openssl, tried
using Python. I've followed every tutorial I can find. So I must be doing
something really wrong.
I am trying to use the nginx secure link module
http://nginx.org/en/docs/http/ngx_http_secure_link_module.html
I want to make secure links using expires.
No matter what I try, I cannot get it to work when I try to use the expires
time.
It works fine when I do a simple secure link based purely on the link,
without also the expire time or the ip address.
Can anyone suggest what I am doing wrong? Or can anyone point me to
instructions that show every detail of how to do it and have been recently
tested?
thanks!
The command to generate the key:
ubuntu at ip-172-31-34-191:/var/www$ echo -n '2147483647/html/index.html secret' | openssl md5 -binary | openssl base64 | tr +/ -_ | tr -d =
FsRb_uu5NsagF0hA_Z-OQg
The command that fails:
ubuntu at ip-172-31-34-191:/var/www$ curl
http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQgexpires=2147483647
403 Forbidden
403 Forbidden
nginx/1.14.2
Here's the relevant part of the nginx conf file:
ubuntu at ip-172-31-34-191:/var/www$ sudo cat
/etc/nginx/sites-enabled/theapp_nginx.conf
...SNIP
location /html/ {
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri secret";
    if ($secure_link = "") {
        return 403;
    }
    if ($secure_link = "0") {
        return 410;
    }
    try_files $uri $uri/ =404;
}
...SNIP
Here's the nginx version info:
ubuntu at ip-172-31-34-191:/var/www$ nginx -V
nginx version: nginx/1.14.2
built with OpenSSL 1.1.0g 2 Nov 2017
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2
-fdebug-prefix-map=/build/nginx-x0ix7n/nginx-1.14.2=.
-fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time
-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro
-Wl,-z,now -fPIC' --prefix=/usr/share/nginx
--conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules
--http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-compat --with-debug
--with-pcre-jit --with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module --with-http_auth_request_module
--with-http_v2_module --with-http_dav_module --with-http_slice_module
--with-threads --with-http_addition_module --with-http_flv_module
--with-http_geoip_module=dynamic --with-http_gunzip_module
--with-http_gzip_static_module --with-http_image_filter_module=dynamic
--with-http_mp4_module --with-http_perl_module=dynamic
--with-http_random_index_module --with-http_secure_link_module
--with-http_sub_module --with-http_xslt_module=dynamic --with-mail=dynamic
--with-mail_ssl_module --with-stream=dynamic --with-stream_ssl_module
--with-stream_ssl_preread_module
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-headers-more-filter
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-auth-pam
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-cache-purge
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-dav-ext
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-ndk
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-echo
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-fancyindex
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/nchan
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-lua
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/rtmp
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-uploadprogress
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-upstream-fair
--add-dynamic-module=/build/nginx-x0ix7n/nginx-1.14.2/debian/modules/http-subs-filter
ubuntu at ip-172-31-34-191:/var/www$
From 201904-nginx at jslf.app Tue Apr 30 23:54:14 2019
From: 201904-nginx at jslf.app (Patrick)
Date: Wed, 1 May 2019 07:54:14 +0800
Subject: Cannot get secure link with expires to work
In-Reply-To:
References:
Message-ID: <20190430235414.GA9796@haller.ws>
On 2019-05-01 09:14, Duke Dougal wrote:
> ubuntu at ip-172-31-34-191:/var/www$ curl
> http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQgexpires=2147483647
should be:
curl http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg&expires=2147483647
i.e.
curl "http://127.0.0.1/html/index.html?md5=${md5}&expires=${expiry}"
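As a sanity check on the token itself (a sketch using the secret and URI from the post above), the hash can be regenerated in the shell; the result should be 22 URL-safe base64 characters with the '=' padding stripped:

```shell
# Recompute the token for: secure_link_md5 "$secure_link_expires$uri secret"
expires=2147483647
uri=/html/index.html
secret=secret

# md5 digest (16 bytes) -> base64 (24 chars, "==" padding) -> URL-safe, unpadded (22 chars)
token=$(printf '%s%s %s' "$expires" "$uri" "$secret" \
    | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=')

echo "$token"
```

The original post reports FsRb_uu5NsagF0hA_Z-OQg for these inputs; the value then goes into the md5= argument, with expires= carrying the same timestamp.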
Patrick