From nginx-forum at forum.nginx.org Mon Feb 1 08:11:57 2021
From: nginx-forum at forum.nginx.org (petrg)
Date: Mon, 01 Feb 2021 03:11:57 -0500
Subject: reverse proxy does not load webpage data
Message-ID: <4171a356fb45a1803b7d7c1eff8d2c3a.NginxMailingListEnglish@forum.nginx.org>
Hey,
I am quite new to nginx, so my problem might be at a rather basic level.
I am running an HTML website. This is not an officially hosted one, but
rather the internal service webpage of a device.
It is installed on a PC (PC-device) together with nginx, which serves all
the data needed for the webpage.
If I call the webpage from a connected PC (PC-browser) everything is working
fine.
But now I'd like to run a reverse proxy (PC-proxy) in between.
So the PC-browser does not see the PC-device anymore.
PC-device is linked to PC-proxy via 192.168.5.0.
PC-browser is linked to PC-proxy via 192.168.1.0.
When I call the webpage now, the index.html will be loaded via the proxy_pass
configuration on the PC-proxy (also nginx), but the subsequent resources such
as .js files and image files are not.
I tested a lot but I am stuck.
This structure of a reverse proxy looks very basic to me. But I can't get
it to run.
My configuration:
PC-device
/path/index.html
nginx.conf
location /images/
absolute_image_path
PC-proxy
nginx.conf
location /device/
proxy_pass 192.168.5.1/path
index.html
PC-browser
url: 192.168.1.1/device/index.html
=> index.html will be loaded correctly from PC-device, because of the
proxy-pass configuration
=> but the logo image, which is loaded by the index.html, will not be
found.
So, how does a reverse proxy load data for a webpage?
How should nginx (PC-proxy) know where to find the image file
/images/logo.jpg that is asked for by the webpage?
Or is it necessary that all data of the webpage be relative to the
starting point "192.168.1.1/device/index.html"?
I am not sure if my misunderstanding is related to the webpage structure or
to the reverse-proxy function.
Thanks for any help.
Sorry, I posted this problem already on the German list, but got no answer.
As this English list seems more crowded, I'd like to give it a second try.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290609,290609#msg-290609
From halaeiv at gmail.com Mon Feb 1 09:23:23 2021
From: halaeiv at gmail.com (Hamid Alaei V.)
Date: Mon, 1 Feb 2021 12:53:23 +0330
Subject: How to increase Amplify limits
Message-ID:
Hello,
I wanted to increase the limits on my Amplify panel. The panel says I have
to email amplify-support at .... But the mail seems to be blocked because I
received an "address not found" error:
The response from the remote server was:
554 5.7.1 : Recipient address rejected:
envelope-sender address is not listed as permitted for the request tracker.
You are either do not bind email to support contract or From: address
differ from envelope address
Does anyone know how to reach support? Is there any contract that lets me
receive support? What is the pricing? Is Amplify supported at all?
Thanks.
From maxim at nginx.com Mon Feb 1 09:30:14 2021
From: maxim at nginx.com (Maxim Konovalov)
Date: Mon, 1 Feb 2021 12:30:14 +0300
Subject: How to increase Amplify limits
In-Reply-To:
References:
Message-ID: <53a78a3f-e371-6455-5591-af270d5097dc@nginx.com>
Hi Hamid,
It looks like you are writing from an address different from your Amplify
email account.
Amplify is supported. However, it is in maintenance mode and it is not
possible to increase the server limit.
Maxim
On 01.02.2021 12:23, Hamid Alaei V. wrote:
> Hello,
> I wanted to increase limits on my Amplify panel. The panel says I have
> to email to amplify-support at .... But the mail seems to be blocked
> because I received an "address not found" error:
>
> The response from the remote server was:
>
> 554 5.7.1 : Recipient address rejected:
> envelope-sender address is not listed as permitted for the request
> tracker. You are either do not bind email to support contract or From:
> address differ from envelope address
>
>
> Does anyone know how to reach support? Is there any contract that lets
> me receive support? What is the pricing? Is Amplify supported at all?
>
> Thanks.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
--
Maxim Konovalov
From francis at daoine.org Mon Feb 1 10:53:12 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 1 Feb 2021 10:53:12 +0000
Subject: reverse proxy does not load webpage data
In-Reply-To: <4171a356fb45a1803b7d7c1eff8d2c3a.NginxMailingListEnglish@forum.nginx.org>
References: <4171a356fb45a1803b7d7c1eff8d2c3a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210201105312.GC6011@daoine.org>
On Mon, Feb 01, 2021 at 03:11:57AM -0500, petrg wrote:
Hi there,
> If I call the webpage from a connected PC (PC-browser) everything is working
> fine.
> But now I'd like to run a reverse proxy (PC-proxy) in between.
> So the PC-browser does not see the PC-device anymore.
> This structure of a reverse proxy looks very basic to me. But I can't get
> it to run.
>
> My configuration:
>
> PC-device
> /path/index.html
>
>
> nginx.conf
> location /images/
> absolute_image_path
>
> PC-proxy
> nginx.conf
> location /device/
> proxy_pass 192.168.5.1/path
> index.html
>
> PC-browser
> url: 192.168.1.1/device/index.html
>
> => index.html will be loaded correctly from PC-device, because of the
> proxy-pass configuration
> => but the logo image, which is loaded by the index.html, will not be
> found.
> So, how does a reverse proxy load data for a webpage?
There is no magic; it does what you configure it to do. The browser makes
a request to nginx; nginx handles that according to its config. Every
request is independent.
> How should nginx (PC-proxy) know where to find the image file
> /images/logo.jpg that is asked for by the webpage?
You need to configure your nginx to do what you want it to do.
PC-proxy needs to know what request to make to PC-Device, when PC-Browser
asks PC-Proxy for /images/logo.jpg.
If you look at the PC-Proxy nginx logs, they should show what that nginx
saw, and what it did.
> Or is it necessary that all data of the webpage be relative to the
> starting point "192.168.1.1/device/index.html"?
> I am not sure if my misunderstanding is related to the webpage structure or
> to the reverse-proxy function.
Possibly both? I will try to explain.
Your actual html page is on PC-device, in the file
/something/path/index.html.
It can include any or all of:
<img src="one.jpg">, <img src="/path/two.jpg">,
<img src="http://PC-device/path/three.jpg">, or
<img src="http://192.168.5.1/path/four.jpg"> to refer to things in the
same directory, or
<img src="/otherpath/five.jpg"> to refer to a thing in another directory.
When the browser talks to PC-Device directly and asks for
http://PC-device/path/index.html, it gets back the content, and then
it tries to make five new requests, for http://PC-device/path/one.jpg,
http://PC-device/path/two.jpg, http://PC-device/path/three.jpg,
http://192.168.5.1/path/four.jpg, and
http://PC-device/otherpath/five.jpg. All will probably work.
When the browser talks to PC-proxy and asks for
http://192.168.1.1/device/index.html it gets the same html back,
and it makes five new requests for http://192.168.1.1/device/one.jpg,
http://192.168.1.1/path/two.jpg, http://PC-device/path/three.jpg,
http://192.168.5.1/path/four.jpg, and
http://192.168.1.1/otherpath/five.jpg.
The first will work because of your nginx config; the second and fifth
will not work because you have not configured your nginx to make it
work; and the other two will (probably) not go to nginx on PC-proxy,
so no config there can help you.
In your case, you are doing something like "five.jpg" (your /images/logo.jpg
is a site-absolute link into another directory).
So: you must either change the PC-device layout and html so that
everything is below /somewhere/path *and* all links are relative (i.e. do
not start with http: or with /); or change your nginx config so that
you also have something like
location /otherpath/ { proxy_pass http://PC-Device/images/; }
location /path/ { proxy_pass http://PC-Device/path/; }
(Although that second one would usually be written
location /path/ { proxy_pass http://PC-Device; }
-- see the docs for proxy_pass at http://nginx.org/r/proxy_pass for details.)
And you will also need to change the PC-device html so that all links
are at least site-relative (i.e. do not start "http:"); or if the links
do use "http:", they use the name that the world should see (something
that resolves to PC-Proxy, not something that resolves to PC-Device).
In general:
* you reverse-proxy to something that you control
* the thing you are reverse-proxying needs to not fight being
reverse-proxied
* it is often simplest if you reverse-proxy a thing at the same point in
the url-hierarchy that it thinks it is installed at. That is: if your
content is all below http://PC-Device/path/, it is probably simplest
to reverse-proxy that at http://PC-Proxy/path/. It is not compulsory;
but it would have made "two.jpg" Just Work in the example above.
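As a concrete sketch of that last point (a minimal PC-Proxy server block
reusing the addresses from this thread; untested, adjust to your setup):

    server {
        listen 192.168.1.1:80;

        # Expose the device content at the same URL prefix it uses on
        # PC-Device, so relative and site-relative links keep working
        # without rewriting the html.
        location /path/ {
            proxy_pass http://192.168.5.1;
        }
    }

With that, a request for /path/two.jpg arriving at PC-Proxy is passed to
PC-Device unchanged.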
Good luck with it,
f
--
Francis Daly francis at daoine.org
From arut at nginx.com Mon Feb 1 15:07:07 2021
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 1 Feb 2021 18:07:07 +0300
Subject: HTTP/3 with Firefox and forms
In-Reply-To:
References:
Message-ID:
Hi Ryan,
We have committed the patch:
https://hg.nginx.org/nginx-quic/rev/6bd8ed493b85
The issue should be fixed now. Please report back if it's not.
--
Roman Arutyunyan
arut at nginx.com
> On 30 Jan 2021, at 18:40, Roman Arutyunyan wrote:
>
> Hi Ryan,
>
> We have found a problem with POSTing request body and already have a patch
> that changes body parsing and fixes the issue. It will be committed after internal review.
> Hopefully it's the same issue. Until then you can just check out older code.
>
> --
> Roman Arutyunyan
> arut at nginx.com
>
>> On 30 Jan 2021, at 18:00, Ryan Gould wrote:
>>
>> no, i am not seeing a thing in the logs. i looked
>> in the php error logs and the nginx error logs, and
>> even the non-error logs. the page returns immediately
>> after the POST is attempted. it is not spending any
>> time debating or processing or round-tripping.
>>
>> i can also verify the most recently added patch
>> did not fix the problem (not that it was supposed to):
>> https://hg.nginx.org/nginx-quic/rev/dbe33ef9cd9a
>>
>>> Date: Wed, 27 Jan 2021 20:25:05 +0300
>>> From: Roman Arutyunyan
>>> To: nginx at nginx.org
>>> Subject: Re: HTTP/3 with Firefox and forms
>>> Message-ID: <64E274C9-A403-4F5C-AA02-30FBA2FEA49D at nginx.com >
>>> Content-Type: text/plain; charset="utf-8"
>>>
>>> Hi Ryan,
>>>
>>> Thanks for reporting this.
>>>
>>> Do you observe any errors in nginx error log?
>>>
>>> --
>>> Roman Arutyunyan
>>> arut at nginx.com
>>>
>>> > On 27 Jan 2021, at 19:55, Ryan Gould wrote:
>>> >
>>> > hello all you amazing developers,
>>> >
>>> > i check https://hg.nginx.org/nginx-quic every day for new updates being
>>> > posted. on monday (Jan 25 2021) i noticed these five new updates:
>>> >
>>> > https://hg.nginx.org/nginx-quic/rev/6422455c92b4
>>> > https://hg.nginx.org/nginx-quic/rev/916a2e1d6617
>>> > https://hg.nginx.org/nginx-quic/rev/cb8185bd0507
>>> > https://hg.nginx.org/nginx-quic/rev/58acdba9b3b2
>>> > https://hg.nginx.org/nginx-quic/rev/e1eb7f4ca9f1
>>> >
>>> > the latest build seems to have a problem with submitting forms and the
>>> > latest production and developer versions of Firefox. i am not having
>>> > the same problem with Edge or Chrome.
>>> >
>>> > my backend is PHP 7.3.26 on a Debian 10.7. it doesn't actually do any
>>> > POSTing in Firefox. php is not getting any data at all. these forms
>>> > are running code that's been untouched for five years or so.
>>> >
>>> > reverting to my Jan 11 2021 build of nginx resolves the problem for
>>> > forms and Firefox.
>>> >
>>> > this is probably a problem with Mozilla, but if you have any fixes...
>>> >
>>> > thank you for your incredible work.
From nginx-forum at forum.nginx.org Tue Feb 2 08:43:27 2021
From: nginx-forum at forum.nginx.org (petrg)
Date: Tue, 02 Feb 2021 03:43:27 -0500
Subject: reverse proxy does not load webpage data
In-Reply-To: <20210201105312.GC6011@daoine.org>
References: <20210201105312.GC6011@daoine.org>
Message-ID: <4e0b51d9e7d84159e349303c5b5a9452.NginxMailingListEnglish@forum.nginx.org>
Hey Francis,
thanks a lot for your extensive answer.
Now I understand much better.
But let me discuss one misty point.
> There is no magic; it does what you configure it to do. The browser makes
> a request to nginx; nginx handles that according to its config. Every
> request is independent.
>
> > How should nginx (PC-proxy) know where to find the image file
> > /images/logo.jpg that is asked for by the webpage?
>
> You need to configure your nginx to do what you want it to do.
My hope was that the proxy_pass directive creates not magic but some
"context", wrapping the whole handling of the webpage into some linked
"context", so a bit more than just a straight location directive does. And
so all the following requests from that webpage live in that "context".
So when you define a proxy_pass to 192.168.5.1/path, all the following
requests from that webpage, starting with 192.168.5.1/path/index.html,
should be directed to at least 192.168.5.1.
If every request is independent, you must know the internal structure of a
webpage when you want to reverse-proxy it. Is that true?
My feeling is that you should be able to reverse-proxy a webpage without
knowing anything about how its data is loaded internally.
So my hope was that the proxy directive wraps the webpage and all the
internal requests and knows where to direct the related requests to.
> When the browser talks to PC-proxy and asks for
> http://192.168.1.1/device/index.html it gets the same html back,
> and it makes five new requests for
> http://192.168.1.1/device/one.jpg,
> http://192.168.1.1/path/two.jpg,
> ...
When I call 192.168.1.1/device/index.html, I hoped that the following
non-absolute requests would be proxied to
http://192.168.5.1/path/one.jpg,
http://192.168.5.1/path/two.jpg, ...
as the proxy knows that one.jpg belongs to
http://192.168.1.1/device/index.html, which is proxied to
http://192.168.5.1/path/.
And so it can direct the internal request for one.jpg to
http://192.168.5.1/path/one.jpg.
Is that a "slightly" wrong understanding of the proxy functionality, and
too much magic?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290609,290623#msg-290623
From francis at daoine.org Wed Feb 3 18:10:52 2021
From: francis at daoine.org (Francis Daly)
Date: Wed, 3 Feb 2021 18:10:52 +0000
Subject: reverse proxy does not load webpage data
In-Reply-To:
References: <4171a356fb45a1803b7d7c1eff8d2c3a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210203181052.GD6011@daoine.org>
On Tue, Feb 02, 2021 at 03:43:32AM -0500, petrg wrote:
Hi there,
> > There is no magic; it does what you configure it to do. The browser makes
> > a request to nginx; nginx handles that according to its config. Every
> > request is independent.
> My hope was that the proxy_pass directive creates not magic but some
> "context", wrapping the whole handling of the webpage into some linked
> "context", so a bit more than just a straight location directive does. And
HTTP is stateless. Every HTTP request is independent of every other one.
As far as nginx is concerned the "context" that you want does not
exist. The "webpage" that you want does not exist. The browser makes
one request for index.html. The browser then might, or might not,
make requests for images. Those requests might, or might not, get to
this nginx.
Only the browser knows that those following requests are related to
the first one.
There have been many attempts to try to tie multiple HTTP requests
together; probably none are both reliable and appropriate for a
reverse-proxy to use.
> If every request is independent, you must know the internal structure of a
> webpage when you like to reverse-proxy it. Is that true ?
Every request is independent.
You must configure your nginx to do what you want it to do, or it probably
won't do what you want it to do.
> My feeling is that you should be able to reverse-proxy a webpage without
> knowing anything about how its data is loaded internally.
You're welcome to try to design a thing that does what you want.
I suspect you will not succeed, in the general case.
> When I call 192.168.1.1/device/index.html, I hoped that the following
> non-absolute requests would be proxied to
> http://192.168.5.1/path/one.jpg,
> http://192.168.5.1/path/two.jpg, ...
> as the proxy knows that one.jpg belongs to
> http://192.168.1.1/device/index.html
No, the proxy does not know that.
The proxy knows that a request came in for /device/index.html, and that
a request came in for /device/one.jpg, and that a request came in for
/path/two.jpg.
You must tell your nginx how you want those requests to be handled.
> And so it can direct the internal request for one.jpg to
> http://192.168.5.1/path/one.jpg.
location ^~ /device/ { proxy_pass http://192.168.5.1/path/; }
should do that, for every request to nginx that starts "/device/".
If you want nginx to know how to handle requests that start "/path/",
you have to tell it what to do.
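For example, something like this (a sketch only; it reuses the addresses
from this thread, and assumes the device really serves /images/ as in the
first mail):

    # /device/... is rewritten to /path/... on the device;
    # /path/... and /images/... are passed through unchanged.
    location ^~ /device/ { proxy_pass http://192.168.5.1/path/; }
    location ^~ /path/   { proxy_pass http://192.168.5.1; }
    location ^~ /images/ { proxy_pass http://192.168.5.1; }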
> Is that a "slightly" wrong understanding of the proxy functionality, and
> too much magic?
I think it's a wrong understanding of HTTP.
Once HTTP is clear, then you can see what a reverse-proxy can do.
Good luck with it!
f
--
Francis Daly francis at daoine.org
From hongyi.zhao at gmail.com Thu Feb 4 01:17:47 2021
From: hongyi.zhao at gmail.com (Hongyi Zhao)
Date: Thu, 4 Feb 2021 09:17:47 +0800
Subject: Failed to publish an HLS stream via the nginx HTTPS server with ffmpeg.
Message-ID:
Hi,
On Ubuntu 20.04, I've built the docker image based on my project located at
. The SSL-related
configuration has been enabled in the nginx.conf file used by the
above docker image; see
for the detailed info. I then tried to
stream the camera using HLS, as described below.
Publish the stream:
$ docker run -it -p 1935:1935 -p 8000:80 --rm nginx-rtmp
$ ffmpeg -f pulse -i default -f v4l2 -r 30 -s 1920x1080 -i /dev/video0
-c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k -bufsize 3000k
-vf "scale=1280:-1,format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100
-force_key_frames "expr:gte(t,n_forced*4)" -f flv
"rtmp://localhost:1935/stream/surveillance"
As for watching the stream, I tried to use ffplay with the http and https
protocols respectively; the former can
successfully play the stream, but the latter fails. The commands used
are shown below:
$ ffplay http://localhost:8000/live/surveillance.m3u8
$ ffplay https://localhost:8000/live/surveillance.m3u8
ffplay version N-100814-g911ba8417e Copyright (c) 2003-2021 the FFmpeg
developers
built with gcc 9 (Ubuntu 9.3.0-17ubuntu1~20.04)
configuration: --enable-gpl --enable-nonfree --enable-version3
--enable-debug --enable-ffplay --enable-indev=sndio
--enable-outdev=sndio --enable-fontconfig --enable-frei0r
--enable-openssl --enable-gmp --enable-libgme --enable-gray
--enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf
--enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb
--enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband
--enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis
--enable-libopus --enable-libtheora --enable-libvidstab
--enable-libvo-amrwbenc --enable-libvpx --enable-libwebp
--enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d
--enable-libxvid --enable-libzvbi --enable-libzimg --enable-rpath
--enable-shared --enable-avisynth --enable-chromaprint --enable-gcrypt
--enable-ladspa --enable-libaribb24 --enable-libbluray
--enable-libbs2b --enable-libcaca --enable-libcelt --enable-libcdio
--enable-libcodec2 --enable-libdc1394 --enable-libfdk-aac
--enable-libflite --enable-libfontconfig --enable-libgsm
--enable-libiec61883 --enable-libjack --enable-libklvanc
--enable-liblensfun --enable-libmodplug --enable-libopenh264
--enable-libopenmpt --enable-libpulse --enable-librabbitmq
--enable-librsvg --enable-librtmp --enable-libshine --enable-libsnappy
--enable-libssh --enable-libtesseract --enable-libtwolame
--enable-libv4l2 --enable-libxavs2 --enable-libdavs2 --enable-libxcb
--enable-libxcb-shm --enable-libxcb-xfixes --enable-libxcb-shape
--enable-libzmq --enable-lv2 --enable-libmysofa --enable-openal
--enable-opencl --enable-opengl --enable-pocketsphinx --enable-vulkan
--enable-libdrm --enable-libmfx --enable-pic --enable-lto
--enable-hardcoded-tables --enable-memory-poisoning --enable-ftrapv
--enable-linux-perf --enable-libsvtav1
libavutil 56. 63.101 / 56. 63.101
libavcodec 58.119.100 / 58.119.100
libavformat 58. 65.101 / 58. 65.101
libavdevice 58. 11.103 / 58. 11.103
libavfilter 7. 97.100 / 7. 97.100
libswscale 5. 8.100 / 5. 8.100
libswresample 3. 8.100 / 3. 8.100
libpostproc 55. 8.100 / 55. 8.100
[tls @ 0x7f04f0004680] error:1408F10B:SSL
routines:ssl3_get_record:wrong version number
https://localhost:8000/live/surveillance.m3u8: Input/output error
Any hints/notes/comments for solving this problem are highly appreciated.
Regards
--
Assoc. Prof. Hongyi Zhao
Theory and Simulation of Materials
Hebei Polytechnic University of Science and Technology engineering
NO. 552 North Gangtie Road, Xingtai, China
From mdounin at mdounin.ru Thu Feb 4 02:02:55 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 4 Feb 2021 05:02:55 +0300
Subject: Failed to publish an HLS stream via the nginx HTTPS server with
ffmpeg.
In-Reply-To:
References:
Message-ID: <20210204020255.GB77619@mdounin.ru>
Hello!
On Thu, Feb 04, 2021 at 09:17:47AM +0800, Hongyi Zhao wrote:
> As for watching the stream, I tried to use ffplay with http and https
> protocal respectively, but the former can
> successfully play the stream, the latter failed. The used commands
> are shown below:
>
> $ ffplay http://localhost:8000/live/surveillance.m3u8
> $ ffplay https://localhost:8000/live/surveillance.m3u8
[...]
> Any hints/notes/comments for solving this problem are highly appreciated.
You are using the same port for both http and https. This is certainly
not going to work.
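In config terms, each protocol normally gets its own listen socket, e.g.
(a sketch only; the certificate paths are placeholders):

    server {
        listen 80;                  # plain http
    }

    server {
        listen 443 ssl;             # https on its own port
        ssl_certificate     /etc/nginx/cert.pem;
        ssl_certificate_key /etc/nginx/cert.key;
    }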
--
Maxim Dounin
http://mdounin.ru/
From hongyi.zhao at gmail.com Thu Feb 4 02:25:08 2021
From: hongyi.zhao at gmail.com (Hongyi Zhao)
Date: Thu, 4 Feb 2021 10:25:08 +0800
Subject: Failed to publish an HLS stream via the nginx HTTPS server with
ffmpeg.
In-Reply-To: <20210204020255.GB77619@mdounin.ru>
References:
<20210204020255.GB77619@mdounin.ru>
Message-ID:
On Thu, Feb 4, 2021 at 10:03 AM Maxim Dounin wrote:
>
> Hello!
>
> On Thu, Feb 04, 2021 at 09:17:47AM +0800, Hongyi Zhao wrote:
>
> > As for watching the stream, I tried to use ffplay with the http and https
> > protocols respectively; the former can
> > successfully play the stream, but the latter fails. The commands used
> > are shown below:
> >
> > $ ffplay http://localhost:8000/live/surveillance.m3u8
> > $ ffplay https://localhost:8000/live/surveillance.m3u8
>
> [...]
>
> > Any hints/notes/comments for solving this problem are highly appreciated.
>
> You are using the same port for both http and https. This is certainly
> not going to work.
Thanks for pointing this out. I've now exposed port 443 from the container
and mapped it onto the host's port 8443, as shown here:
Then I run the following tests:
# Start the container:
$ docker run -it -p 1935:1935 -p 8000:80 -p 8443:443 --rm nginx-rtmp
# Verify the https sever's status:
$ curl -I -k https://localhost:8443/stats
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Thu, 04 Feb 2021 02:17:04 GMT
Content-Type: text/xml
Content-Length: 5568
Connection: keep-alive
# Publish the stream:
$ ffmpeg -f pulse -i default -f v4l2 -r 30 -s 1920x1080 -i /dev/video0
-c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k -bufsize 3000k
-vf "scale=1280:-1,format=yuv420p" -g 60 -c:a aac -b:a 128k -ar 44100
-force_key_frames "expr:gte(t,n_forced*4)" -f flv
"rtmp://localhost:1935/stream/surveillance"
# Watch the stream:
# http:
$ ffplay http://localhost:8000/live/surveillance.m3u8
# https:
$ ffplay https://localhost:8443/live/surveillance.m3u8
But I still fail to watch the stream, with either the HTTP or the HTTPS
protocol, using the VLC player, as shown below:
$ cvlc https://localhost:8443/live/surveillance.m3u8
VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
[0000557a9a005210] dummy interface: using the dummy interface module...
[00007f55a40024b0] gnutls tls client error: Certificate verification
failure: The certificate is NOT trusted. The certificate issuer is
unknown. The certificate chain uses expired certificate. The name in
the certificate does not match the expected.
[00007f55a40024b0] main tls client error: TLS session handshake error
[00007f55a40024b0] main tls client error: connection error: Resource
temporarily unavailable
[00007f55a4001610] access stream error: HTTP connection failure
[00007f55b0000c80] main input error: Your input can't be opened
[00007f55b0000c80] main input error: VLC is unable to open the MRL
'https://localhost:8443/live/surveillance.m3u8'. Check the log for
details.
$ cvlc http://localhost:8000/live/surveillance.m3u8
VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
[00005645acdbefc0] dummy interface: using the dummy interface module...
[00007fd7e8001610] http stream error: cannot resolve localhost: Name
or service not known
[00007fd7e8001610] access stream error: HTTP connection failure
[00007fd7e8001610] main stream error: cannot resolve localhost port
844000 : Name or service not known
[00007fd7e8001610] http stream error: cannot connect to localhost:844000
[00007fd7f4000c80] main input error: Your input can't be opened
[00007fd7f4000c80] main input error: VLC is unable to open the MRL
'http://localhost:844000/live/surveillance.m3u8'. Check the log for
details.
Any hints for this problem?
Regards
--
Assoc. Prof. Hongyi Zhao
Theory and Simulation of Materials
Hebei Polytechnic University of Science and Technology engineering
NO. 552 North Gangtie Road, Xingtai, China
From adam at monkeez.org Thu Feb 4 07:40:35 2021
From: adam at monkeez.org (Adam)
Date: Thu, 4 Feb 2021 07:40:35 +0000
Subject: Nginx not responding to port 80 on public IP address
Message-ID:
Hi all,
nginx is running and listening on port 80:
tcp 0 0 0.0.0.0:80 0.0.0.0:*
LISTEN 0 42297 3576/nginx: master
tcp6 0 0 :::80 :::*
LISTEN 0 42298 3576/nginx: master
The server responds fine to requests on port 443, serving traffic exactly
as expected:
tcp 0 0 0.0.0.0:443 0.0.0.0:*
LISTEN 0 42299 3576/nginx: master
However, it will not respond to traffic on port 80. I have included these
lines in my server block to listen on port 80:
listen 80 default_server;
listen [::]:80 default_server;
My full config can be seen at https://pastebin.com/VzY4mJpt
I have been testing by sshing to an external machine and trying telnet
my.host.name 80 - which times out, compared to telnet my.host.name 443,
which connects immediately.
The port is open on my router to allow port 80 traffic. This machine is
hosted on my home network, serving personal traffic (services which I use,
but not for general internet use). It does respond to port 80 internally,
if I use the internal ip address (http://192.168.178.43).
I've kind of run out of ideas, so thought I would post here.
Thanks in advance for any support.
Adam
From lists at lazygranch.com Thu Feb 4 08:04:33 2021
From: lists at lazygranch.com (lists at lazygranch.com)
Date: Thu, 4 Feb 2021 00:04:33 -0800
Subject: Nginx not responding to port 80 on public IP address
In-Reply-To:
References:
Message-ID: <20210204000433.3b122898.lists@lazygranch.com>
I insist on encryption so this is what I use:
server {
listen 80;
server_name yourdomain.com www.yourdomain.com ;
if ($request_method !~ ^(GET|HEAD)$ ) {
return 444;
}
return 301 https://$host$request_uri;
}
I only serve static pages, so I use that filter; obviously it is
optional. But basically every unencrypted request to port 80 is redirected
to an encrypted request on port 443.
On Thu, 4 Feb 2021 07:40:35 +0000
Adam wrote:
> Hi all,
>
> nginx is running and listening on port 80:
> tcp 0 0 0.0.0.0:80 0.0.0.0:*
> LISTEN 0 42297 3576/nginx: master
> tcp6 0 0 :::80 :::*
> LISTEN 0 42298 3576/nginx: master
>
> The server responds fine to requests on port 443, serving traffic
> exactly as expected:
> tcp 0 0 0.0.0.0:443 0.0.0.0:*
> LISTEN 0 42299 3576/nginx: master
>
> However, it will not respond to traffic on port 80. I have included
> this line in my server block to listen to port 80:
> listen 80 default_server;
> listen [::]:80 default_server;
>
> My full config can be seen at https://pastebin.com/VzY4mJpt
>
> I have been testing by sshing to an external machine and trying telnet
> my.host.name 80 - which times out, compared to telnet my.host.name
> 443, which connects immediately.
>
> The port is open on my router to allow port 80 traffic. This machine
> is hosted on my home network, serving personal traffic (services
> which I use, but not for general internet use). It does respond to
> port 80 internally, if I use the internal ip address
> (http://192.168.178.43).
>
> I've kind of run out of ideas, so thought I would post here.
>
> Thanks in advance for any support.
>
> Adam
From nginx-forum at forum.nginx.org Thu Feb 4 08:07:27 2021
From: nginx-forum at forum.nginx.org (proj964)
Date: Thu, 04 Feb 2021 03:07:27 -0500
Subject: nginx virtual directory redirect to a static file
Message-ID: <6e2b6ffbb63b4f3d3b14af45152d1beb.NginxMailingListEnglish@forum.nginx.org>
Perhaps this is something that can't be done with nginx, but I would like to
have a URL like http://localhost:nnn/abc return a file from q:/d1/qrst.txt.
I have tried many combinations of root, alias, rewrite, and try_files in
location.
For everything I try, nginx inserts "abc" into the final absolute path,
which, of course, fails because there is no "abc" in q:/d1/qrst.txt.
Is there any chance that there is a location directive in nginx that will do
what I would like?
thanks, jon
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290652,290652#msg-290652
From francis at daoine.org Thu Feb 4 09:53:34 2021
From: francis at daoine.org (Francis Daly)
Date: Thu, 4 Feb 2021 09:53:34 +0000
Subject: nginx virtual directory redirect to a static file
In-Reply-To: <6e2b6ffbb63b4f3d3b14af45152d1beb.NginxMailingListEnglish@forum.nginx.org>
References: <6e2b6ffbb63b4f3d3b14af45152d1beb.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210204095334.GE6011@daoine.org>
On Thu, Feb 04, 2021 at 03:07:27AM -0500, proj964 wrote:
Hi there,
> Perhaps this is something that can't be done with nginx, but I would like to
> have a url,
> like http://localhost:nnn/abc return a file from q:/d1/qrst.txt.
I'm not certain, from the words here and in the Subject: line, but I
think you are asking for something like:
* request /abc/ returns the content of /tmp/thisfile.txt
* request /abc/anything returns the content of /tmp/thisfile.txt
(for any value of "anything")
* request /abcd does not return the content of /tmp/thisfile.txt
* request /abc either returns the content of /tmp/thisfile.txt, or maybe
returns a redirect to /abc/ (which then gets to item #1 again)
The first three of those can be done (assuming no overriding location)
with
location ~ ^/abc/.* { alias /tmp/thisfile.txt; try_files "" =404; }
where the "=404" is just in case your file does not exist.
The fourth might be done with
location = /abc { return 301 /abc/; }
You may also want to set default_type if what is there already does not
do what you want.
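For reference, the pieces above combined into one server block might look like this (an untested sketch; /tmp/thisfile.txt, /abc, and the listen port are just the example values from this thread):

```nginx
server {
    listen 8080;

    # /abc without the trailing slash: redirect to /abc/
    location = /abc {
        return 301 /abc/;
    }

    # /abc/ and anything below it: always serve the one file
    location ~ ^/abc/.* {
        default_type text/plain;   # adjust if the file is not plain text
        alias /tmp/thisfile.txt;
        try_files "" =404;         # 404 if the file does not exist
    }
}
```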
> I have tried many combinations of root, alias, rewrite, try_files in
> location.
> For everything I try, nginx inserts "abc" into the final absolute path
That should probably not happen with "alias". If you show the config,
the request, and the response, maybe it can be explained.
As it happens, I first thought that just
location ~ ^/abc/.* { alias /tmp/thisfile.txt; }
would work, but the ngx_http_index_module handles requests ending in
/ before this config takes hold, so the extra "try_files" is used to
avoid that.
There may well be a better way.
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Thu Feb 4 10:06:13 2021
From: francis at daoine.org (Francis Daly)
Date: Thu, 4 Feb 2021 10:06:13 +0000
Subject: Nginx not responding to port 80 on public IP address
In-Reply-To:
References:
Message-ID: <20210204100613.GF6011@daoine.org>
On Thu, Feb 04, 2021 at 07:40:35AM +0000, Adam wrote:
Hi there,
It sounds like something outside of your nginx is preventing the traffic
from getting to your nginx.
In that case, no nginx config can help you; but there are other things
you can perhaps look at.
> nginx is running and listening on port 80:
> tcp 0 0 0.0.0.0:80 0.0.0.0:*
> LISTEN 0 42297 3576/nginx: master
> tcp6 0 0 :::80 :::*
> LISTEN 0 42298 3576/nginx: master
>
> The server responds fine to requests on port 443, serving traffic exactly
> as expected:
> tcp 0 0 0.0.0.0:443 0.0.0.0:*
> LISTEN 0 42299 3576/nginx: master
> I have been testing by sshing to an external machine and trying telnet
> my.host.name 80 - which times out, compared to telnet my.host.name 443,
> which connects immediately.
Do your nginx logs indicate that the 443 traffic actually gets to this
nginx, and not to a random server that allows port-443 connections?
Perhaps use "curl" to make a request, and confirm that the response is
from this nginx.
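For example, something along these lines (my.host.name is the placeholder already used in this thread):

```shell
# Fetch only the response headers and check that the Server header
# reports the nginx version you expect, not some other server
curl -sI https://my.host.name/ | grep -i '^server:'
```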
> The port is open on my router to allow port 80 traffic. This machine is
Do you have any local firewall running on the nginx machine that might
block or otherwise limit inbound traffic?
> hosted on my home network, serving personal traffic (services which I use,
> but not for general internet use). It does respond to port 80 internally,
> if I use the internal ip address (http://192.168.178.43).
If that test is "from the nginx machine itself", then a local firewall
probably won't block it. If that test is from another machine on the home
network, then a local firewall that only allows same-subnet connections
would allow this, but not allow your external test.
"iptables -L -v -n" might show things there; or whatever local firewall
command your system might use.
> I've kind of run out of ideas, so thought I would post here.
I would probably try to run "tcpdump" on the nginx server, to see what
port-80 traffic that machine sees when the connection is attempted.
(And maybe look at what is seen for port-443 traffic as well, for
comparison.)
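A capture along these lines, run as root on the nginx machine, should show whether the port-80 packets arrive at all (the interface name eth0 is an assumption; adjust for your system):

```shell
# Watch port-80 and port-443 TCP traffic side by side while you
# repeat the external telnet/curl tests
tcpdump -i eth0 -nn 'tcp port 80 or tcp port 443'
```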
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Thu Feb 4 10:16:42 2021
From: francis at daoine.org (Francis Daly)
Date: Thu, 4 Feb 2021 10:16:42 +0000
Subject: Failed to publish a HLS stream via the nginx HTTPS server with
ffmpeg.
In-Reply-To:
References:
<20210204020255.GB77619@mdounin.ru>
Message-ID: <20210204101642.GG6011@daoine.org>
On Thu, Feb 04, 2021 at 10:25:08AM +0800, Hongyi Zhao wrote:
> On Thu, Feb 4, 2021 at 10:03 AM Maxim Dounin wrote:
Hi there,
> # Watch the stream:
> # http:
> $ ffplay http://localhost:8000/live/surveillance.m3u8
> # https:
> $ ffplay https://localhost:8443/live/surveillance.m3u8
You report that "ffplay" successfully plays both http and https streams.
That suggests that the nginx side is fundamentally good.
> But I still failed to watch the stream with HTTP or HTTPS protocol
> using VLC player as shown below:
>
> $ cvlc https://localhost:8443/live/surveillance.m3u8
> VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
> [00007f55a40024b0] gnutls tls client error: Certificate verification
> failure: The certificate is NOT trusted. The certificate issuer is
> unknown. The certificate chain uses expired certificate. The name in
> the certificate does not match the expected.
Your "cvlc" client fails to play the https stream because that client
does not like this certificate.
Tell the client to accept your certificate; or change your certificate
to one that both clients will accept. No obvious nginx-side changes
needed here.
> $ cvlc http://localhost:8000/live/surveillance.m3u8
> VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
> [00007fd7e8001610] http stream error: cannot resolve localhost: Name
> or service not known
Here the cvlc client reports that it cannot resolve the name
localhost. Your ffplay client could resolve that name; what is different
about cvlc?
> [00007fd7e8001610] main stream error: cannot resolve localhost port
> 844000 : Name or service not known
> [00007fd7e8001610] http stream error: cannot connect to localhost:844000
And here it reports that it is trying to connect to port 844000 instead
of the 8000 that is in the request that you showed.
> [00007fd7f4000c80] main input error: VLC is unable to open the MRL
> 'http://localhost:844000/live/surveillance.m3u8'. Check the log for
> details.
What part of the system turned
"cvlc http://localhost:8000/live/surveillance.m3u8" into a request for
http://localhost:844000/live/surveillance.m3u8?
Perhaps "tcpdump" the port-8000 traffic to see if there is something
about 844000 in the response? The "ffplay" output did not indicate any
issue like that, though.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From adam at monkeez.org Thu Feb 4 12:00:53 2021
From: adam at monkeez.org (Adam)
Date: Thu, 4 Feb 2021 12:00:53 +0000
Subject: Nginx not responding to port 80 on public IP address
In-Reply-To: <20210204100613.GF6011@daoine.org>
References:
<20210204100613.GF6011@daoine.org>
Message-ID:
Hi Francis,
I've tried your suggestions (inline replies below) but am still stuck.
On Thu, 4 Feb 2021 at 10:06, Francis Daly wrote:
> On Thu, Feb 04, 2021 at 07:40:35AM +0000, Adam wrote:
>
> Hi there,
>
> It sounds like something outside of your nginx is preventing the traffic
> from getting to your nginx.
>
> In that case, no nginx config can help you; but there are other things
> you can perhaps look at.
>
> > nginx is running and listening on port 80:
> > tcp 0 0 0.0.0.0:80 0.0.0.0:*
> > LISTEN 0 42297 3576/nginx: master
> > tcp6 0 0 :::80 :::*
> > LISTEN 0 42298 3576/nginx: master
> >
> > The server responds fine to requests on port 443, serving traffic exactly
> > as expected:
> > tcp 0 0 0.0.0.0:443 0.0.0.0:*
> > LISTEN 0 42299 3576/nginx: master
>
> > I have been testing by sshing to an external machine and trying telnet
> > my.host.name 80 - which times out, compared to telnet my.host.name 443,
> > which connects immediately.
>
> Do your nginx logs indicate that the 443 traffic actually gets to this
> nginx, and not to a random server that allows port-443 connections?
>
Yes - the log files are good for port 443.
> Perhaps use "curl" to make a request, and confirm that the response is
> from this nginx.
>
I have tried this on the remote machine and see the html appear in the
terminal.
> > The port is open on my router to allow port 80 traffic. This machine is
>
> Do you have any local firewall running on the nginx machine that might
> block or otherwise limit inbound traffic?
>
I do have iptables managed by fail2ban running on the nginx machine.
> > hosted on my home network, serving personal traffic (services which I
> use,
> > but not for general internet use). It does respond to port 80 internally,
> > if I use the internal ip address (http://192.168.178.43).
>
> If that test is "from the nginx machine itself", then a local firewall
> probably won't block it. If that test is from another machine on the home
> network, then a local firewall that only allows same-subnet connections
> would allow this, but not allow your external test.
>
> "iptables -L -v -n" might show things there; or whatever local firewall
> command your system might use.
>
>
This is the output:
root at home:/home/pi# iptables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source
destination
52134 4559K f2b-sshd tcp -- * * 0.0.0.0/0
0.0.0.0/0 multiport dports 22
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source
destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source
destination
Chain f2b-sshd (1 references)
pkts bytes target prot opt in out source
destination
16 1280 REJECT all -- * * 123.31.41.31
0.0.0.0/0 reject-with icmp-port-unreachable
35 1677 REJECT all -- * * 103.81.13.80
0.0.0.0/0 reject-with icmp-port-unreachable
20 1616 REJECT all -- * * 67.205.181.52
0.0.0.0/0 reject-with icmp-port-unreachable
21 1668 REJECT all -- * * 51.83.128.135
0.0.0.0/0 reject-with icmp-port-unreachable
16 1224 REJECT all -- * * 14.99.117.194
0.0.0.0/0 reject-with icmp-port-unreachable
19 1332 REJECT all -- * * 185.2.140.155
0.0.0.0/0 reject-with icmp-port-unreachable
15 1160 REJECT all -- * * 110.225.122.98
0.0.0.0/0 reject-with icmp-port-unreachable
21 1672 REJECT all -- * * 112.85.42.74
0.0.0.0/0 reject-with icmp-port-unreachable
24 1840 REJECT all -- * * 161.35.161.170
0.0.0.0/0 reject-with icmp-port-unreachable
21 1668 REJECT all -- * * 198.23.228.254
0.0.0.0/0 reject-with icmp-port-unreachable
79 3720 REJECT all -- * * 189.254.227.84
0.0.0.0/0 reject-with icmp-port-unreachable
16 1312 REJECT all -- * * 81.68.228.53
0.0.0.0/0 reject-with icmp-port-unreachable
21 1616 REJECT all -- * * 101.32.116.55
0.0.0.0/0 reject-with icmp-port-unreachable
> > I've kind of run out of ideas, so thought I would post here.
>
> I would probably try to run "tcpdump" on the nginx server, to see what
> port-80 traffic that machine sees when the connection is attempted.
>
>
I'd forgotten about tcpdump - thanks for that. This is the output.
11:56:47.592217 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
1308629493, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
3744800432 ecr 1108123496,nop,wscale 7], length 0
11:56:48.597175 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
1324331976, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
3744801437 ecr 1108124499,nop,wscale 7], length 0
11:56:50.611211 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
1355801094, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
3744803451 ecr 1108126515,nop,wscale 7], length 0
11:56:54.937069 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
1423392629, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
3744807777 ecr 1108130771,nop,wscale 7], length 0
11:57:03.126721 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
1551356176, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
3744815967 ecr 1108138963,nop,wscale 7], length 0
> (And maybe look at what is seen for port-443 traffic as well, for
> comparison.)
>
> 11:58:00.144568 IP mab.sdf.org.36420 > home.fritz.box.https: Flags [.],
ack 1, win 251, options [nop,nop,TS val 1108196048 ecr 2740660288], length 0
These were run on the box that is running nginx.
> Good luck with it,
>
> f
> --
> Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
Thanks again,
Adam
From francis at daoine.org Thu Feb 4 12:25:15 2021
From: francis at daoine.org (Francis Daly)
Date: Thu, 4 Feb 2021 12:25:15 +0000
Subject: Nginx not responding to port 80 on public IP address
In-Reply-To:
References:
<20210204100613.GF6011@daoine.org>
Message-ID: <20210204122515.GH6011@daoine.org>
On Thu, Feb 04, 2021 at 12:00:53PM +0000, Adam wrote:
Hi there,
> > It sounds like something outside of your nginx is preventing the traffic
> > from getting to your nginx.
> >
> > In that case, no nginx config can help you; but there are other things
> > you can perhaps look at.
> > Do your nginx logs indicate that the 443 traffic actually gets to this
> > nginx, and not to a random server that allows port-443 connections?
>
> Yes - the log files are good for port 443.
Ok, that says that inbound 443 works, and inbound 80 does not work. There
must be some device somewhere which has different rules for 443 and 80
involving your nginx machine.
> This is the output:
> root at home:/home/pi# iptables -L -v -n
This does not mention anything different between ports 80 and 443 that
I can see.
I am slightly surprised at the "policy ACCEPT 0 packets, 0 bytes" parts,
and at the lack of "RETURN" in the Chain f2b-sshd section; but maybe
different versions of the tools show different output.
> I'd forgotten about tcpdump - thanks for that. This is the output.
>
> 11:56:47.592217 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
> 1308629493, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
> 3744800432 ecr 1108123496,nop,wscale 7], length 0
> 11:56:48.597175 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
> 1324331976, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
> 3744801437 ecr 1108124499,nop,wscale 7], length 0
> 11:56:50.611211 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
> 1355801094, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
> 3744803451 ecr 1108126515,nop,wscale 7], length 0
> 11:56:54.937069 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
> 1423392629, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
> 3744807777 ecr 1108130771,nop,wscale 7], length 0
> 11:57:03.126721 IP home.fritz.box.http > mab.sdf.org.40180: Flags [S.], seq
> 1551356176, ack 3287509164, win 65160, options [mss 1460,sackOK,TS val
> 3744815967 ecr 1108138963,nop,wscale 7], length 0
This shows that the nginx service is attempting to respond to the presumed
incoming SYN packet from the client, and that nginx is not getting back the
ACK packet that it expects.
What should happen here is that you see a [S] packet from the
client-high-port to nginx-port-80 (which is not in this tcpdump
output; perhaps the tcpdump started late); and then a [S.] packet from
nginx-port-80 to the client-high-port (here you see 5 of them, because the
nginx server retries); and then a [.] packet from the client, which will
include or be followed by more [.] packets including the http request.
Something on your network path is either preventing the [S.] from getting
from nginx to the client, or is preventing the [.] from getting from the
client to nginx. But (presumably) it did allow the [S] from the client
to nginx.
Find and fix that something, and things should Just Work.
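To see the [S], [S.], and [.] packets explicitly while you test, a narrower capture such as this might help (eth0 is an assumption; adjust for your system):

```shell
# Show only packets with SYN and/or ACK flags set on port 80, i.e.
# the three-way handshake, so the missing step stands out
tcpdump -i eth0 -nn 'tcp port 80 and tcp[tcpflags] & (tcp-syn|tcp-ack) != 0'
```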
> > (And maybe look at what is seen for port-443 traffic as well, for
> > comparison.)
> >
> > 11:58:00.144568 IP mab.sdf.org.36420 > home.fritz.box.https: Flags [.],
> ack 1, win 251, options [nop,nop,TS val 1108196048 ecr 2740660288], length 0
That is a [.] packet that will have followed the original TCP three-way
handshake. It proves that network routing between the client and the nginx
server is correct; something else is blocking some of the port-80 traffic.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Thu Feb 4 14:37:44 2021
From: nginx-forum at forum.nginx.org (pmadr)
Date: Thu, 04 Feb 2021 09:37:44 -0500
Subject: Which Upload Module
Message-ID:
Hi,
There are a few versions of the upload module on github.
https://github.com/fdintino/nginx-upload-module
https://github.com/Austinb/nginx-upload-module/tree/2.2
https://github.com/hongzhidao/nginx-upload-module
Each has different commits and issues. Which module do you recommend to
use with the latest (and future) nginx?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290661,290661#msg-290661
From osa at freebsd.org.ru Thu Feb 4 15:01:37 2021
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Thu, 4 Feb 2021 18:01:37 +0300
Subject: Which Upload Module
In-Reply-To:
References:
Message-ID:
Hi,
On Thu, Feb 04, 2021 at 09:37:44AM -0500, pmadr wrote:
> Hi,
>
> There are a few versions of the upload module on github.
>
> https://github.com/fdintino/nginx-upload-module
> https://github.com/Austinb/nginx-upload-module/tree/2.2
> https://github.com/hongzhidao/nginx-upload-module
>
> Each has different commits and issues. Which module do you recommend to
> use with the latest (and future) nginx?
FreeBSD ports tree contains the following GitHub tuple for the module:
fdintino:nginx-upload-module:aa42509
So far there have been no reports or complaints about that one.
Hope that helps.
--
Sergey Osokin
From nginx-forum at forum.nginx.org Thu Feb 4 20:00:40 2021
From: nginx-forum at forum.nginx.org (bouvierh)
Date: Thu, 04 Feb 2021 15:00:40 -0500
Subject: Getting intermediate client certificates with custom module
Message-ID: <7b20ee562f6bc8ff6ea9db587878badf.NginxMailingListEnglish@forum.nginx.org>
Hello!
I am using nginx in reverse proxy mode. When nginx proxies the client
certificate upstream, I would also like to pass the full certificate chain
along with it.
This is for an industrial application and I don't have full control over why
it is done this way. The only thing I can tamper with is nginx.
Digging around a bit, I found out that getting the cert chain through an
environment variable is not possible:
https://forum.nginx.org/read.php?2,288553,288553#msg-288553.
However, if I write a custom module, is it possible to get the intermediate
certificate? Would you know which place in the code I should look at?
Thank you!
Hugues
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290663,290663#msg-290663
From vbart at nginx.com Thu Feb 4 23:12:14 2021
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Fri, 05 Feb 2021 02:12:14 +0300
Subject: Unit 1.22.0 release
Message-ID: <2517673.BddDVKsqQX@vbart-laptop>
Hi,
I'm glad to announce a new release of NGINX Unit.
This is our first release of 2021, and it focuses on improving stability.
There's an extensive list of bugfixes, although many occur in rare conditions
that have so far been observed only in our test environments. These bugs
were caught due to improvements in our continuous functional testing; our QA,
Andrei Zeliankou, is always looking to increase the testing coverage and use
new techniques to spot various race conditions and leaks, thus improving
the quality of each release. This very important work never ends.
### IMPORTANT: Changes to official Linux packages
Starting with this release, the user and group accounts that run non-privileged
Unit processes are changed in our Linux packages:
- in previous packages: nobody:nobody
- in 1.22.0 and later: unit:unit
These settings are used to serve static files and run applications if "user"
or "group" options are not explicitly specified in the app configuration.
Please take note of the change and update your configuration appropriately
before upgrading an existing Unit installation with our official packages:
- https://unit.nginx.org/installation/#official-packages
The rationale for this change in our packages was that using "nobody" by
default was very inconvenient while serving static files. You can always
override these settings with the --user and --group daemon options in your
startup scripts. See here for more details:
- https://unit.nginx.org/installation/#installation-src-startup
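For example, an application that should keep the old behavior can pin the accounts explicitly in its configuration (a sketch; the app name, type, and paths are made up):

```json
{
  "applications": {
    "myapp": {
      "type": "python",
      "path": "/srv/myapp",
      "module": "wsgi",
      "user": "nobody",
      "group": "nobody"
    }
  }
}
```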
### IMPORTANT 2: Changes to official Docker images
Another notable change is also related to our official distributions; in
this case, it affects our Docker images. Many asked us to provide the most
up-to-date versions of language modules in our Docker images, but there was
no maintainable way of doing this while still relying on the Debian base
image we used before.
Starting with 1.22.0, we stop maintaining images with language modules that
use the old Debian base; instead, we now rely on the official Docker images
for the latest language versions:
- https://unit.nginx.org/installation/#docker-images
Our images are available at both Docker Hub and Amazon ECR Public Gallery;
you can also download them at our website.
Changes with Unit 1.22.0 04 Feb 2021
*) Feature: the ServerRequest and ServerResponse objects of Node.js
module are now compliant with Stream API.
*) Feature: support for specifying multiple directories in the "path"
option of Python apps.
*) Bugfix: a memory leak occurred in the router process when serving
files larger than 128K; the bug had appeared in 1.13.0.
*) Bugfix: apps could stop processing new requests under high load; the
bug had appeared in 1.19.0.
*) Bugfix: app processes could terminate unexpectedly under high load;
the bug had appeared in 1.19.0.
*) Bugfix: invalid HTTP responses were generated for some unusual status
codes.
*) Bugfix: the PHP_AUTH_USER, PHP_AUTH_PW, and PHP_AUTH_DIGEST server
variables were missing in the PHP module.
*) Bugfix: the router process could crash with multithreaded apps under
high load.
*) Bugfix: Ruby apps with multithreading configured could crash on start
under load.
*) Bugfix: mount points weren't unmounted when the "mount" namespace
isolation was used; the bug had appeared in 1.21.0.
*) Bugfix: the router process could crash while removing or
reconfiguring an app that used WebSocket.
*) Bugfix: a memory leak occurring in the router process when removing
or reconfiguring an application; the bug had appeared in 1.19.0.
Meanwhile, we continue working on metrics and application restart APIs, SNI
support in TLS, and improvements to process isolation.
As always, we encourage you to follow our roadmap on GitHub, where your
ideas and requests are more than welcome:
- https://github.com/orgs/nginx/projects/1
Stay tuned!
wbr, Valentin V. Bartenev
From nginx-forum at forum.nginx.org Fri Feb 5 06:50:46 2021
From: nginx-forum at forum.nginx.org (pmadr)
Date: Fri, 05 Feb 2021 01:50:46 -0500
Subject: Which Upload Module
In-Reply-To:
References:
Message-ID: <42848ebc6911a2829f5966cb8a019d95.NginxMailingListEnglish@forum.nginx.org>
Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290661,290670#msg-290670
From hongyi.zhao at gmail.com Fri Feb 5 14:44:53 2021
From: hongyi.zhao at gmail.com (Hongyi Zhao)
Date: Fri, 5 Feb 2021 22:44:53 +0800
Subject: Failed to publish a HLS stream via the nginx HTTPS server with
ffmpeg.
In-Reply-To: <20210204101642.GG6011@daoine.org>
References:
<20210204020255.GB77619@mdounin.ru>
<20210204101642.GG6011@daoine.org>
Message-ID:
On Thu, Feb 4, 2021 at 6:16 PM Francis Daly wrote:
>
> On Thu, Feb 04, 2021 at 10:25:08AM +0800, Hongyi Zhao wrote:
> > On Thu, Feb 4, 2021 at 10:03 AM Maxim Dounin wrote:
>
> Hi there,
>
> > # Watch the stream:
> > # http:
> > $ ffplay http://localhost:8000/live/surveillance.m3u8
> > # https:
> > $ ffplay https://localhost:8443/live/surveillance.m3u8
>
> You report that "ffplay" successfully plays both http and https streams.
>
> That suggests that the nginx side is fundamentally good.
>
> > But I still failed to watch the stream with HTTP or HTTPS protocol
> > using VLC player as shown below:
> >
> > $ cvlc https://localhost:8443/live/surveillance.m3u8
> > VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
>
> > [00007f55a40024b0] gnutls tls client error: Certificate verification
> > failure: The certificate is NOT trusted. The certificate issuer is
> > unknown. The certificate chain uses expired certificate. The name in
> > the certificate does not match the expected.
>
> Your "cvlc" client fails to play the https stream because that client
> does not like this certificate.
>
> Tell the client to accept your certificate; or change your certificate
> to one that both clients will accept. No obvious nginx-side changes
> needed here.
>
> > $ cvlc http://localhost:8000/live/surveillance.m3u8
> > VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
>
> > [00007fd7e8001610] http stream error: cannot resolve localhost: Name
> > or service not known
>
> Here the cvlc client reports that it cannot resolve the name
> localhost. Your ffplay client could resolve that name; what is different
> about cvlc?
>
> > [00007fd7e8001610] main stream error: cannot resolve localhost port
> > 844000 : Name or service not known
> > [00007fd7e8001610] http stream error: cannot connect to localhost:844000
>
> And here it reports that it is trying to connect to port 844000 instead
> of the 8000 that is in the request that you showed.
>
> > [00007fd7f4000c80] main input error: VLC is unable to open the MRL
> > 'http://localhost:844000/live/surveillance.m3u8'. Check the log for
> > details.
>
> What part of the system turned
>
> "cvlc http://localhost:8000/live/surveillance.m3u8" into a request for
> http://localhost:844000/live/surveillance.m3u8?
>
> Perhaps "tcpdump" the port-8000 traffic to see if there is something
> about 844000 in the response? The "ffplay" output did not indicate any
> issue like that, though.
>
> Good luck with it,
Thanks a lot for your notes and suggestions.
I've obtained the free SSL certificate issued via www.dnspod.cn
for my domain auto.hyddns.xyz, and now I can successfully play the HLS
stream with VLC over HTTP(S), published by the docker-nginx-rtmp
container. But I still see some error messages, as shown below:
$ cvlc http://auto.hyddns.xyz:8000/live/surveillance.m3u8
VLC media player 4.0.0-dev Otto Chriek (revision 4.0.0-dev-14728-g63a50f5439)
[0000561991f9f3d0] dummy interface: using the dummy interface module...
[00007f1e20037dd0] main decoder error: buffer deadlock prevented
[00007f1e20130510] main decoder error: buffer deadlock prevented
[00007f1e20130be0] mpeg4audio packetizer: AAC channels: 2 samplerate: 44100
[0000561991f9f610] main input error: ES_OUT_SET_(GROUP_)PCR is called
1354 ms late (pts_delay increased to 1000 ms)
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
[0000561991f9f610] main input error: ES_OUT_SET_(GROUP_)PCR is called
382 ms late (pts_delay increased to 1382 ms)
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
[0000561991f9f610] main input error: ES_OUT_SET_(GROUP_)PCR is called
363 ms late (pts_delay increased to 1745 ms)
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
[0000561991f9f610] main input error: ES_OUT_SET_(GROUP_)PCR is called
1371 ms late (pts_delay increased to 2354 ms)
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f1e100048d0] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
$ cvlc https://auto.hyddns.xyz:8443/live/surveillance.m3u8
VLC media player 4.0.0-dev Otto Chriek (revision 4.0.0-dev-14728-g63a50f5439)
[000055ac16bc13d0] dummy interface: using the dummy interface module...
[00007f6fd4336b00] adaptive demux: Changing stream format Unknown -> Unknown
[00007f6fd4336b00] adaptive demux: Encountered discontinuity
[00007f6fd46213b0] main decoder error: buffer deadlock prevented
[00007f6fd4490600] main decoder error: buffer deadlock prevented
[00007f6fd44eea60] mpeg4audio packetizer: AAC channels: 2 samplerate: 44100
[00007f6fd46213b0] main decoder error: buffer deadlock prevented
[00007f6fd4490600] main decoder error: buffer deadlock prevented
[00007f6fd4638fc0] mpeg4audio packetizer: AAC channels: 2 samplerate: 44100
[000055ac16bc1610] main input error: ES_OUT_SET_(GROUP_)PCR is called
873 ms late (pts_delay increased to 1000 ms)
[00007f6fc4005430] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f6fc4005430] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
[000055ac16bc1610] main input error: ES_OUT_SET_(GROUP_)PCR is called
462 ms late (pts_delay increased to 1462 ms)
[00007f6fc4005430] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f6fc4005430] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
[000055ac16bc1610] main input error: ES_OUT_SET_(GROUP_)PCR is called
489 ms late (pts_delay increased to 1873 ms)
[00007f6fc4005430] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 0
[00007f6fc4005430] ts demux error: libdvbpsi error (PSI decoder): TS
duplicate (received 0, expected 1) for PID 4097
I'm not sure whether these messages indicate a real problem.
Regards
--
Assoc. Prof. Hongyi Zhao
Theory and Simulation of Materials
Hebei Polytechnic University of Science and Technology engineering
NO. 552 North Gangtie Road, Xingtai, China
From nginx-forum at forum.nginx.org Fri Feb 5 20:38:16 2021
From: nginx-forum at forum.nginx.org (MarioIshac)
Date: Fri, 05 Feb 2021 15:38:16 -0500
Subject: Using $upstream_response_time in add_header shows a dash
Message-ID: <8e8c6f22648b31d39f4a769fad3a09e6.NginxMailingListEnglish@forum.nginx.org>
Hello all,
I have this example nginx.conf:
https://gist.github.com/MarioIshac/e6971ab0b343da210de62ebb1c6e2f99 to
reproduce the behavior.
I start nginx and an example upstream with:
python3 -u -m http.server 8001 > app.log 2>&1 & sudo nginx > nginx.log 2>&1
Upon hitting nginx with `curl -i localhost:8000`, I see these response
headers:
X-Trip-Time: 0.001
X-Addr: 127.0.0.1:8001
X-Status: 200
X-Process-Time: -
`cat app.log` shows that upstream was hit successfully, and `cat nginx.log`
shows that nginx knows the $upstream_response_time at log time, as I get
this log:
127.0.0.1:8001 200 0.004
Why does nginx substitute the request time and relevant response metadata
(like $upstream_status) at add_header time successfully, yet substitutes the
upstream response time with a dash?
My goal with returning $upstream_response_time in a header is so the client
can know how much of their request latency was due to their upload speeds
vs. server processing time.
-Mario
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290674,290674#msg-290674
From nginx-forum at forum.nginx.org Sun Feb 7 10:04:54 2021
From: nginx-forum at forum.nginx.org (Kzone)
Date: Sun, 07 Feb 2021 05:04:54 -0500
Subject: proxy_pass directive don't catch sequence
Message-ID: <978360c690c53f681bfe415fc6933542.NginxMailingListEnglish@forum.nginx.org>
Hi,
The goal is to send a correct response when a malformed URL leads to a 404 file
not found. What happens is that, from time to time, I get a faulty URL
like this one:
https://FQDN-Server-Adress/server/rest/services/Urbanisme/ChargesUrbanistiques/jsapi/dojo/dojo.js
But the required file is located at
https://FQDN-Server-Adress:6443/arcgis/login//jsapi/dojo/dojo.js
Therefore I've set a proxy_pass directive following the location, but it
seems that the involved sequence is not correctly caught and the
malformed URLs are still returned. What am I doing wrong?
The double slash in proxy_pass between 'login' and 'jsapi' is needed, and
I'm running under Windows Server 2016.
Thanks.
Code chunk :
-----------------
## Redirect to tomcat when mismatch with /jsapi/dojo/ (20210130)
location /jsapi/dojo/ {
    proxy_http_version 1.1;
    include C:/NginX/1.19.1/proxyparams/proxy_params.conf;
    proxy_pass https://FQDN-Server-Adress:6443/arcgis/login//jsapi/dojo/;
    sub_filter 'Server-IP-Adress:1543' 'FQDN-Server-Adress';
    proxy_buffering on;
    proxy_buffer_size 16k;
    proxy_busy_buffers_size 24k;
    proxy_buffers 64 4k;
    client_max_body_size 12m;
}
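I also wonder whether a prefix location can ever match here, since the faulty URIs do not start with /jsapi/dojo/; maybe a regex location with a capture is needed instead (untested, keeping my placeholder host):

```nginx
# matches /jsapi/dojo/ anywhere in the URI, not only at the start
location ~ /jsapi/dojo/(.*)$ {
    proxy_http_version 1.1;
    include C:/NginX/1.19.1/proxyparams/proxy_params.conf;
    # with a captured variable, proxy_pass may carry a URI part
    # even inside a regex location
    proxy_pass https://FQDN-Server-Adress:6443/arcgis/login//jsapi/dojo/$1;
}
```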
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290676,290676#msg-290676
From mdounin at mdounin.ru Sun Feb 7 15:58:55 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 7 Feb 2021 18:58:55 +0300
Subject: Using $upstream_response_time in add_header shows a dash
In-Reply-To: <8e8c6f22648b31d39f4a769fad3a09e6.NginxMailingListEnglish@forum.nginx.org>
References: <8e8c6f22648b31d39f4a769fad3a09e6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210207155855.GF77619@mdounin.ru>
Hello!
On Fri, Feb 05, 2021 at 03:38:16PM -0500, MarioIshac wrote:
> Hello all,
>
> I have this example nginx.conf:
> https://gist.github.com/MarioIshac/e6971ab0b343da210de62ebb1c6e2f99 to
> reproduce the behavior.
>
> I start nginx and an example upstream with:
>
> python3 -u -m http.server 8001 > app.log 2>&1 & sudo nginx > nginx.log 2>&1
>
> Upon hitting nginx with `curl -i localhost:8000`, I see these response
> headers:
>
> X-Trip-Time: 0.001
> X-Addr: 127.0.0.1:8001
> X-Status: 200
> X-Process-Time: -
>
> `cat app.log` shows that upstream was hit successfully, and `cat nginx.log`
> shows that nginx knows the $upstream_response_time at log time, as I get
> this log:
>
> 127.0.0.1:8001 200 0.004
>
> Why does nginx substitute the request time and relevant response metadata
> (like $upstream_status) at add_header time successfully, yet substitutes the
> upstream response time with a dash?
>
> My goal with returning $upstream_response_time in a header is so the client
> can know how much of their request latency was due to their upload speeds
> vs. server processing time.
That's because response headers are sent before the
$upstream_response_time is known: it is only known when the
response is fully received from the upstream server, including the
response body, and this happens after the response headers are
sent to the client. If you want to return something in the
response headers, consider the $upstream_header_time variable
instead (http://nginx.org/r/$upstream_header_time).
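For example, keeping the rest of your configuration as is, something along these lines (the header name is only illustrative):

```nginx
location / {
    proxy_pass http://127.0.0.1:8001;
    # $upstream_header_time is available once the upstream response
    # headers have been received, so it can be returned to the client
    add_header X-Process-Time $upstream_header_time;
}
```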
--
Maxim Dounin
http://mdounin.ru/
From nginx-forum at forum.nginx.org Sun Feb 7 22:51:51 2021
From: nginx-forum at forum.nginx.org (MarioIshac)
Date: Sun, 07 Feb 2021 17:51:51 -0500
Subject: Using $upstream_response_time in add_header shows a dash
In-Reply-To: <20210207155855.GF77619@mdounin.ru>
References: <20210207155855.GF77619@mdounin.ru>
Message-ID: <84988d4594d130afabd673bfc9528f2f.NginxMailingListEnglish@forum.nginx.org>
Maxim Dounin Wrote:
-------------------------------------------------------
> [...]
> If you want to return something in the
> response headers, consider the $upstream_header_time variable
> instead (http://nginx.org/r/$upstream_header_time).
Thanks for the great clarification. I've tested out returning
$upstream_header_time in a response header and it is very close to the
$upstream_response_time that is logged, so I'll be going with this.
This is more out of curiosity rather than a need (since the times were
really close anyways), but does nginx have an option to buffer the whole
response (headers and body) before sending it?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290674,290679#msg-290679
From nginx-forum at forum.nginx.org Mon Feb 8 07:19:57 2021
From: nginx-forum at forum.nginx.org (petrg)
Date: Mon, 08 Feb 2021 02:19:57 -0500
Subject: reverse proxy does not load webpage data
In-Reply-To: <20210203181052.GD6011@daoine.org>
References: <20210203181052.GD6011@daoine.org>
Message-ID: <52f04f9c478a18a0bf1ea986547d2c3d.NginxMailingListEnglish@forum.nginx.org>
Hi Francis,
thanks a lot for taking the time to do this basic explanation.
I really appreciate that.
I think I now know where and how I have to adapt my HTML code so that nginx
can do its work successfully.
best regards
Peter
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290609,290680#msg-290680
From rainer at ultra-secure.de Mon Feb 8 17:16:32 2021
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Mon, 8 Feb 2021 18:16:32 +0100
Subject: wordpress with Nginx + fastcgi_cache with ssl but behind haproxy
Message-ID:
Hi,
I have an interesting problem.
I have apache behind Nginx behind haproxy.
SSL is terminated with haproxy (because haproxy can load all certificates from a single directory, and because some rate-limiting stuff is easier with haproxy).
This makes using Let's Encrypt easier.
Sometimes, I want to do Nginx + fastcgi + php-fpm directly, without apache (it's measurably faster).
For apache, you need this in the configuration:
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
(and for good measure, also this:
SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on
)
For fast-cgi, one also needs this in the configuration (fastcgi_params):
fastcgi_param HTTPS $fwd_ssl;
$fwd_ssl is generated by this map:
map $http_x_forwarded_proto $fwd_ssl {
http off;
https on;
}
in the global http section.
In wordpress, when I enable "Really Simple SSL", I get a redirect loop (to https) on the front-page (as an unauthenticated user) but the backend works.
I wonder what wordpress is missing so that it still thinks the connection is coming over http instead of https.
Any ideas?
Best Regards
Rainer
From aliofthemohsins at gmail.com Tue Feb 9 04:08:25 2021
From: aliofthemohsins at gmail.com (Ali Mohsin)
Date: Tue, 9 Feb 2021 09:08:25 +0500
Subject: wordpress with Nginx + fastcgi_cache with ssl but behind haproxy
In-Reply-To:
References:
Message-ID:
Hi, normally when I get an infinite loop with SSL, it's usually because of
redirection of http to https. Sometimes the front proxy (Cloudflare or haproxy)
is expecting plain http traffic and it gets https traffic, or vice versa.
Also check your wordpress settings and its URL. Try changing it.
And why are you using so much stuff just for wordpress? Simple nginx,
php-fpm, fcgi cache works for me. And rate limiting works in nginx too. Try
simplifying the setup so there are less variables to deal with.
On Mon, 8 Feb 2021, 10:16 PM Rainer Duffner, wrote:
> [...]
From nginx-forum at forum.nginx.org Tue Feb 9 09:43:28 2021
From: nginx-forum at forum.nginx.org (bibi93)
Date: Tue, 09 Feb 2021 04:43:28 -0500
Subject: How to configure nginx fort UDP and TCP
Message-ID: <36a6a26674245d867700470498076a67.NginxMailingListEnglish@forum.nginx.org>
Hello,
I have a problem with nginx, which I do not know at all. Let me explain.
I would like to put 4 servers behind one nginx server. These servers are not
web sites but servers for package deployment, supervision, and antivirus, so
they are not accessed as sites. They talk to other servers through agents
over TCP and UDP ports.
Is it possible for these agents to call an alias pointing to a public IP (the
nginx server), which would dispatch the traffic to the servers in question
according to the alias called, using only the defined ports?
For example: my client agent calls the deployment server via UDP port 514;
nginx only has to pass this UDP traffic to the deployment server, which sends
the package back to the client.
How do we do the configuration, if possible?
Thanking you in advance
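PS: from what I read, this kind of raw TCP/UDP forwarding would use the stream module rather than the http module; would something like this work (all addresses and ports below are just examples)?

```nginx
# stream {} goes at the same level as http {} in nginx.conf
stream {
    # pass UDP 514 through to the deployment server
    server {
        listen 514 udp;
        proxy_pass 10.0.0.11:514;
    }

    # pass a TCP port through to the supervision server
    server {
        listen 8443;
        proxy_pass 10.0.0.12:8443;
    }
}
```

As far as I understand, plain TCP/UDP carries no hostname, so the dispatch would have to be per listen port rather than per alias.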
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290698,290698#msg-290698
From rainer at ultra-secure.de Tue Feb 9 12:32:31 2021
From: rainer at ultra-secure.de (Rainer Duffner)
Date: Tue, 9 Feb 2021 13:32:31 +0100
Subject: wordpress with Nginx + fastcgi_cache with ssl but behind haproxy
In-Reply-To:
References:
Message-ID:
It's set up this way, because haproxy can't really do vhosts and sometimes you need to limit access per vhost.
OTOH, haproxy can do restrictions on a per-URL basis much better (IMO) than Nginx.
There are up to several hundred vhosts there and sometimes you want to limit stuff on any one of them.
Plus, as I said, haproxy's handling of certificates is sometimes very convenient.
I run Let's Encrypt on almost all of these vhosts and, due to the way they are provisioned, it's much easier than dealing with the individual Nginx configuration files.
I will try and activate SSL without the Really Simple SSL plugin, maybe it is doing something weird - though with all the SSL offloading going on these days, you'd think this isn't a too unusual case?
> On 09.02.2021 at 05:08, Ali Mohsin wrote:
>
> Hi, normally when I get infinite loop with ssl, its usually because of redirection of http to https. Sometimes front proxy (cloudflare or haproxy) is expecting simple http traffic and it gets https traffic and vice versa.
> Also check your wordpress settings and its url. Try changing it.
> And why are you using so much stuff just for wordpress? Simple nginx, php-fpm, fcgi cache works for me. And rate limiting works in nginx too. Try simplifying the setup so there are less variables to deal with.
>
> [...]
From aliofthemohsins at gmail.com Tue Feb 9 12:53:55 2021
From: aliofthemohsins at gmail.com (Ali Mohsin)
Date: Tue, 9 Feb 2021 17:53:55 +0500
Subject: wordpress with Nginx + fastcgi_cache with ssl but behind haproxy
In-Reply-To:
References:
Message-ID:
Try activating ssl without the plugin. Change the url in wordpress settings.
On Tue, 9 Feb 2021, 5:32 PM Rainer Duffner, wrote:
> [...]
From acroock at isazi.ai Wed Feb 10 14:21:20 2021
From: acroock at isazi.ai (Ari Croock)
Date: Wed, 10 Feb 2021 16:21:20 +0200
Subject: Upstream Consistent Hash Inconsistency
Message-ID:
Hi all,
Not sure if this belongs on the devel mailing list, I thought I would try
to post here first.
*Summary*
The "hash" and "hash ... consistent" upstream directives behave differently
when an upstream hostname resolves to multiple IP addresses.
*My Use-case:*
I am using nginx as a reverse proxy for load balancing inside a
docker-compose network.
In docker-compose, you can set up a service with multiple replicas and a
common hostname. A DNS lookup returns the IP addresses for each replica
(e.g. if I have containers "dash_1" and "dash_2", "dash" could resolve to
the IP addresses for both).
The application we are hosting requires sticky sessions. We have
implemented this using the "hash ... consistent" upstream directive.
So an example upstream could look like:
upstream test_upstream {
hash $token consistent;
server dashboard_test:3838;
}
where "$token" is the unique token for hashing.
Note that "dashboard_test" can resolve to multiple IP addresses as
explained above.
As I understand it, when nginx starts up it will do a DNS lookup for
"dashboard_test" which resolves to multiple IP addresses, effectively
resulting in an upstream group with multiple "servers".
*My Problem:*
It seems that the "hash" and "hash consistent" directives behave
differently.
I had a quick look at the source (ngx_http_upstream_hash_module.c) and it
looks like the "regular" hash method determines an actual IP address from
the hash, but the "chash" *only determines the entry in the upstream group*.
Empirically, this does seem to be the case. With the "hash" directive nginx
always proxies to the same upstream server. With the "hash consistent"
directive I get a different upstream server on every request.
Can anyone comment on whether this is intended behaviour? From my point of
view this seems to be a bug. But I can imagine that changing this behaviour
might break someone else's use case.
Kind regards
--
Ari Croock | Isazi | Software Engineer | BSc (Eng) - Electrical
38 Homestead Road, Rivonia, Johannesburg, South Africa, 2128 | Tel: +27 11 027 6996
Email: acroock at isazi.ai | www.isazi.ai
From mdounin at mdounin.ru Wed Feb 10 15:41:10 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 10 Feb 2021 18:41:10 +0300
Subject: Upstream Consistent Hash Inconsistency
In-Reply-To:
References:
Message-ID: <20210210154110.GL77619@mdounin.ru>
Hello!
On Wed, Feb 10, 2021 at 04:21:20PM +0200, Ari Croock wrote:
> Hi all,
>
> Not sure if this belongs on the devel mailing list, I thought I would try
> to post here first.
>
>
> *Summary*
> The "hash" and "hash ... consistent" upstream directives behave differently
> when an upstream hostname resolves to multiple IP addresses.
>
>
> *My Use-case:*
>
> I am using nginx as a reverse proxy for load balancing inside a
> docker-compose network.
> In docker-compose, you can setup a service with multiple replicas and with
> a common hostname. A DNS lookup returns the IP addresses for each replica
> (e.g. if I have containers "dash_1" and "dash_2", "dash" could resolve to
> the IP addresses for both).
>
> The application we are hosting requires sticky sessions. We have
> implemented this using the "hash ... consistent" upstream directive.
> So an example upstream could look like:
> upstream test_upstream {
> hash $token consistent;
> server dashboard_test:3838;
> }
> where "$token" is the unique token for hashing.
> Note that "dashboard_test" can resolve to multiple IP addresses as
> explained above.
>
> As I understand it, when nginx starts up it will do a DNS lookup for
> "dashboard_test" which resolves to multiple IP addresses, effectively
> resulting in an upstream group with multiple "servers".
>
>
> *My Problem:*
>
> It seems that the "hash" and "hash consistent" directives behave
> differently.
>
> I had a quick look at the source (ngx_http_upstream_hash_module.c) and it
> looks like the "regular" hash method determines an actual IP address from
> the hash, but the "chash" *only determines the entry in the upstream group*.
>
> Empirically, this does seem to be the case. With the "hash" directive nginx
> always proxies to the same upstream server. With the "hash consistent"
> directive I get a different upstream server on every request.
>
> Can anyone comment on whether this is intended behaviour? From my point of
> view this seems to be a bug. But I can imagine that changing this behaviour
> might break someone else's use case.
It is intended behaviour. Consistent hashing depends on the
server name written in the configuration to identify servers (and
distribute requests). Servers with the same name are effectively
identical, even if the IP address of the server changes. If the name
resolves to multiple addresses, these addresses are used in
round-robin fashion.
That is, your configuration with just one server in the upstream
block is effectively identical to round-robin configuration. If
you want consistent hashing to work, you have to list your backend
servers in the upstream block by using separate "server"
directives.
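That is, something like the following (using the replica names from your example):

```nginx
upstream test_upstream {
    hash $token consistent;
    # each replica listed by its own name, so the consistent hash
    # can tell them apart
    server dash_1:3838;
    server dash_2:3838;
}
```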
--
Maxim Dounin
http://mdounin.ru/
From acroock at isazi.ai Wed Feb 10 17:03:30 2021
From: acroock at isazi.ai (Ari Croock)
Date: Wed, 10 Feb 2021 19:03:30 +0200
Subject: Upstream Consistent Hash Inconsistency
In-Reply-To: <20210210154110.GL77619@mdounin.ru>
References:
<20210210154110.GL77619@mdounin.ru>
Message-ID:
Hi Maxim,
Thanks for your reply.
That makes sense to me, except for the fact that "hash" (without
"consistent") doesn't seem to be doing round-robin load balancing.
Is there a reason that regular "hash" keeps returning a consistent IP? I
could understand if both directives resulted in the behaviour you
described, but it seems strange that only one would.
Thanks
On Wed, 10 Feb 2021 at 17:41, Maxim Dounin wrote:
> [...]
--
Ari Croock | Isazi | Software Engineer | BSc (Eng) - Electrical
38 Homestead Road, Rivonia, Johannesburg, South Africa, 2128 | Tel: +27 11 027 6996
Email: acroock at isazi.ai | www.isazi.ai
From nginx-forum at forum.nginx.org Wed Feb 10 21:55:11 2021
From: nginx-forum at forum.nginx.org (mariohbrr)
Date: Wed, 10 Feb 2021 16:55:11 -0500
Subject: Push rtmp with username and password
Message-ID:
Hello,
I set up Nginx for multistreaming to YouTube and Facebook, but I need to add a
third platform, Dacast. This live stream requires username and password
authentication and I don't know how to configure rtmp. Does anyone have any
tips?
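What I have so far is the usual multistream setup; for Dacast I can only guess, since I don't know whether credentials can go in the push URL at all (all hosts and keys below are placeholders):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # existing targets, stream keys are placeholders
            push rtmp://a.rtmp.youtube.com/live2/YOUTUBE-KEY;
            push rtmp://live-api-s.facebook.com:443/rtmp/FACEBOOK-KEY;
            # guess: some services accept credentials embedded in the URL
            push rtmp://USERNAME:PASSWORD@DACAST-HOST/app/STREAM-KEY;
        }
    }
}
```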
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290730,290730#msg-290730
From b.jeyamurugan at gmail.com Thu Feb 11 01:56:50 2021
From: b.jeyamurugan at gmail.com (Jeya Murugan)
Date: Thu, 11 Feb 2021 07:26:50 +0530
Subject: How to improve Nginx performance
Message-ID:
Hello All,
We are using Nginx as a cache server for testing.
When running the HTTPS functional and performance tests, we see a
drastic drop in performance compared to plain HTTP requests.
Can you please suggest what parameters we can tweak (like connection reuse)
to get better results? Any other suggestions are also really helpful.
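So far I was planning to start with TLS session reuse and connection keep-alive on both sides, e.g. (values and names below are only starting points, not a tested configuration):

```nginx
http {
    # reuse TLS sessions so repeat connections skip the full handshake
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;

    # keep client connections open across several requests
    keepalive_timeout 65;

    upstream backend {
        server 127.0.0.1:8080;
        # reuse connections to the upstream as well
        keepalive 32;
    }

    server {
        listen 443 ssl;
        ssl_protocols TLSv1.2 TLSv1.3;

        location / {
            # HTTP/1.1 with an empty Connection header is required
            # for upstream keepalive to take effect
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://backend;
        }
    }
}
```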
Regards
Jeya Muruagan B
From mdounin at mdounin.ru Thu Feb 11 02:22:37 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 11 Feb 2021 05:22:37 +0300
Subject: Upstream Consistent Hash Inconsistency
In-Reply-To:
References:
<20210210154110.GL77619@mdounin.ru>
Message-ID: <20210211022237.GM77619@mdounin.ru>
Hello!
On Wed, Feb 10, 2021 at 07:03:30PM +0200, Ari Croock wrote:
> That makes sense to me, except for the fact that "hash" (without
> "consistent") doesn't seem to be doing round-robin load balancing.
>
> Is there a reason that regular "hash" keeps returning a consistent IP? I
> could understand if both directives resulted in the behaviour you
> described, but it seems strange that only one would.
That's because the algorithms used are quite different.
The simple hashing algorithm doesn't care about being consistent, and
uses the Nth peer, where N is calculated as the hash modulo the
total number of peers. This works well as long as the list of
servers is not changed. If the list is changed, however, new hashing
will result in a completely different peer being used for requests
with the same hash key. Note that "the list is changed" also
implies changes in the number or order of IP addresses if you use
names in the configuration.
Consistent hashing works differently: instead, it tries to
preserve mapping from hash to a particular upstream server. To do
so, it relies on names used in the configuration, so changes in
the configuration, such as order of servers, or even changes of IP
addresses used by particular servers, do not affect distribution
of the requests between servers. As a result, if a name in the
configuration maps to more than one IP address, these addresses
are all the same as far as the algorithm is concerned, since they use the same name.
It should be possible to implement consistent hashing differently,
for example, using IP addresses of the particular peers instead of
names from the configuration. This approach will probably match
what you are trying to do somewhat better. This is not how it is
currently implemented in nginx though.
--
Maxim Dounin
http://mdounin.ru/
From acroock at isazi.ai Thu Feb 11 08:54:34 2021
From: acroock at isazi.ai (Ari Croock)
Date: Thu, 11 Feb 2021 10:54:34 +0200
Subject: Upstream Consistent Hash Inconsistency
In-Reply-To: <20210211022237.GM77619@mdounin.ru>
References:
<20210210154110.GL77619@mdounin.ru>
<20210211022237.GM77619@mdounin.ru>
Message-ID:
Thanks for the explanation.
I don't think changing the current implementation would be a good idea, but
what do you think of adding another option in addition to "consistent"?
It could use the consistent hashing algorithm with peer IP addresses
instead of hostnames, as you suggested.
Not sure what the nginx policy is for that sort of change.
If that's not possible, do you think we could at least add a line to the
documentation about this edge case?
Thanks
On Thu, 11 Feb 2021 at 04:22, Maxim Dounin wrote:
> Hello!
>
> On Wed, Feb 10, 2021 at 07:03:30PM +0200, Ari Croock wrote:
>
> > That makes sense to me, except for the fact that "hash" (without
> > "consistent") doesn't seem to be doing round-robin load balancing.
> >
> > Is there a reason that regular "hash" keeps returning a consistent IP? I
> > could understand if both directives resulted in the behaviour you
> > described, but it seems strange that only one would.
>
> That's because the algorithms used are quite different.
>
> Simple hashing algorithm doesn't care about being consistent, and
> uses the Nth peer, where N is calculated as a hash modulo
> total number of peers. This works good as long as the list of
> servers is not changed. If the list is changed, however, new hashing
> will result in completely different peer being used for requests
> with the same hash key. Note that "the list is changed" also
> implies changes in number or order of IP addresses if you use
> names in the configuration.
>
> Consistent hashing works differently: instead, it tries to
> preserve mapping from hash to a particular upstream server. To do
> so, it relies on names used in the configuration, so changes in
> the configuration, such as order of servers, or even changes of IP
> addresses used by particular servers, do not affect distribution
> of the requests between servers. As a result, if a name in the
> configuration maps to more than one IP address, these addresses
> are equal to the algorithm, since they use the same name.
>
> It should be possible to implement consistent hashing differently,
> for example, using IP addresses of the particular peers instead of
> names from the configuration. This approach will probably match
> what you are trying to do somewhat better. This is not how it is
> currently implemented in nginx though.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
--
Ari Croock | Isazi | Software Engineer | BSc (Eng) - Electrical
38 Homestead Road, Rivonia, Johannesburg, South Africa, 2128 | +27 11 027 6996
acroock at isazi.ai | www.isazi.ai
From nginx_list at chezphil.org Mon Feb 15 20:58:54 2021
From: nginx_list at chezphil.org (Phil Endecott)
Date: Mon, 15 Feb 2021 20:58:54 +0000
Subject: proxy_store doesn't create directories
Message-ID: <1613422734148@dmwebmail.dmwebmail.chezphil.org>
Dear Experts,
I have just tried to use proxy_store for the first time and
it seems to almost work, but it looks like it does not create
the parent directories for the things that it needs to save.
I am attempting to store things with quite a deep directory
structure.
Is this the intended behaviour? Am I expected to create a
skeleton directory structure first?
I have the Debian packages of nginx 1.14.2.
Thanks, Phil.
From francis at daoine.org Mon Feb 15 22:42:32 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 15 Feb 2021 22:42:32 +0000
Subject: proxy_store doesn't create directories
In-Reply-To: <1613422734148@dmwebmail.dmwebmail.chezphil.org>
References: <1613422734148@dmwebmail.dmwebmail.chezphil.org>
Message-ID: <20210215224232.GI6011@daoine.org>
On Mon, Feb 15, 2021 at 08:58:54PM +0000, Phil Endecott wrote:
Hi there,
> I have just tried to use proxy_store for the first time and
> it seems to almost work, but it looks like it does not create
> the parent directories for the things that it needs to save.
What user:group does your nginx run as? (I think the debian default
is www-data.)
What are the ownership and permissions on the directory you would like
nginx to write into?
Is there anything in the error log, especially related to mkdir()
or rename()?
> Is this the intended behaviour? Am I expected to create a
> skeleton directory structure first?
As far as I know: no, and no.
It should Just Work.
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Mon Feb 15 23:04:59 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 15 Feb 2021 23:04:59 +0000
Subject: How to improve Nginx performance
In-Reply-To:
References:
Message-ID: <20210215230459.GJ6011@daoine.org>
On Thu, Feb 11, 2021 at 07:26:50AM +0530, Jeya Murugan wrote:
Hi there,
> We are using Nginx as a cache server for testing.
>
> On performing the HTTPS functional and performance test, we could see a
> drastic drop in performance (in HTTPS) compared to HTTP requests.
>
> Can you please suggest what parameters we can tweak (like connection reuse)
> to get better results? Or Any other suggestions are really helpful.
You're probably more likely to get useful tips if you can show your
starting point, and describe what result you want.
https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/
is a few years old, but does include a link to the specific tests that
were done then.
It also has a link to a sizing guide with some numbers when using nginx
as a reverse proxy.
Maybe what you see as "a drastic drop" is normal, if every http request
involves a new from-scratch tls connection?
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Mon Feb 15 23:17:26 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 15 Feb 2021 23:17:26 +0000
Subject: How to configure nginx fort UDP and TCP
In-Reply-To: <36a6a26674245d867700470498076a67.NginxMailingListEnglish@forum.nginx.org>
References: <36a6a26674245d867700470498076a67.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210215231726.GK6011@daoine.org>
On Tue, Feb 09, 2021 at 04:43:28AM -0500, bibi93 wrote:
Hi there,
> I would like to put 4 server behind one NGINX server. these servers are not
> WEB sites but for deployment of packages, supervision, antivirus. therefore
> we do not access sites. So these are servers that discuss with other servers
> by agent bias under TCP and UDP ports.
It sounds like you may want to use the nginx "stream" module, which has
documentation and some examples at
http://nginx.org/en/docs/stream/ngx_stream_core_module.html
> Is it possible to make these agents call an alias to a public IP which would
> be the NGINX server which would dispatch the flows according to the alias
> called to the servers in question only with the defined ports
Note that you will probably need to distinguish your back-end servers
by TCP or UDP port, because the nginx server will not (in general)
know which hostname the client used originally; nginx will know which
IP address and port it connected to.
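For illustration, a minimal stream configuration along these lines might look like the following (the addresses and ports are invented placeholders):

```nginx
stream {
    # One back-end per listen port, since the original hostname
    # is not available at this layer.
    server {
        listen 9001;                   # TCP agents (e.g. deployment)
        proxy_pass 192.0.2.10:9001;
    }

    server {
        listen 9002 udp;               # UDP agents (e.g. supervision)
        proxy_pass 192.0.2.11:9002;
    }
}
```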
Good luck with it,
f
--
Francis Daly francis at daoine.org
From lists at lazygranch.com Tue Feb 16 07:00:31 2021
From: lists at lazygranch.com (lists)
Date: Mon, 15 Feb 2021 23:00:31 -0800
Subject: How to improve Nginx performance
In-Reply-To: <20210215230459.GJ6011@daoine.org>
Message-ID:
If you follow the suggested link in the previous post you can download an O'Reilly Nginx book.
One suggestion I have to improve performance is to firewall off all the 'bots. Firewalls are extremely efficient. Start with AWS:
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Bots are half the internet traffic, but that includes both bad and good bots. If you are handy with bgp.he.net, I suggest blocking OVH.
Some people block entire countries.
---- Original Message ----
From: francis at daoine.org
Sent: February 15, 2021 3:05 PM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Re: How to improve Nginx performance
On Thu, Feb 11, 2021 at 07:26:50AM +0530, Jeya Murugan wrote:
Hi there,
> We are using Nginx as a cache server for testing.
>
> On performing the HTTPS functional and performance test, we could see a
> drastic drop in performance (in HTTPS) compared to HTTP requests.
>
> Can you please suggest what parameters we can tweak (like connection reuse)
> to get better results? Or Any other suggestions are really helpful.
You're probably more likely to get useful tips if you can show your
starting point, and describe what result you want.
https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/
is a few years old, but does include a link to the specific tests that
were done then.
It also has a link to a sizing guide with some numbers when using nginx
as a reverse proxy.
Maybe what you see as "a drastic drop", is normal, if every http request
involves a new from-scratch tls connection?
Good luck with it,
f
--
Francis Daly        francis at daoine.org
From nginx-forum at forum.nginx.org Tue Feb 16 15:00:47 2021
From: nginx-forum at forum.nginx.org (sanflores)
Date: Tue, 16 Feb 2021 10:00:47 -0500
Subject: Proxy pass set body on if
Message-ID: <0ddc5215c38556e93f574a947532ff21.NginxMailingListEnglish@forum.nginx.org>
I have an Angular app and need to use puppeteer for SSR. In order to make it
work I need to send the request with the body, but I can't figure out how to
make these things work together.
if ($limit_bots = 1){
proxy_pass http://localhost:3000/puppeteer/download/html/;
proxy_method GET;
proxy_set_header content-type "application/json";
proxy_pass_request_body off;
proxy_set_body "{\"url\":\"https://$request\"}";
}
location ~ /index.html|.*\.json$ {   # Don't cache index.html and *.json files
expires -1;
add_header Cache-Control 'no-store, no-cache, must-revalidate,
proxy-revalidate, max-age=0';
include /etc/nginx/security-headers.conf;
}
The idea would be to redirect bots to the static HTML that will be provided
by puppeteer running on another server. But adding proxy_pass inside an "if"
is deprecated, so I should be using a rewrite statement... but then I can't
set the body.
Thank you in advance!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290773,290773#msg-290773
From nginx_list at chezphil.org Tue Feb 16 15:48:49 2021
From: nginx_list at chezphil.org (Phil Endecott)
Date: Tue, 16 Feb 2021 15:48:49 +0000
Subject: proxy_store doesn't create directories
In-Reply-To: <1613422734148@dmwebmail.dmwebmail.chezphil.org>
References: <1613422734148@dmwebmail.dmwebmail.chezphil.org>
Message-ID: <1613490529720@dmwebmail.dmwebmail.chezphil.org>
Phil Endecott wrote:
> I have just tried to use proxy_store for the first time and
> it seems to almost work, but it looks like it does not create
> the parent directories for the things that it needs to save.
OK, I've got this working now.
My aim is for a request to http://server.local/remote.com/path/file
to be fetched from http://remote.com/path/file and stored in
/var/cache/nginx/remote.com/path/file.
My config is now:
location /remote.com/ {
root /var/cache/nginx/;
error_page 404 = /fetch$uri;
}
location ~^/fetch/(.*)$ {
internal;
proxy_pass http://$1;
proxy_store on;
proxy_store_access user:rw group:r all:r;
alias /var/cache/nginx/$1;
}
My mistake before was that I omitted the $1 from the alias line.
(Is there a simpler way to do this?)
Thanks Francis for your reply.
Phil.
From mdounin at mdounin.ru Tue Feb 16 16:12:32 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 16 Feb 2021 19:12:32 +0300
Subject: nginx-1.19.7
Message-ID: <20210216161232.GG77619@mdounin.ru>
Changes with nginx 1.19.7 16 Feb 2021
*) Change: connections handling in HTTP/2 has been changed to better
match HTTP/1.x; the "http2_recv_timeout", "http2_idle_timeout", and
"http2_max_requests" directives have been removed, the
"keepalive_timeout" and "keepalive_requests" directives should be
used instead.
*) Change: the "http2_max_field_size" and "http2_max_header_size"
directives have been removed, the "large_client_header_buffers"
directive should be used instead.
*) Feature: now, if free worker connections are exhausted, nginx starts
closing not only keepalive connections, but also connections in
lingering close.
*) Bugfix: "zero size buf in output" alerts might appear in logs if an
upstream server returned an incorrect response during unbuffered
proxying; the bug had appeared in 1.19.1.
*) Bugfix: HEAD requests were handled incorrectly if the "return"
directive was used with the "image_filter" or "xslt_stylesheet"
directives.
*) Bugfix: in the "add_trailer" directive.
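For configurations that used the removed HTTP/2 directives, the replacement might look like this (values are illustrative, not recommendations):

```nginx
server {
    listen 443 ssl http2;

    # formerly http2_idle_timeout / http2_max_requests:
    keepalive_timeout  75s;
    keepalive_requests 1000;

    # formerly http2_max_field_size / http2_max_header_size:
    large_client_header_buffers 4 8k;
}
```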
--
Maxim Dounin
http://nginx.org/
From xeioex at nginx.com Tue Feb 16 18:07:58 2021
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 16 Feb 2021 21:07:58 +0300
Subject: njs-0.5.1
Message-ID: <182c1d6d-9ff4-6a96-c6f7-8e3acee3f0d9@nginx.com>
Hello,
I'm glad to announce a new release of NGINX JavaScript module (njs).
This release focuses on extending the modules functionality.
Notable new features:
- ngx.fetch() method implements a generic HTTP client which
does not depend on subrequests:
: example.js:
: function fetch(r) {
: ngx.fetch('http://nginx.org/')
: .then(reply => reply.text())
: .then(body => r.return(200, body))
: .catch(e => r.return(501, e.message));
: }
- js_header_filter directive. The directive allows changing arbitrary
header fields of a response header.
: nginx.conf:
: js_import foo.js;
:
: location / {
: js_header_filter foo.filter;
: proxy_pass http://127.0.0.1:8081/;
: }
:
: foo.js:
: function filter(r) {
: var cookies = r.headersOut['Set-Cookie'];
: var len = r.args.len ? Number(r.args.len) : 0;
: r.headersOut['Set-Cookie'] = cookies.filter(v=>v.length > len);
: }
:
: export default {filter};
You can learn more about njs:
- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs
- Using node modules with njs:
http://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files:
http://nginx.org/en/docs/njs/typescript.html
Feel free to try it and give us feedback on:
- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel
Changes with njs 0.5.1 16 Feb 2021
nginx modules:
*) Feature: introduced ngx.fetch() method implementing Fetch API.
The following init options are supported:
body, headers, buffer_size (nginx specific),
max_response_body_size (nginx specific), method.
The following properties and methods of Response object are
implemented: arrayBuffer(), bodyUsed, json(), headers, ok,
redirect, status, statusText, text(), type, url.
The following properties and methods of Header object are
implemented: get(), getAll(), has().
Notable limitations: only the http:// scheme is supported,
redirects are not handled.
In collaboration with 洪志道 (Hong Zhi Dao).
*) Feature: added the "js_header_filter" directive.
*) Bugfix: fixed processing buffered data in body filter
in stream module.
Core:
*) Bugfix: fixed safe mode bypass in Function constructor.
*) Bugfix: fixed Date.prototype.toISOString() with invalid date
values.
From grzegorz.czesnik at hotmail.com Wed Feb 17 17:05:42 2021
From: grzegorz.czesnik at hotmail.com (=?iso-8859-2?Q?Grzegorz_Cze=B6nik?=)
Date: Wed, 17 Feb 2021 17:05:42 +0000
Subject: Creating the Directory Structure - static content
Message-ID:
Hi,
I am viewing various tutorials on configuring server blocks. I am wondering about the root directory structure for a server block for some static pages. I wanted to ask what are the best practices on this topic. I found such structure proposals as:
/var/www//public_html
/var/www/html/
/var/www/
Are there any benefits to using one of these examples? Or is there freedom in choosing whatever layout you like?
Grzegorz
From gfrankliu at gmail.com Wed Feb 17 19:24:32 2021
From: gfrankliu at gmail.com (Frank Liu)
Date: Wed, 17 Feb 2021 11:24:32 -0800
Subject: client_max_body_size and chunked encoding
Message-ID:
Hi,
The doc
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
says size limit is on "Content-Length", but this post
https://serverfault.com/questions/871717/nginx-disconnect-when-client-sends-chunked-body-exceeding-desired-size
says it also works on chunked encoding. Is that correct? If so, can we
update the doc? If not, what is the best way to limit the request size for
chunked encoding?
Thanks,
Frank
From nginx-forum at forum.nginx.org Thu Feb 18 04:40:49 2021
From: nginx-forum at forum.nginx.org (allenhe)
Date: Wed, 17 Feb 2021 23:40:49 -0500
Subject: Keepalived Connections Reset after reloading the configuration
(HUP Signal)
In-Reply-To: <20110516113312.GH42265@mdounin.ru>
References: <20110516113312.GH42265@mdounin.ru>
Message-ID:
Hi Maxim Dounin,
Is it possible that nginx is closing the keepalive connection while
there is input data queued?
As we know, if a stream socket is closed while there is input data queued,
the TCP connection is reset rather than being cleanly closed.
Br,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,197927,290793#msg-290793
From nginx-forum at forum.nginx.org Thu Feb 18 08:14:08 2021
From: nginx-forum at forum.nginx.org (allenhe)
Date: Thu, 18 Feb 2021 03:14:08 -0500
Subject: Keepalived Connections Reset after reloading the configuration
(HUP Signal)
In-Reply-To:
References: <20110516113312.GH42265@mdounin.ru>
Message-ID: <4af0ef62de88d2baf7f687490fda17d5.NginxMailingListEnglish@forum.nginx.org>
Looking into the big loop code, could it happen that the worker process
closes the keepalive connection before consuming pending read events?
for ( ;; ) {
    if (ngx_exiting) {
        if (ngx_event_no_timers_left() == NGX_OK) {
            ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting");
            ngx_worker_process_exit(cycle);
        }
    }

    ngx_log_debug0(NGX_LOG_DEBUG_EVENT, cycle->log, 0, "worker cycle");

    ngx_process_events_and_timers(cycle);

    if (ngx_terminate) {
        ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting");
        ngx_worker_process_exit(cycle);
    }

    if (ngx_quit) {
        ngx_quit = 0;
        ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0,
                      "gracefully shutting down");
        ngx_setproctitle("worker process is shutting down");

        if (!ngx_exiting) {
            ngx_exiting = 1;
            ngx_set_shutdown_timer(cycle);
            ngx_close_listening_sockets(cycle);
            ngx_close_idle_connections(cycle);  /* <-- what if a read
                                                   event arrives here? */
        }
    }
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,197927,290794#msg-290794
From majid210484 at gmail.com Fri Feb 19 11:47:28 2021
From: majid210484 at gmail.com (Majid M A)
Date: Fri, 19 Feb 2021 17:17:28 +0530
Subject: Nginx Basic Auth not working before re-directing to a subpath of the
website
Message-ID:
Dear Nginx Users,
I have a scenario that whenever a user hits the website (
https://abc.test.com), it should re-direct to the sub-path (/xxx/yyy/) of
the website (ex: https://abc.test.com/xxx/yyy).
So re-direction to a website's sub-path (https://abc.test.com/xxx/yyy) is
working.
I have implemented http basic auth, so whenever the user is re-directed to
sub-path /xxx/yyy/, the basic auth should come into effect and then the
website has to be re-directed.
The issue is basic auth is not coming into effect before re-directing to
the web-site's sub-path it just re-directs.
This is what my nginx config looks like:
listen 443 ssl;
server_name aa.bb.com;
set $backend "https://abc.test.com:8080/";
location / {
proxy_pass $backend;
proxy_pass_header Authorization;
auth_basic "Access Denied or Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
return 301 https://abc.test.com:8080/xxx/yyy/;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host aa.bb.com;
client_max_body_size 10m;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 900;
}
}
As you can see above, in the location block I am using basic auth just
before the redirection, but the basic auth does not take effect.
Any ideas or suggestions are highly appreciated.
nginx version: nginx/1.16.1
Thanks & Regards,
Majid M A
From pluknet at nginx.com Fri Feb 19 15:26:06 2021
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Fri, 19 Feb 2021 18:26:06 +0300
Subject: client_max_body_size and chunked encoding
In-Reply-To:
References:
Message-ID:
> On 17 Feb 2021, at 22:24, Frank Liu wrote:
>
> Hi,
>
> The doc http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size says size limit is on "Content-Length", but this post https://serverfault.com/questions/871717/nginx-disconnect-when-client-sends-chunked-body-exceeding-desired-size says it also works on chunked encoding. Is that correct? If so, can we update the doc? If not, what is the best way to limit the request size for chunked encoding?
The client_max_body_size directive limits the maximum request body size
as specified in the "Content-Length" request header.
Since chunked transfer encoding support was introduced for the request
body in nginx 1.3.9, the limit is instead applied while processing body
chunks if the request body is signalled with "Transfer-Encoding: chunked".
In HTTP/2 (and the upcoming HTTP/3) the limit also applies to the actually
processed request body chunks in the corresponding protocols,
if the "Content-Length" request header was not specified in the request.
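For example (an illustrative fragment; the sizes and location are arbitrary):

```nginx
http {
    client_max_body_size 1m;          # default; applies to Content-Length
                                      # and, since 1.3.9, to chunked bodies
    server {
        listen 80;

        location /upload/ {
            client_max_body_size 50m; # per-location override
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```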
--
Sergey Kandaurov
From francis at daoine.org Fri Feb 19 16:04:20 2021
From: francis at daoine.org (Francis Daly)
Date: Fri, 19 Feb 2021 16:04:20 +0000
Subject: Nginx Basic Auth not working before re-directing to a subpath of
the website
In-Reply-To:
References:
Message-ID: <20210219160420.GL6011@daoine.org>
On Fri, Feb 19, 2021 at 05:17:28PM +0530, Majid M A wrote:
Hi there,
> I have a scenario that whenever a user hits the website (
> https://abc.test.com), it should re-direct to the sub-path (/xxx/yyy/) of
> the website (ex: https://abc.test.com/xxx/yyy).
Your config shows both "proxy_pass" and "return". That is probably not
going to do what you want.
I suspect that if you just remove the "return" line, things may work
better for you.
But I am not certain what response you want to get, for each request,
so there may be some other things to check first.
Maybe you should add
location = / { return 301 /xxx/yyy/; }
as well?
> location / {
> proxy_pass $backend;
> proxy_pass_header Authorization;
> auth_basic "Access Denied or Restricted";
> auth_basic_user_file /etc/nginx/.htpasswd;
> return 301 https://abc.test.com:8080/xxx/yyy/;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header Host aa.bb.com;
> client_max_body_size 10m;
> proxy_http_version 1.1;
> proxy_set_header Upgrade $http_upgrade;
> proxy_set_header Connection "upgrade";
> proxy_read_timeout 900;
> }
> }
> If you see above in the location block i am using basic auth just
> before the re-direction, but the basic auth is not in effect.
In general, the order of directives as written in nginx conf is not
important.
"return" happens before "auth_basic".
But you possibly don't need the "return" here anyway.
Good luck with it,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Sun Feb 21 15:17:24 2021
From: francis at daoine.org (Francis Daly)
Date: Sun, 21 Feb 2021 15:17:24 +0000
Subject: Creating the Directory Structure - static content
In-Reply-To:
References:
Message-ID: <20210221151724.GM6011@daoine.org>
On Wed, Feb 17, 2021 at 05:05:42PM +0000, Grzegorz Cześnik wrote:
Hi there,
> /var/www//public_html
> /var/www/html/
> /var/www/
>
> Are there any benefits to using one of these examples? Is it any freedom in what they write about it?
It is "whatever organisation you prefer" -- the nginx application does
not care what that organisation is, so long as it is told what it is,
probably by using the "root" directive.
From a "belt-and-braces" security perspective, I find it good to ensure
that whatever directory is chosen as a "root", *everything* within that
directory is ok for nginx to serve as-is from the filesystem.
That is -- I don't put top-secret-password-file inside that directory and
then hope that no-one asks for it; and I don't put secret-source-code.php
inside that directory and then hope I remember to add config telling
nginx not to serve it directly.
But they are unrelated to what directory structure to use.
If you use a distribution, and that has a preferred layout, use that
unless you have a reason not to.
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Sun Feb 21 15:54:17 2021
From: francis at daoine.org (Francis Daly)
Date: Sun, 21 Feb 2021 15:54:17 +0000
Subject: Proxy pass set body on if
In-Reply-To: <0ddc5215c38556e93f574a947532ff21.NginxMailingListEnglish@forum.nginx.org>
References: <0ddc5215c38556e93f574a947532ff21.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210221155417.GN6011@daoine.org>
On Tue, Feb 16, 2021 at 10:00:47AM -0500, sanflores wrote:
Hi there,
> I have an Angular app and need to use puppeteer for SSR. In order to make it
> work I need to send the request with the body but I can't figure how to make
> these things work together.
Can you see, which part is failing? Is it "you set the body, but it is
not what you expect"; or something else?
If you "tcpdump" the port-3000 traffic, do you see anything interesting?
> if ($limit_bots = 1){
> proxy_pass http://localhost:3000/puppeteer/download/html/;
I get:
nginx: [emerg] "proxy_pass" cannot have URI part in location given by
regular expression, or inside named location, or inside "if" statement,
or inside "limit_except" block
(And more failures, after changing that. Generally: there are limited
things that you can do inside an "if" block.)
If you remove that "if", do things work differently for you?
> proxy_method GET;
> proxy_set_header content-type "application/json";
> proxy_pass_request_body off;
> proxy_set_body "{\"url\":\"https://$request\"}";
> }
http://nginx.org/r/$request
That is probably not the variable that you want to use.
Maybe some combination of $server_name or $host, and $request_uri,
is more useful?
> The idea would be to redirect bots to the static html that will be provided
> by puppeteer runing in another server, and adding proxy_server inside an if
> is deprecated. I should be using a rewrite statement... but I can't set the
> body.
Reverse the test?
if ($is_a_bot) {
return / rewrite / whatever
}
proxy_pass ...
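Sketched concretely, that could look something like the following (the map pattern, paths, and upstream address are invented for illustration):

```nginx
map $http_user_agent $is_a_bot {
    default                 0;
    ~*(bot|crawler|spider)  1;
}

server {
    listen 80;

    location / {
        # Bots get the prerendered static HTML; everyone else
        # is proxied to the application as usual.
        if ($is_a_bot) {
            rewrite ^ /prerendered$uri last;
        }
        proxy_pass http://127.0.0.1:3000;
    }

    location /prerendered/ {
        internal;
        root /usr/share/nginx/html;
    }
}
```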
Good luck with it,
f
--
Francis Daly francis at daoine.org
From chris at cretaforce.gr Mon Feb 22 10:35:44 2021
From: chris at cretaforce.gr (Christos Chatzaras)
Date: Mon, 22 Feb 2021 12:35:44 +0200
Subject: Question about limit_req_zone and max shared memory
Message-ID:
I want to rate limit PHP requests with "client ip + vhost + same url", but on some servers I sometimes see:
[alert] 78841#0: could not allocate node in limit_req zone "req_limit_per_ip_per_uri"
which causes 429 errors in all domains.
-----
FreeBSD/amd64
-----
# Max connections
events {
worker_connections 16384;
}
# Rate limit
limit_req_zone "$binary_remote_addr$host$request_uri" zone=req_limit_per_ip_per_uri:10m rate=2r/s;
# Vhosts
server {
server_name www.example1.com;
...
location ~ [^/]\.php(/|$) {
...
limit_req zone=req_limit_per_ip_per_uri burst=20 nodelay;
...
}
...
}
server {
server_name www.example2.com;
...
location ~ [^/]\.php(/|$) {
...
limit_req zone=req_limit_per_ip_per_uri burst=20 nodelay;
...
}
...
}
----
Is it better to use different variables for the key? What is the max shared memory needed?
From nginx-forum at forum.nginx.org Mon Feb 22 12:52:04 2021
From: nginx-forum at forum.nginx.org (sanflores)
Date: Mon, 22 Feb 2021 07:52:04 -0500
Subject: Proxy pass set body on if
In-Reply-To: <20210221155417.GN6011@daoine.org>
References: <20210221155417.GN6011@daoine.org>
Message-ID:
This would be great, but I don't know how to serve the content from nginx
with a rewrite. What would work is:
if ($is_NOT_a_bot) {
rewrite in order to save the content in /usr/share/nginx/html
}
proxy_pass ....
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290773,290820#msg-290820
From grzegorz.czesnik at hotmail.com Mon Feb 22 14:12:08 2021
From: grzegorz.czesnik at hotmail.com (=?utf-8?B?R3J6ZWdvcnogQ3plxZtuaWs=?=)
Date: Mon, 22 Feb 2021 14:12:08 +0000
Subject: Creating the Directory Structure - static content
In-Reply-To: <20210221151724.GM6011@daoine.org>
References:
<20210221151724.GM6011@daoine.org>
Message-ID:
My nginx distribution is from the official nginx repository for Ubuntu 20 Server
Default:
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
Therefore, I prefer to change it.
Grzegorz
-----Original Message-----
From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Francis Daly
Sent: Sunday, February 21, 2021 4:17 PM
To: nginx at nginx.org
Subject: Re: Creating the Directory Structure - static content
On Wed, Feb 17, 2021 at 05:05:42PM +0000, Grzegorz Cześnik wrote:
Hi there,
> /var/www//public_html
> /var/www/html/
> /var/www/
>
> Are there any benefits to using one of these examples? Is it any freedom in what they write about it?
It is "whatever organisation you prefer" -- the nginx application does not care what that organisation is, so long as it is told what it is, probably by using the "root" directive.
From a "belt-and-braces" security perspective, I find it good to ensure that whatever directory is chosen as a "root", *everything* within that directory is ok for nginx to serve as-is from the filesystem.
That is -- I don't put top-secret-password-file inside that directory and then hope that no-one asks for it; and I don't put secret-source-code.php inside that directory and then hope I remember to add config telling nginx not to serve it directly.
But they are unrelated to what directory structure to use.
If you use a distribution, and that has a preferred layout, use that unless you have a reason not to.
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Mon Feb 22 18:13:42 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 22 Feb 2021 18:13:42 +0000
Subject: Proxy pass set body on if
In-Reply-To:
References: <20210221155417.GN6011@daoine.org>
Message-ID: <20210222181342.GO6011@daoine.org>
On Mon, Feb 22, 2021 at 07:52:04AM -0500, sanflores wrote:
Hi there,
> This would be great, but I don't know how to server the context from nginx
> with a rewrite, what would work is:
I'm afraid I don't fully understand what response you want to send to
what request.
Can you show some examples?
For example, if you issue the "curl" GET commands
curl -i -H X-Bot:yes http://localhost/one
curl -i -H X-Bot:no http://localhost/one
(let's pretend that you decide bot-or-not based on the X-Bot request
header), then what response do you want in each case?
http 200 with the content of a specific file on the filesystem; http 301
with a redirect to another location; the response from a proxy_pass to
an upstream server; something else?
And if you issue the "curl" POST commands
curl -i -d the_post_data -H X-Bot:yes http://localhost/two
curl -i -d the_post_data -H X-Bot:no http://localhost/two
what response do you want in each case?
(The answer to the question in the Subject: is "you don't" --
http://nginx.org/r/proxy_set_body says "Context: http, server, location",
which does not include "if" or "if in location". So now we are trying
to find your overall requirements, hopefully to make it clear what the
appropriate nginx config is.)
Cheers,
f
--
Francis Daly francis at daoine.org
From francis at daoine.org Mon Feb 22 18:17:33 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 22 Feb 2021 18:17:33 +0000
Subject: Creating the Directory Structure - static content
In-Reply-To:
References:
<20210221151724.GM6011@daoine.org>
Message-ID: <20210222181733.GP6011@daoine.org>
On Mon, Feb 22, 2021 at 02:12:08PM +0000, Grzegorz Cześnik wrote:
Hi there,
> My nginx distribution is from the official nginx repository for Ubuntu 20 Server
>
> Default:
> location / {
> root /usr/share/nginx/html;
> index index.html index.htm;
> }
>
> Therefore, I prefer to change it.
That's fair enough.
You can safely change it to wherever makes sense to you.
One small note, though:
within the config, you will probably be happier if you have the "root"
directive set at "server" level, outside of any "location" block.
And then only override the setting within a "location" block if there
is a good reason to do that.
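For example, a minimal layout might look like this (a sketch with hypothetical paths and server names):

```nginx
server {
    listen 80;
    server_name example.com;

    # Set "root" once, at server level, so every location inherits it.
    root /srv/www/example;
    index index.html index.htm;

    location / {
        # inherits root /srv/www/example
    }

    # Override only where there is a concrete reason to, e.g. assets
    # that live on a different filesystem path.
    location /downloads/ {
        root /srv/archive;
    }
}
```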
Cheers,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Mon Feb 22 19:15:42 2021
From: nginx-forum at forum.nginx.org (sanflores)
Date: Mon, 22 Feb 2021 14:15:42 -0500
Subject: Proxy pass set body on if
In-Reply-To: <20210222181342.GO6011@daoine.org>
References: <20210222181342.GO6011@daoine.org>
Message-ID: <7190680258c55216de545c0b744b5163.NginxMailingListEnglish@forum.nginx.org>
First of all, thanks for your help.
Here is my configuration:
cat nginx.conf
-----------------------------------------------------------------------------------------------------
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_types application/javascript;
gzip_buffers 32 8k;
map $http_user_agent $limit_bots {
default 0;
~*(google|bing|yandex|msnbot) 1;
~*(AltaVista|Googlebot|Slurp|BlackWidow|Bot|ChinaClaw|Custo|DISCo|Download|Demon|eCatch|EirGrabber|EmailSiphon|EmailWolf|SuperHTTP|Surfbot|WebWhacker) 1;
~*(Express|WebPictures|ExtractorPro|EyeNetIE|FlashGet|GetRight|GetWeb!|Go!Zilla|Go-Ahead-Got-It|GrabNet|Grafula|HMView|Go!Zilla|Go-Ahead-Got-It) 1;
~*(rafula|HMView|HTTrack|Stripper|Sucker|Indy|InterGET|Ninja|JetCar|Spider|larbin|LeechFTP|Downloader|tool|Navroad|NearSite|NetAnts|tAkeOut|WWWOFFLE) 1;
~*(GrabNet|NetSpider|Vampire|NetZIP|Octopus|Offline|PageGrabber|Foto|pavuk|pcBrowser|RealDownload|ReGet|SiteSnagger|SmartDownload|SuperBot|WebSpider) 1;
~*(Teleport|VoidEYE|Collector|WebAuto|WebCopier|WebFetch|WebGo|WebLeacher|WebReaper|WebSauger|eXtractor|Quester|WebStripper|WebZIP|Wget|Widow|Zeus) 1;
~*(Twengabot|htmlparser|libwww|Python|perl|urllib|scan|Curl|email|PycURL|Pyth|PyQ|WebCollector|WebCopy|webcraw) 1;
}
server {
listen 8080;
server_name localhost;
root /usr/share/nginx/html;
server_tokens off;
location ~ /index.html|.*\.json$ { # Don't cache index.html and *.json files
expires -1;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
include /etc/nginx/security-headers.conf;
}
location ~ .*\.css$|.*\.js$ {
add_header Cache-Control 'max-age=31449600'; # one year; we don't care about these files because of cache busting
include /etc/nginx/security-headers.conf;
}
location / {
try_files $uri$args $uri$args/ /index.html; # Will redirect all non-existing files to index.html. TODO: Is this what we want?
add_header Cache-Control 'max-age=86400'; # one day
include /etc/nginx/security-headers.conf;
}
}
}
-----------------------------------------------------------------------------------------------------
I need to send all crawlers on the list to a puppeteer server that will
render the webpage and return the static html. I'm able to achieve that with
this configuration:
proxy_pass http://localhost:3000/puppeteer/download/html/;
proxy_method GET;
proxy_set_header content-type "application/json";
proxy_pass_request_body off;
proxy_set_body "{\"url\":\"https://example.com/$uri\"}";
What I'm not able to do is use this proxy_pass inside an if statement;
nginx rejects it:
nginx: [emerg] "proxy_pass" cannot have URI part in location given by
regular expression, or inside named location, or inside "if" statement, or
inside "limit_except" block in /usr/local/etc/nginx/nginx.conf:
So the question would be, what configuration would be needed in order to
redirect the crawlers (based on $http_user_agent) to puppeteer modifying the
body?
Thank you very much!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290773,290829#msg-290829
From francis at daoine.org Mon Feb 22 21:11:27 2021
From: francis at daoine.org (Francis Daly)
Date: Mon, 22 Feb 2021 21:11:27 +0000
Subject: Proxy pass set body on if
In-Reply-To: <7190680258c55216de545c0b744b5163.NginxMailingListEnglish@forum.nginx.org>
References: <20210222181342.GO6011@daoine.org>
<7190680258c55216de545c0b744b5163.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210222211127.GQ6011@daoine.org>
On Mon, Feb 22, 2021 at 02:15:42PM -0500, sanflores wrote:
Hi there,
I suspect that if I were doing this, I would probably pick one url that
is not otherwise used on my server (in this example, "/puppet/"), and
use that as a "stepping stone".
Then, if this request should be handled specially, rewrite to that url,
and do the proxy_pass and friends in there.
There may be better ways, but this appears to give the desired response.
> So the question would be, what configuration would be needed in order to
> redirect the crawlers (based on $http_user_agent) to puppeteer modifying the
> body?
> server {
Somewhere at "server" level, outside of other location{} blocks, add:
==
if ($limit_bots = 1) { rewrite ^ /puppet/? break; }
location = /puppet/ {
internal;
proxy_pass http://localhost:3000/puppeteer/download/html/;
proxy_method GET;
proxy_set_header content-type "application/json";
proxy_pass_request_body off;
proxy_set_body "{\"url\":\"$scheme://$host$request_uri\"}";
}
==
Note that the variables in "proxy_set_body" do matter -- they relate to
the request received by this nginx.
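To illustrate why $request_uri is the one to use, take a hypothetical bot request for https://example.com/products/1?x=y (the URL and path are made up):

```nginx
# Hypothetical request: GET https://example.com/products/1?x=y
# After "rewrite ^ /puppet/? break;" has run:
#   $uri         is /puppet/            (the rewritten URI)
#   $request_uri is /products/1?x=y     (the original request, args included)
# so the body sent to puppeteer becomes:
#   {"url":"https://example.com/products/1?x=y"}
```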
Cheers,
f
--
Francis Daly francis at daoine.org
From nginx-forum at forum.nginx.org Tue Feb 23 14:25:51 2021
From: nginx-forum at forum.nginx.org (sanflores)
Date: Tue, 23 Feb 2021 09:25:51 -0500
Subject: Proxy pass set body on if
In-Reply-To: <20210222211127.GQ6011@daoine.org>
References: <20210222211127.GQ6011@daoine.org>
Message-ID: <2e0d06e7816962b79899c1c09f71487e.NginxMailingListEnglish@forum.nginx.org>
Awesome, this approach totally works, thank you very much:
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_types application/javascript;
gzip_buffers 32 8k;
map $http_user_agent $limit_bots {
default 0;
~*(google|bing|yandex|msnbot) 1;
~*(AltaVista|Googlebot|Slurp|BlackWidow|Bot|ChinaClaw|Custo|DISCo|Download|Demon|eCatch|EirGrabber|EmailSiphon|EmailWolf|SuperHTTP|Surfbot|WebWhacker) 1;
~*(Express|WebPictures|ExtractorPro|EyeNetIE|FlashGet|GetRight|GetWeb!|Go!Zilla|Go-Ahead-Got-It|GrabNet|Grafula|HMView|Go!Zilla|Go-Ahead-Got-It) 1;
~*(rafula|HMView|HTTrack|Stripper|Sucker|Indy|InterGET|Ninja|JetCar|Spider|larbin|LeechFTP|Downloader|tool|Navroad|NearSite|NetAnts|tAkeOut|WWWOFFLE) 1;
~*(GrabNet|NetSpider|Vampire|NetZIP|Octopus|Offline|PageGrabber|Foto|pavuk|pcBrowser|RealDownload|ReGet|SiteSnagger|SmartDownload|SuperBot|WebSpider) 1;
~*(Teleport|VoidEYE|Collector|WebAuto|WebCopier|WebFetch|WebGo|WebLeacher|WebReaper|WebSauger|eXtractor|Quester|WebStripper|WebZIP|Wget|Widow|Zeus) 1;
~*(Twengabot|htmlparser|libwww|Python|perl|urllib|scan|Curl|email|PycURL|Pyth|PyQ|WebCollector|WebCopy|webcraw) 1;
}
server {
listen 8080;
server_name localhost;
root /usr/share/nginx/html;
server_tokens off;
if ($limit_bots = 1){ rewrite ^ /puppeteer/download/html/ break; }
location = /puppeteer/download/html/ {
internal;
proxy_pass http://localhost:3000;
proxy_method GET;
proxy_set_header content-type "application/json";
proxy_pass_request_body off;
proxy_set_body "{\"url\":\"https://example.com/$request_uri\"}";
}
location ~ /index.html|.*\.json$ { # Don't cache index.html and *.json files
expires -1;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
include /etc/nginx/security-headers.conf;
}
location ~ .*\.css$|.*\.js$ {
add_header Cache-Control 'max-age=31449600'; # one year; we don't care about these files because of cache busting
include /etc/nginx/security-headers.conf;
}
location / {
try_files $uri$args $uri$args/ /index.html; # Will redirect all non-existing files to index.html. TODO: Is this what we want?
add_header Cache-Control 'max-age=86400'; # one day
include /etc/nginx/security-headers.conf;
}
}
}
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290773,290833#msg-290833
From nginx-forum at forum.nginx.org Wed Feb 24 02:54:00 2021
From: nginx-forum at forum.nginx.org (mondji)
Date: Tue, 23 Feb 2021 21:54:00 -0500
Subject: server persistance using sticky cookie
Message-ID: <7fef5e51c2131427a1c910c97e2a8a2d.NginxMailingListEnglish@forum.nginx.org>
Hi,
I have set up this module
https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/src/master/ to
have upstream server persistence using cookies.
I compiled nginx 1.18.0 with this module successfully.
Here is my loadbalancer configuration:
/etc/nginx/conf.d/loadbalance.conf
proxy_cache_path /path/to_cache/dir keys_zone=backcache:10m;
upstream backendServers {
server IP1:81;
server IP2:81;
sticky name=mysticky expire=10860 path=/;
}
server {
listen IP3:80;
server_name example.com;
location / {
return 301 https://$server_name$request_uri;
}
}
server {
listen IP3:443 ssl;
server_name example.com;
location / {
proxy_pass https://backendServers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
...
}
}
backend servers are running iis linked with coldfusion (tomcat).
The sticky cookie seems not to be working: when I stop a web site on server 1,
nginx continues to send requests to that server. The load balancing occurs
only when I clear the browser cache or reboot server 1.
The cookie is set (name, expiration, ...) but nginx stays "sticked" to the
down server.
Has anyone here tested this module
(https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/src/master/)
with a recent nginx version?
If yes, is it working? Could you share your experience?
Thanks
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290835,290835#msg-290835
From vbl5968 at gmail.com Wed Feb 24 09:39:57 2021
From: vbl5968 at gmail.com (Vincent Blondel)
Date: Wed, 24 Feb 2021 10:39:57 +0100
Subject: Question about IF and auth subrequest
Message-ID:
Hello all,
I have a quick question about the usage of IF and auth_request.
I would like to know if it is possible to use a IF statement to condition
the proxy behaviour of one /location depending on the response headers of
the sub auth request ...
location /subrequest/ {
proxy_pass xxx;
}
location /anyrequest/ {
auth_request /subrequest/;
if ($response_header ~ '' ) {
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_pass ...
}
if ($response_header !~ '' ) {
proxy_pass xxx;
}
}
Thank You in advance for your Support ...
Sincerely,
Vincent
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From osa at freebsd.org.ru Wed Feb 24 12:57:17 2021
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Wed, 24 Feb 2021 15:57:17 +0300
Subject: server persistance using sticky cookie
In-Reply-To: <7fef5e51c2131427a1c910c97e2a8a2d.NginxMailingListEnglish@forum.nginx.org>
References: <7fef5e51c2131427a1c910c97e2a8a2d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:
Hi,
since this module is a third-party module, I'd recommend taking a
look on GitHub; there are many forks of this module available
there.
--
Sergey Osokin
On Tue, Feb 23, 2021 at 09:54:00PM -0500, mondji wrote:
> Hi,
> I have setup this module
> https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/src/master/ to
> have an upstream server persistance using cookies,
> I compiled nginx 1.18.0 with this module suscessfully.
>
> Here is my loadbalancer configuration:
>
> /etc/nginx/conf.d/loadbalance.conf
>
> proxy_cache_path /path/to_cache/dir keys_zone=backcache:10m;
>
> upstream backendServers {
> server IP1:81;
> server IP2:81;
> sticky name=mysticky expire=10860 path=/;
> }
>
>
> server {
> listen IP3:80;
> server_name example.com;
>
> location / {
> return 301 https://$server_name$request_uri;
> }
> }
>
> server {
> listen IP3:443 ssl;
> server_name example.com;
>
> location / {
> proxy_pass https://backendServers;
> proxy_http_version 1.1;
> proxy_set_header Upgrade $http_upgrade;
> proxy_set_header Connection $connection_upgrade;
> ...
> }
> }
>
> backend servers are running iis linked with coldfusion (tomcat).
> The sticky seems not working: when I stop a web site on server 1, nginx
> continue to send requests to that
> server. The loadbalance occures only when I clear the browser cache, or
> reboot server 1.
> The cookie is set (name, expiration,...) but nginx is "sticked" to the down
> server.
>
> Is someone here who has tested thi s module
> (https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/src/master/)
> with recente nginx version
> If yes, is it working ? ciuld you share your experience ?
>
> Thanks
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290835,290835#msg-290835
>
From mdounin at mdounin.ru Wed Feb 24 14:01:31 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 24 Feb 2021 17:01:31 +0300
Subject: Question about IF and auth subrequest
In-Reply-To:
References:
Message-ID: <20210224140131.GS77619@mdounin.ru>
Hello!
On Wed, Feb 24, 2021 at 10:39:57AM +0100, Vincent Blondel wrote:
> Hello all,
> I have a quick question about the usage of IF and auth_request.
> I would like to know if it is possible to use a IF statement to condition
> the proxy behaviour of one /location depending on the response headers of
> the sub auth request ...
>
> location /subrequest/ {
> proxy_pass xxx;
> }
> location /anyrequest/ {
> auth_request /subrequest/;
>
> if ($response_header ~ '' ) {
> proxy_pass_request_body off;
> proxy_set_header Content-Length "";
> proxy_pass ...
> }
> if ($response_header !~ '' ) {
> proxy_pass xxx;
> }
> }
>
> Thank You in advance for your Support ...
No, it is not going to work. The "if" directive and other rewrite
module directives are executed as a part of selecting a
configuration to process a request[1], and this happens before any
authentication checks.
Things that can work:
- Using variables in the configuration and map[2] to conditionally
evaluate them after auth subrequest. This might not be the best
approach in your particular case, as proxy_pass_request_body
does not support variables.
- Returning an error from auth subrequest, so you can switch to a
different location using error_page[3].
[1] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
[2] http://nginx.org/en/docs/http/ngx_http_map_module.html
[3] http://nginx.org/r/error_page
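As an illustration of the first approach, auth_request_set can copy a response header of the auth subrequest into a variable, and a map on that variable is only evaluated when it is later used, i.e. after the subrequest has completed. This is only a sketch; the header name X-Verdict, the upstream names, and the map values are hypothetical:

```nginx
# Derive a value from the auth subrequest's response header.
# map variables are evaluated lazily, on first use.
map $verdict $extra_header {
    default  "";
    "deny"   "no-body";
}

server {
    listen 80;

    location /subrequest/ {
        proxy_pass http://auth-backend;
    }

    location /anyrequest/ {
        auth_request /subrequest/;
        # Capture the subrequest's X-Verdict response header.
        auth_request_set $verdict $upstream_http_x_verdict;
        # $extra_header is computed here, after auth has run.
        proxy_set_header X-Extra $extra_header;
        proxy_pass http://backend;
    }
}
```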
--
Maxim Dounin
http://mdounin.ru/
From vbl5968 at gmail.com Wed Feb 24 17:44:49 2021
From: vbl5968 at gmail.com (Vincent Blondel)
Date: Wed, 24 Feb 2021 18:44:49 +0100
Subject: Question about IF and auth subrequest
In-Reply-To: <20210224140131.GS77619@mdounin.ru>
References:
<20210224140131.GS77619@mdounin.ru>
Message-ID:
Thank You for the swift answer Maxim.
If I understand, you mean something like that should be going to Work ...
location /subrequest/ {
proxy_pass xxx;
}
location /anyrequest/ {
auth_request /subrequest/;
error_page 400 = @fallback;
proxy_pass xxx;
}
location @fallback {
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_pass http://backend;
}
-V.
On Wed, Feb 24, 2021 at 3:01 PM Maxim Dounin wrote:
> Hello!
>
> On Wed, Feb 24, 2021 at 10:39:57AM +0100, Vincent Blondel wrote:
>
> > Hello all,
> > I have a quick question about the usage of IF and auth_request.
> > I would like to know if it is possible to use a IF statement to condition
> > the proxy behaviour of one /location depending on the response headers of
> > the sub auth request ...
> >
> > location /subrequest/ {
> > proxy_pass xxx;
> > }
> > location /anyrequest/ {
> > auth_request /subrequest/;
> >
> > if ($response_header ~ '' ) {
> > proxy_pass_request_body off;
> > proxy_set_header Content-Length "";
> > proxy_pass ...
> > }
> > if ($response_header !~ '' ) {
> > proxy_pass xxx;
> > }
> > }
> >
> > Thank You in advance for your Support ...
>
> No, it is not going to work. The "if" directive and other rewrite
> module directives are executed as a part of selecing a
> configuration to process a request[1], and this happens before any
> authentication checks.
>
> Things that can work:
>
> - Using variables in the configuration and map[2] to conditionally
> evaluate them after auth subrequest. This might not be the best
> approach in your particular case, as proxy_pass_request_body
> does not support variables.
>
> - Returning an error from auth subrequest, so you can switch to a
> different location using error_page[3].
>
> [1] http://nginx.org/en/docs/http/ngx_http_rewrite_module.html
> [2] http://nginx.org/en/docs/http/ngx_http_map_module.html
> [3] http://nginx.org/r/error_page
>
> --
> Maxim Dounin
> http://mdounin.ru/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mdounin at mdounin.ru Wed Feb 24 20:03:09 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 24 Feb 2021 23:03:09 +0300
Subject: Question about IF and auth subrequest
In-Reply-To:
References:
<20210224140131.GS77619@mdounin.ru>
Message-ID: <20210224200309.GU77619@mdounin.ru>
Hello!
On Wed, Feb 24, 2021 at 06:44:49PM +0100, Vincent Blondel wrote:
> Thank You for the swift answer Maxim.
> If I understand, you mean something like that should be going to Work ...
>
> location /subrequest/ {
> proxy_pass xxx;
> }
> location /anyrequest/ {
> auth_request /subrequest/;
> error_page 400 = @fallback;
> proxy_pass xxx;
> }
> location @fallback {
> proxy_pass_request_body off;
> proxy_set_header Content-Length "";
> proxy_pass http://backend;
> }
Sort of, but you have to use 401 or 403, see the auth_request
documentation (http://nginx.org/en/docs/http/ngx_http_auth_request_module.html).
--
Maxim Dounin
http://mdounin.ru/
From vbl5968 at gmail.com Thu Feb 25 16:38:06 2021
From: vbl5968 at gmail.com (Vincent Blondel)
Date: Thu, 25 Feb 2021 17:38:06 +0100
Subject: Question about IF and auth subrequest
In-Reply-To: <20210224200309.GU77619@mdounin.ru>
References:
<20210224140131.GS77619@mdounin.ru>
<20210224200309.GU77619@mdounin.ru>
Message-ID:
Thank You for the update,
Will try and let You know ...
-V.
On Wed, Feb 24, 2021 at 9:03 PM Maxim Dounin wrote:
> Hello!
>
> On Wed, Feb 24, 2021 at 06:44:49PM +0100, Vincent Blondel wrote:
>
> > Thank You for the swift answer Maxim.
> > If I understand, you mean something like that should be going to Work ...
> >
> > location /subrequest/ {
> > proxy_pass xxx;
> > }
> > location /anyrequest/ {
> > auth_request /subrequest/;
> > error_page 400 = @fallback;
> > proxy_pass xxx;
> > }
> > location @fallback {
> > proxy_pass_request_body off;
> > proxy_set_header Content-Length "";
> > proxy_pass http://backend;
> > }
>
> Sort of, but you have to use 401 or 403, see the auth_request
> documentation (
> http://nginx.org/en/docs/http/ngx_http_auth_request_module.html).
>
> --
> Maxim Dounin
> http://mdounin.ru/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nginx-forum at forum.nginx.org Fri Feb 26 02:31:36 2021
From: nginx-forum at forum.nginx.org (allenhe)
Date: Thu, 25 Feb 2021 21:31:36 -0500
Subject: How nginx stream module reuse tcp connections?
Message-ID: <718db6d63522d1e64df098ff99657d26.NginxMailingListEnglish@forum.nginx.org>
Hi,
As we know, there are some keepalive options in the nginx http modules to
reuse tcp connections.
But are there corresponding options in the nginx stream module to achieve
the same?
How does nginx persist the tcp connection with the downstream?
How does nginx persist the tcp connection with the upstream?
What does "session" mean in the stream context?
BR,
Allen
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290851,290851#msg-290851