From mdounin at mdounin.ru  Tue Dec  1 00:27:17 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Dec 2020 03:27:17 +0300
Subject: empty variable in access log
In-Reply-To:
References: <20201130132750.GF1147@mdounin.ru> <20201130224606.GI1147@mdounin.ru>
Message-ID: <20201201002717.GK1147@mdounin.ru>

Hello!

On Mon, Nov 30, 2020 at 03:26:59PM -0800, Frank Liu wrote:

> ok, for testing, I removed the variable from the map, and added one line
> in a 2-way SSL server config, to create a fresh new variable:
>
> set $test_var "test";
>
> For a request without a client cert (400), I see neither "test" nor "-" in
> the access log for $test_var. I only see blank, as if $test_var was set
> to "".

That's because variables defined with "set" somewhere in the
configuration default to "" with an optional warning
(http://nginx.org/r/uninitialized_variable_warn). And in your
test the variable is uninitialized, as the set directive is not
executed for the request in question.

The "-" special value can be seen for various builtin variables,
such as non-existing headers ($http_*, $sent_http_*,
$upstream_http_*), non-existing arguments ($arg_*), cookies
($cookie_*), $content_length if not available, $remote_user if not
provided or empty, and so on.

--
Maxim Dounin
http://mdounin.ru/

From gfrankliu at gmail.com  Tue Dec  1 00:37:29 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 30 Nov 2020 16:37:29 -0800
Subject: empty variable in access log
In-Reply-To: <20201201002717.GK1147@mdounin.ru>
References: <20201130132750.GF1147@mdounin.ru> <20201130224606.GI1147@mdounin.ru> <20201201002717.GK1147@mdounin.ru>
Message-ID:

Thanks Maxim,

If I understand correctly, an uninitialized custom variable is the same as
a variable initialized to "". That's why we don't see "-", but only see "".
Only internal variables get the special "-" treatment.

Frank

On Mon, Nov 30, 2020 at 4:27 PM Maxim Dounin wrote:
> That's because variables defined with "set" somewhere in the
> configuration default to "" with an optional warning
> (http://nginx.org/r/uninitialized_variable_warn). And in your
> test the variable is uninitialized, as the set directive is not
> executed for the request in question.
> [...]
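(For illustration, a hedged sketch that is not from the thread: a map-defined
variable can be given an explicit "-" default, so the conventional hyphen is
logged even when the source value is missing, e.g. when no client certificate
was presented. $client_cn is a hypothetical variable name.)

    # Map with an explicit "-" default; evaluated lazily at log time,
    # so it is always defined, unlike a "set" that never executed.
    map $ssl_client_s_dn $client_cn {
        default            "-";
        "~CN=(?<cn>[^,]+)" $cn;   # extract CN when a cert was supplied
    }

    log_format certlog '$remote_addr [$time_local] "$request" '
                       '$status cn=$client_cn';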
From gfrankliu at gmail.com  Tue Dec  1 00:44:40 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Mon, 30 Nov 2020 16:44:40 -0800
Subject: empty variable in access log
In-Reply-To:
References: <20201130132750.GF1147@mdounin.ru> <20201130224606.GI1147@mdounin.ru> <20201201002717.GK1147@mdounin.ru>
Message-ID:

BTW, if I set error_log to "warn", I do see:

2020/12/01 00:32:46 [warn] 7356#7356: *1 using uninitialized "test_var" variable while logging request, client: 127.0.0.1, server: _, request: "GET / HTTP/1.1", host: "localhost"

The confusion was whether those uninitialized variables should be logged as
"-" or as "" (as if initialized with ""). I misread your earlier comment,
"If the variable value is not found, a hyphen ("-") will be logged." I took
it as "if a variable is uninitialized, a hyphen is logged".

On Mon, Nov 30, 2020 at 4:37 PM Frank Liu wrote:
> If I understand correctly, an uninitialized custom variable is the same
> as a variable initialized to "". That's why we don't see "-", but only
> see "". Only internal variables get the special "-" treatment.
> [...]

From notthetup at gmail.com  Tue Dec  1 02:49:09 2020
From: notthetup at gmail.com (Chinmay Pendharkar)
Date: Tue, 1 Dec 2020 10:49:09 +0800
Subject: Logging WebSocket Messages.
Message-ID:

Hello,

Is there a way to log individual WebSocket messages going through an nginx
server set up to proxy WebSockets, as explained here:
https://nginx.org/en/docs/http/websocket.html ?

-Chinmay

From 201904-nginx at planhack.com  Tue Dec  1 02:52:00 2020
From: 201904-nginx at planhack.com (201904-nginx at planhack.com)
Date: 30 Nov 20 21:52 EST
Subject: Logging WebSocket Messages.
In-Reply-To:
Message-ID: <20201201025229.383A8E8E40@vps1.haller.ws>

Hey Chinmay!

Use tcpdump or tshark.
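For instance (a hedged sketch: the interface and port are assumptions, and
this only sees unencrypted hops, e.g. between nginx and the upstream):

    # Raw capture: client-to-server WebSocket frames are masked, so expect
    # binary payloads here rather than readable text.
    tcpdump -i any -A 'tcp port 8080'

    # tshark can decode the frames with its websocket dissector:
    tshark -i any -f 'tcp port 8080' -Y websocket -O websocket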
Patrick

From xiranzhang87 at gmail.com  Tue Dec  1 07:03:39 2020
From: xiranzhang87 at gmail.com (Xinran Zhang)
Date: Tue, 1 Dec 2020 16:03:39 +0900
Subject: monitor ssl-session-cache status
Message-ID:

Hello,

May I ask how to monitor the nginx ssl_session_cache status? For instance:
how much of the session cache has been used and how much is left, and how
many session IDs are currently stored?

From nginx-forum at forum.nginx.org  Tue Dec  1 11:03:24 2020
From: nginx-forum at forum.nginx.org (Driver)
Date: Tue, 01 Dec 2020 06:03:24 -0500
Subject: HTTP header size setting
Message-ID: <1c48622023ddea18e06b61b78b52739e.NginxMailingListEnglish@forum.nginx.org>

Hello,

What is the nginx setting comparable to Tomcat's maxHttpHeaderSize? I need
to set this to 20 KB on an nginx proxy. I tried the directives below, but
they were not working:

client_header_buffer_size 20k;
large_client_header_buffers 4 20k;
proxy_buffers 8 20k;
proxy_buffer_size 20k;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290117,290117#msg-290117

From nginx at bartelt.name  Tue Dec  1 11:11:50 2020
From: nginx at bartelt.name (Andreas Bartelt)
Date: Tue, 1 Dec 2020 12:11:50 +0100
Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2;" in nginx.conf config)
In-Reply-To: <20201130223915.GH1147@mdounin.ru>
References: <20201130150759.GG1147@mdounin.ru> <3bc72df5-211f-6ed8-e829-e290a70224c5@bartula.de> <20201130223915.GH1147@mdounin.ru>
Message-ID: <038d18b7-865a-d90e-5179-8552e1f7c1cb@bartula.de>

On 11/30/20 11:39 PM, Maxim Dounin wrote:
> Hello!
>
> On Mon, Nov 30, 2020 at 06:41:18PM +0100, Andreas Bartelt wrote:
>
>> On 11/30/20 4:07 PM, Maxim Dounin wrote:
>>> Hello!
>>>
>>> On Sun, Nov 29, 2020 at 04:01:07PM +0100, nginx at bartelt.name wrote:
>>>
>>>> I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not
>>>> configured to do so. I've observed this behavior on OpenBSD (nginx
>>>> 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0
>>>> linked against OpenSSL 1.1.1f). I don't know which release of nginx
>>>> introduced this bug.
>>>>
>>>> From nginx.conf:
>>>> ssl_protocols TLSv1.2;
>>>> --> in my understanding, this config statement should only enable TLS
>>>> 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is
>>>> implicitly enabled in addition to TLS 1.2.
>>>
>>> As long as "ssl_protocols TLSv1.2;" is the only ssl_protocols in
>>> nginx configuration, TLSv1.3 shouldn't be enabled. Much like when
>>> there are no "ssl_protocols" at all, as TLSv1.3 isn't enabled by
>>> default (for now, at least up to and including nginx 1.19.5).
>>
>> I've just retested this with my Ubuntu 20.04 based nginx test instance
>> from yesterday (nginx 1.18.0 linked against OpenSSL 1.1.1f) and noticed
>> that it works there as intended (i.e., "ssl_protocols TLSv1.2;" only
>> enables TLS 1.2 but not TLS 1.3). I don't know what I did wrong there
>> yesterday -- sorry for this.
>>
>> However, the problem persists on OpenBSD current with nginx 1.18.0
>> (built from ports with default options, which links against LibreSSL
>> 3.3.0 from base). Setting "ssl_protocols TLSv1.2;" enables TLS 1.2 as
>> well as TLS 1.3 there.
>
> I don't see any problems when testing with LibreSSL 3.3.0 as
> available on libressl.org and the very same configuration. So
> it's probably something specific to your system.
> > Some possible reasons for the behaviour you are seeing, in no > particular order: > > - Given that OpenBSD current and LibreSSL from base implies some > arbitrary version of LibreSSL, this might be something with the > changes present on your system but not in LibreSSL 3.3.0 > release. > > - There may be something with the port you are using to compile > nginx. Consider testing nginx compiled manually. > > - You are testing the wrong server (the name resolves to a > different IP address, or the IP address is routed to a different > server). Make sure you are seeing connection on nginx side, > something like "return 200 $ssl_protocol;" in the appropriate > server block and making a "GET / HTTP/1.0" request in s_client > would be a good test. > > - The nginx version running differs from the one on disk, and you > are running an nginx version older than 1.15.6 built with an old > LibreSSL without TLSv1.3 but running with LibreSSL 3.3.0 with > TLSv1.3 enabled. Check the "Server" header in the above test. > > - There might be something wrong with headers on your system. The > behaviour observed might happen if SSL_OP_NO_TLSv1_3, TLS1_3_VERSION, > and SSL_CTX_set_min_proto_version/SSL_CTX_set_max_proto_version are > not defined, yet TLSv1.3 is present in the library. > I've just tested the same nginx.conf on two freshly installed OpenBSD based test systems: 1) release 6.8 (with nginx 1.18.0 / LibreSSL 3.2.2) 2) snapshot from today (with nginx 1.18.0 / LibreSSL 3.3.0 + more recent commits since it's a snapshot) Both instances were installed from scratch with the official OpenBSD binary tarballs and the nginx binary package from ports, respectively. Release 6.8 interprets "ssl_protocols TLSv1.2;" correctly. However, the snapshot instance enables TLS 1.2 and 1.3, i.e., this looks like a bug which has been recently introduced into OpenBSD current. Although OpenBSD 6.8 and the snapshot both use nginx 1.18.0, it's built differently on current: # cvs diff -r 1.145 -r 1.146 ports/www/nginx/Makefile Index: ports/www/nginx/Makefile =================================================================== RCS file: /cvs/ports/www/nginx/Makefile,v retrieving revision 1.145 retrieving revision 1.146 diff -u -p -r1.145 -r1.146 --- ports/www/nginx/Makefile 27 Jul 2020 14:33:15 -0000 1.145 +++ ports/www/nginx/Makefile 23 Oct 2020 15:20:30 -0000 1.146 @@ -1,4 +1,4 @@ -# $OpenBSD: Makefile,v 1.145 2020/07/27 14:33:15 sthen Exp $ +# $OpenBSD: Makefile,v 1.146 2020/10/23 15:20:30 robert Exp $ BROKEN-hppa= src/core/ngx_rwlock.c:116:2: error: \#error ngx_atomic_cmp_set() is not defined! 
@@ -21,7 +21,7 @@ VERSION= 1.18.0 DISTNAME= nginx-${VERSION} CATEGORIES= www -REVISION-main= 0 +REVISION-main= 1 REVISION-xslt= 0 VERSION-rtmp= 1.2.1 @@ -122,6 +122,8 @@ SUBST_VARS= NGINX_DIR .for i in ${MODULE_PACKAGES} PREFIX$i= ${NGINX_DIR}/modules .endfor + +CFLAGS+= -DTLS1_3_VERSION=0x0304 CFLAGS+= -Wall -Wpointer-arith \ -I "${LOCALBASE}/include/libxml2" \ Best regards Andreas From pluknet at nginx.com Tue Dec 1 12:40:18 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 1 Dec 2020 12:40:18 +0000 Subject: nginx 1.18.0 implicitly enables TLS 1.3 (with only "ssl_protocols TLSv1.2; " in nginx.conf config) In-Reply-To: <038d18b7-865a-d90e-5179-8552e1f7c1cb@bartula.de> References: <20201130150759.GG1147@mdounin.ru> <3bc72df5-211f-6ed8-e829-e290a70224c5@bartula.de> <20201130223915.GH1147@mdounin.ru> <038d18b7-865a-d90e-5179-8552e1f7c1cb@bartula.de> Message-ID: <87429B96-E64A-4254-AC9E-F0FB3AAC1E5B@nginx.com> > On 1 Dec 2020, at 11:11, Andreas Bartelt wrote: > > On 11/30/20 11:39 PM, Maxim Dounin wrote: >> Hello! >> On Mon, Nov 30, 2020 at 06:41:18PM +0100, Andreas Bartelt wrote: >>> On 11/30/20 4:07 PM, Maxim Dounin wrote: >>>> Hello! >>>> >>>> On Sun, Nov 29, 2020 at 04:01:07PM +0100, nginx at bartelt.name wrote: >>>> >>>>> I've noticed that nginx 1.18.0 always enables TLS 1.3 even if not >>>>> configured to do so. I've observed this behavior on OpenBSD with (nginx >>>>> 1.18.0 linked against LibreSSL 3.3.0) and on Ubuntu 20.04 (nginx 1.18.0 >>>>> linked against OpenSSL 1.1.1f). I don't know which release of nginx >>>>> introduced this bug. >>>>> >>>>> From nginx.conf: >>>>> ssl_protocols TLSv1.2; >>>>> --> in my understanding, this config statement should only enable TLS >>>>> 1.2 but not TLS 1.3. However, the observed behavior is that TLS 1.3 is >>>>> implicitly enabled in addition to TLS 1.2. >>>> >>>> As long as "ssl_protocols TLSv1.2;" is the only ssl_protocols in >>>> nginx configuration, TLSv1.3 shouldn't be enabled. Much like when >>>> there are no "ssl_protocols" at all, as TLSv1.3 isn't enabled by >>>> default (for now, at least up to and including nginx 1.19.5). >>>> >>> >>> I've just retested this with my Ubuntu 20.04 based nginx test instance >>> from yesterday (nginx 1.18.0 linked against OpenSSL 1.1.1f) and noticed >>> that it works there as intended (i.e., "ssl_protocols TLSv1.2;" only >>> enables TLS 1.2 but not TLS 1.3). I don't know what I did wrong there >>> yesterday -- sorry for this. >>> >>> However, the problem persists on OpenBSD current with nginx 1.18.0 >>> (built from ports with default options which links against LibreSSL >>> 3.3.0 from base). Setting "ssl_protocols TLSv1.2;" enables TLS 1.2 as >>> well as TLS 1.3 there. >> I don't see any problems when testing with LibreSSL 3.3.0 as >> available on libressl.org and the very same configuration. So >> it's probably something specific to your system. >> Some possible reasons for the behaviour you are seeing, in no >> particular order: >> - Given that OpenBSD current and LibreSSL from base implies some >> arbitrary version of LibreSSL, this might be something with the >> changes present on your system but not in LibreSSL 3.3.0 >> release. >> - There may be something with the port you are using to compile >> nginx. Consider testing nginx compiled manually. >> - You are testing the wrong server (the name resolves to a >> different IP address, or the IP address is routed to a different >> server). 
>> Make sure you are seeing connection on nginx side,
>> something like "return 200 $ssl_protocol;" in the appropriate
>> server block and making a "GET / HTTP/1.0" request in s_client
>> would be a good test.
>> [...]
>
> I've just tested the same nginx.conf on two freshly installed OpenBSD
> based test systems:
> 1) release 6.8 (with nginx 1.18.0 / LibreSSL 3.2.2)
> 2) snapshot from today (with nginx 1.18.0 / LibreSSL 3.3.0 + more recent
> commits since it's a snapshot)
>
> Both instances were installed from scratch with the official OpenBSD
> binary tarballs and the nginx binary package from ports, respectively.
>
> Release 6.8 interprets "ssl_protocols TLSv1.2;" correctly. However, the
> snapshot instance enables TLS 1.2 and 1.3, i.e., this looks like a bug
> which has been recently introduced into OpenBSD current.
>
> Although OpenBSD 6.8 and the snapshot both use nginx 1.18.0, it's built
> differently on current:
> # cvs diff -r 1.145 -r 1.146 ports/www/nginx/Makefile
> [...]
> +CFLAGS+= -DTLS1_3_VERSION=0x0304

That is the culprit. It hijacks an established API that nginx expects;
don't do that.

Defining TLS1_3_VERSION forces nginx to raise the maximum supported
protocol version to TLSv1.3. By default it is not defined and, if libssl
supports the corresponding API, the maximum is set to TLSv1.2. The
presence of TLS1_3_VERSION also implies SSL_OP_NO_TLSv1_3, which is not
actually defined in this particular case. That's why disabling (or
rather, not enabling) TLSv1.3 in the nginx configuration has no effect.

Enabling TLSv1.3 this way[1] looks wrong.

[1] http://cvsweb.openbsd.org/cgi-bin/cvsweb/ports/www/nginx/Makefile#rev1.146

The port maintainer should probably look into a different way of doing
this, for example defining LIBRESSL_HAS_TLS1_3, as described in the
comments of LibreSSL's include/openssl/opensslfeatures.h.
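To illustrate the interplay described above, a simplified sketch; the real
nginx code lives in its OpenSSL glue layer and differs in detail:

/* Simplified sketch of the version clamping, not the literal nginx source. */
#ifdef SSL_CTX_set_max_proto_version
# ifdef TLS1_3_VERSION
    /* Headers advertise TLSv1.3: allow handshakes up to 1.3 and rely on
     * SSL_OP_NO_TLSv1_3 to switch it off per "ssl_protocols"... */
    SSL_CTX_set_max_proto_version(ctx, TLS1_3_VERSION);
# else
    SSL_CTX_set_max_proto_version(ctx, TLS1_2_VERSION);
# endif
#endif

#ifdef SSL_OP_NO_TLSv1_3
    /* ...but with TLS1_3_VERSION force-defined on the compiler command
     * line while SSL_OP_NO_TLSv1_3 stays undefined, this disabling branch
     * is never compiled, so "ssl_protocols TLSv1.2;" cannot exclude 1.3. */
    if (!(protocols & NGX_SSL_TLSv1_3)) {
        SSL_CTX_set_options(ctx, SSL_OP_NO_TLSv1_3);
    }
#endif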
--
Sergey Kandaurov

From nginx-forum at forum.nginx.org  Tue Dec  1 12:46:18 2020
From: nginx-forum at forum.nginx.org (Driver)
Date: Tue, 01 Dec 2020 07:46:18 -0500
Subject: HTTP header size setting
In-Reply-To: <1c48622023ddea18e06b61b78b52739e.NginxMailingListEnglish@forum.nginx.org>
References: <1c48622023ddea18e06b61b78b52739e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8ae7614c96a5c05e82bc4f6f9231e0df.NginxMailingListEnglish@forum.nginx.org>

I found out that I need to increase the http2_max_field_size value.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290117,290120#msg-290120

From jamesread5737 at gmail.com  Tue Dec  1 17:33:47 2020
From: jamesread5737 at gmail.com (James Read)
Date: Tue, 1 Dec 2020 17:33:47 +0000
Subject: nginx internals: processors versus network
Message-ID:

Hi,

I have a question about nginx internals. How does nginx ensure high
throughput? I understand that nginx handles many parallel connections by
using epoll. But what about processors? Is connection handling spread
across multiple processors to handle any processing bottleneck?

The reason I ask is because I am building a web crawler using libev and
libcurl, and my aim is to match nginx's throughput capability. I made a
web crawler that can handle 10,000+ connections, but throughput is not
impressive: ~16 Mbps on average. It was suggested to me on Stack Overflow
that this could be because of a processor bottleneck. Does nginx suffer
from similar limitations?

James Read

From r at roze.lv  Tue Dec  1 18:37:25 2020
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 1 Dec 2020 20:37:25 +0200
Subject: nginx internals: processors versus network
In-Reply-To:
Message-ID: <000c01d6c811$03862310$0a926930$@roze.lv>

> I have a question about nginx internals. How does nginx ensure high
> throughput? I understand that nginx handles many parallel connections by
> using epoll. But what about processors? Is connection handling spread
> across multiple processors to handle any processing bottleneck?

If necessary you can assign workers to particular cores:
http://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity

> I made a web crawler that can handle 10,000+ connections, but throughput
> is not impressive: ~16 Mbps on average.

You'd have to elaborate on how nginx actually comes into play in a crawler:
some module? Lua code?

rr
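To illustrate the directives pointed to above, a minimal sketch; the
connection count is an arbitrary assumption:

    # One worker per core, each pinned to its own CPU, with a large
    # epoll-driven connection pool per worker.
    worker_processes     auto;    # one worker per available core
    worker_cpu_affinity  auto;    # pin each worker to its own CPU

    events {
        use epoll;                 # default on Linux; shown for clarity
        worker_connections 10240;  # per-worker concurrent connection limit
    }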
From nmilas at noa.gr  Tue Dec  1 21:21:29 2020
From: nmilas at noa.gr (Nikolaos Milas)
Date: Tue, 1 Dec 2020 23:21:29 +0200
Subject: Puzzling Log messages talking to php-fpm
Message-ID: <1e6bd587-c52f-5fb6-78d8-bb0d2ac6bf41@noa.gr>

Hello,

To start with, I am not an nginx geek, so please be patient with me!

We have a server (real name substituted by mapserver.example.com) running
nginx 1.18.0 on CentOS 7, with php-fpm listening on port 9001. The server
is only serving a maps application. The application is mainly called by
another server (real name substituted with mainapplication.example.com)
which displays the maps app through an iframe.

Although I cannot see any disruptions in the service of mapserver (all my
requests appear successful), I constantly see pairs of the following errors
in the logs (I post here two examples: one with an IPv6 client and one with
an IPv4 client):

2020/12/01 20:45:42 [error] 32515#32515: *314521 connect() failed (111: Connection refused) while connecting to upstream, client: 2a02:587:da23:700:[last_4_parts_removed], server: mapserver.example.com, request: "GET /data/1 HTTP/1.1", upstream: "fastcgi://[::1]:9001", host: "mapserver.example.com", referrer: "http://mapserver.example.com/index/en/1"

2020/12/01 20:45:42 [warn] 32515#32515: *314521 upstream server temporarily disabled while connecting to upstream, client: 2a02:587:da23:700:[last_4_parts_removed], server: mapserver.example.com, request: "GET /data/1 HTTP/1.1", upstream: "fastcgi://[::1]:9001", host: "mapserver.example.com", referrer: "http://mapserver.example.com/index/en/1"

2020/12/01 20:46:15 [error] 32516#32516: *314532 connect() failed (111: Connection refused) while connecting to upstream, client: ::ffff:193.140.[last_2_octets_removed], server: mapserver.example.com, request: "GET /index/en/1 HTTP/1.1", upstream: "fastcgi://[::1]:9001", host: "mapserver.example.com", referrer: "http://mainapplication.example.com"

2020/12/01 20:46:15 [warn] 32516#32516: *314532 upstream server temporarily disabled while connecting to upstream, client: ::ffff:193.140.[last_2_octets_removed], server: mapserver.example.com, request: "GET /index/en/1 HTTP/1.1", upstream: "fastcgi://[::1]:9001", host: "mapserver.example.com", referrer: "http://mainapplication.example.com"

My tests show that my own client IP address is logged among these errors,
even though I can't see a denial of service. So I have reached the
conclusion that probably all requests to nginx (despite being successful)
trigger the above logging.

My questions:

- What do the above errors mean? Could they indicate some temporary "denial
of service" by php-fpm, which in the end responds successfully, so the
client does not notice anything except perhaps a slower than expected
response? Or could they just indicate that php-fpm takes long to complete
the reply, which nginx then logs as an error?

- Would these log messages provide a hint to another problem, based on your
experience?

- Should I troubleshoot the issue more and, if so, how?

- Any config suggestions to help resolve the issue?

Nginx and php-fpm configs follow for your reference. (Any "side
suggestions" on these will be welcome!)

I appreciate your help!

Also:

==============================================================================
nginx_status:
==============================================================================
Active connections: 1
server accepts handled requests
 161566 161566 310064
Reading: 0 Writing: 1 Waiting: 0
==============================================================================

==============================================================================
fpm_status:
==============================================================================
pool:                 www
process manager:      dynamic
start time:           01/Dec/2020:19:49:36 +0200
start since:          9465
accepted conn:        1600
listen queue:         0
max listen queue:     0
listen queue len:     128
idle processes:       5
active processes:     1
total processes:      6
max active processes: 2
max children reached: 0
slow requests:        0
==============================================================================
==============================================================================
Nginx config (Real server_name has been obfuscated)
==============================================================================
server {
    listen [::]:80 ipv6only=off;
    listen 443 ssl http2 default deferred;
    listen [::]:443 ssl http2 default deferred;
    server_name mapserver.example.com;

    # Deny if the user agent is missing, empty, or contains just a single hyphen.
    if ($http_user_agent = "") {
        return 403;
    }
    if ($http_user_agent = "-") {
        return 403;
    }

    ssl_certificate /etc/pki/tls/certs/star_example_com-19623584-with_CA.crt;
    ssl_certificate_key /etc/pki/tls/private/star_example.com-19623584.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_dhparam /etc/pki/tls/certs/dhparam.pem;

    access_log /var/webs/example/log/access_log main;
    error_log /var/webs/example/log/error_log warn;

    root /var/webs/example/public/;
    index index.php index.html index.htm index.cgi default.html default.htm default.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
        allow all;
    }

    location ~ /nginx_status(.*) {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow ::1;
        allow 10.10.10.0/24;
        deny all;
    }

    location /fpm_status {
        access_log off;
        allow 127.0.0.1;
        allow ::1;
        allow 10.10.10.0/24;
        deny all;
        fastcgi_cache off;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass localhost:9001;
    }

    location ~ /tests {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow ::1;
        allow 10.10.10.0/24;
        deny all;
    }

    location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
    }

    location ~ /\.ht {
        deny all;
    }

    location ~ \.php$ {
        allow all;

        # Setup var defaults
        set $no_cache "";

        # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }

        # Drop no cache cookie if need be
        # (for some reason, add_header fails if included in prior if-block)
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }

        # Bypass cache if no-cache cookie is set
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }

        # Bypass cache if flag is set
#       fastcgi_no_cache $no_cache;
#       fastcgi_cache_bypass $no_cache;
        fastcgi_no_cache "1";
        fastcgi_cache_bypass "1";

        fastcgi_cache microcache;
        fastcgi_cache_key $scheme$host$request_uri$request_method;
        fastcgi_cache_valid 200 301 302 303 502 5s;
        fastcgi_cache_use_stale updating error timeout invalid_header http_500;
        fastcgi_pass_header Set-Cookie;
        fastcgi_pass_header Cookie;
        fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_buffer_size 384k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 384k;
        fastcgi_temp_file_write_size 384k;
        fastcgi_read_timeout 240;
        fastcgi_pass localhost:9001;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    # caching of files
    location ~* \.(ico|pdf|flv)$ {
        expires 2m;
    }

    location ~* gmap(.*)\.html$ {
        expires -1;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt|html|htm)$ {
        expires 2m;
    }
}
==============================================================================

==============================================================================
php-fpm config:
==============================================================================
[www]
user = nginx
group = nginx
listen = 127.0.0.1:9001
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 10000
pm.status_path = /fpm_status
slowlog = /var/opt/remi/php74/log/php-fpm/www-slow.log
catch_workers_output = yes
security.limit_extensions = .php .php3 .php4 .php5 .php7
php_admin_value[error_log] = /var/opt/remi/php74/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path]    = /var/opt/remi/php74/lib/php/session
php_value[soap.wsdl_cache_dir]  = /var/opt/remi/php74/lib/php/wsdlcache
==============================================================================

Thanks in advance!

All the best,
Nick

From r at roze.lv  Tue Dec  1 22:44:36 2020
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 2 Dec 2020 00:44:36 +0200
Subject: Puzzling Log messages talking to php-fpm
In-Reply-To: <1e6bd587-c52f-5fb6-78d8-bb0d2ac6bf41@noa.gr>
References: <1e6bd587-c52f-5fb6-78d8-bb0d2ac6bf41@noa.gr>
Message-ID: <001e01d6c833$8be0a150$a3a1e3f0$@roze.lv>

> We have a server (real name substituted by mapserver.example.com) running
> nginx 1.18.0 on CentOS 7 with php-fpm listening on port 9001.

Does the fpm pool also listen on the IPv6 interface? Check:

ss -ntlr | grep 9001

Do you see [::]:9001 there?

Since you have "fastcgi_pass localhost:9001;" I assume at some point
localhost resolves to an IPv6 address.

> one with an IPv6 client and one with an IPv4 client):

All your logs show IPv6 clients only: 2a02:587:da23:700:[last_4_parts_removed]
and ::ffff:193.140.[last_2_octets_removed] are both IPv6 addresses (the
second is a so-called IPv4-mapped IPv6 address).

Either make fpm listen on all interfaces, or change the nginx config to:

fastcgi_pass 127.0.0.1:9001;

Or even better, use unix sockets so you can avoid the TCP stack between
nginx and php-fpm entirely.
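As a sketch of that last suggestion (the socket path, owner and mode are
assumptions; adjust to the local php-fpm layout):

    ; php-fpm pool (www.conf): listen on a unix socket that the
    ; nginx worker user can open.
    listen = /run/php-fpm/www.sock
    listen.owner = nginx
    listen.group = nginx
    listen.mode = 0660

    # ...and on the nginx side, instead of "fastcgi_pass localhost:9001;":
    #     fastcgi_pass unix:/run/php-fpm/www.sock;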
rr

From nginx-forum at forum.nginx.org  Wed Dec  2 09:36:20 2020
From: nginx-forum at forum.nginx.org (TuzluKestane)
Date: Wed, 02 Dec 2020 04:36:20 -0500
Subject: Mp4 mime-type module
Message-ID: <353e47bdb1b3f4326817c72c51a30168.NginxMailingListEnglish@forum.nginx.org>

Hello,

I am trying to build an mp4 mime-type module, but I can't find the MIME
type for comparison. I tried to use r->headers_out.content_type, but it
didn't work. Here is the piece of the code:

tmp = r->headers_out.content_type;
if (tmp != NULL && ngx_strncmp(tmp, "video/mp4", 9) == 0) {
    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, " found");
    //r->keepalive = 0;
    return NGX_HTTP_FORBIDDEN;
}

Thanks a lot!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290143,290143#msg-290143
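For what it's worth, a hedged sketch of the likely fix: in nginx,
r->headers_out.content_type is an ngx_str_t value rather than a pointer,
so it compares via its .data/.len members; it is also typically populated
only during the content phase, so checking it earlier sees an empty value.
The handler name below is hypothetical.

/* Hedged sketch, not a complete module. */
static ngx_str_t  mp4_type = ngx_string("video/mp4");

static ngx_int_t
check_mp4_type(ngx_http_request_t *r)
{
    ngx_str_t  *ct = &r->headers_out.content_type;

    /* Prefix match, since a charset may follow the media type. */
    if (ct->len >= mp4_type.len
        && ngx_strncasecmp(ct->data, mp4_type.data, mp4_type.len) == 0)
    {
        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "found");
        return NGX_HTTP_FORBIDDEN;
    }

    return NGX_DECLINED;
}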
From xeioex at nginx.com  Wed Dec  2 10:54:06 2020
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Wed, 2 Dec 2020 13:54:06 +0300
Subject: njs-0.5.0
Message-ID:

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs).

This release focuses mostly on adding Buffer support in nginx modules.
Buffer is a better alternative to string when working with arbitrary data,
and especially with binary protocols, because JavaScript strings operate
on characters, not bytes. A character may take up to 4 bytes in UTF-8.
The new Buffer properties are not designed to replace the string ones,
but to be a better alternative for binary use cases.

Notable new features:

- r.rawVariables (nginx variables as a Buffer):

: function is_local(r) {
:     return r.rawVariables.binary_remote_addr
:            .equals(Buffer.from([127,0,0,1]));
: }

- r.requestBuffer (request body as a Buffer):

For a request with the following request body:
    '{"a":{"b":"BAR"}}'

: function get_slice_of_req_body(r) {
:     var body = r.requestBuffer;
:     var view = new DataView(body.buffer, 5, 11);
:     r.return(200, view);
: }

The response body will be:
    '{"b":"BAR"}'

- s.on() events now support new callbacks which receive the data chunk as
a Buffer; this is especially useful for binary protocols.

You can learn more about njs:
- Overview and introduction: http://nginx.org/en/docs/njs/
- Using Babel to transpile JS code >= ES6 for njs:
  https://github.com/jirutka/babel-preset-njs
- Using node modules with njs:
  http://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files:
  http://nginx.org/en/docs/njs/typescript.html

Feel free to try it and give us feedback on:
- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.5.0                                       01 Dec 2020

    nginx modules:

    *) Feature: introduced global "ngx" object.
       The following methods were added:
         ngx.log(level, msg)
       The following properties were added:
         ngx.INFO,
         ngx.WARN,
         ngx.ERR.

    *) Feature: added support for Buffer object where string is expected.

    *) Feature: added Buffer version of existing properties.
       The following properties were added:
         r.requestBuffer (r.requestBody),
         r.responseBuffer (r.responseBody),
         r.rawVariables (r.variables),
         s.rawVariables (s.variables).
       The following events were added in stream module:
         upstream (upload),
         downstream (download).

    *) Improvement: added aliases to existing properties.
       The following properties were added:
         r.requestText (r.requestBody),
         r.responseText (r.responseBody).

    *) Improvement: throwing an exception in r.internalRedirect() for a
       subrequest.

    *) Bugfix: fixed promise r.subrequest() with error_page redirect.

    *) Bugfix: fixed promise events handling.

    Core:

    *) Feature: added TypeScript definitions for builtin modules.
       Thanks to Jakub Jirutka.

    *) Feature: tracking unhandled promise rejection.

    *) Feature: added initial iterator support.
       Thanks to Artem S. Povalyukhin.

    *) Improvement: TypeScript definitions are refactored.
       Thanks to Jakub Jirutka.

    *) Improvement: added forgotten support for Object.prototype.valueOf()
       in Buffer.from().

    *) Bugfix: fixed heap-use-after-free in JSON.parse().

    *) Bugfix: fixed heap-use-after-free in JSON.stringify().

    *) Bugfix: fixed JSON.stringify() for arrays resizable via getters.

    *) Bugfix: fixed heap-buffer-overflow for
       RegExp.prototype[Symbol.replace].

    *) Bugfix: fixed returned value for Buffer.prototype.write* functions.

    *) Bugfix: fixed querystring.stringify().
       Thanks to Artem S. Povalyukhin.

    *) Bugfix: fixed the catch handler for Promise.prototype.finally().

    *) Bugfix: fixed querystring.parse().

From nmilas at noa.gr  Thu Dec  3 06:43:33 2020
From: nmilas at noa.gr (Nikolaos Milas)
Date: Thu, 3 Dec 2020 08:43:33 +0200
Subject: Puzzling Log messages talking to php-fpm
In-Reply-To: <001e01d6c833$8be0a150$a3a1e3f0$@roze.lv>
References: <1e6bd587-c52f-5fb6-78d8-bb0d2ac6bf41@noa.gr> <001e01d6c833$8be0a150$a3a1e3f0$@roze.lv>
Message-ID:

On 2/12/2020 12:44 p.m., Reinis Rozitis wrote:

> Or even better, use unix sockets so you can avoid the TCP stack between
> nginx and php-fpm entirely.

Thank you very much for your analysis and advice. You found the cause of
the issue!

I have managed to switch to connecting via a unix socket, and the issue is
gone!

I guess I could have also used (in php-fpm) the directive:

listen = 9001

(which should bind to all interfaces) rather than:

listen = 127.0.0.1:9001

(which was used earlier), but I decided to follow your advice and connect
via a unix socket.

I appreciate your time and eagerness to help!

Thanks again,
Nick

From pgnet.dev at gmail.com  Sat Dec  5 22:22:59 2020
From: pgnet.dev at gmail.com (PGNet Dev)
Date: Sat, 5 Dec 2020 14:22:59 -0800
Subject: v1.19.5 OOPS: "Main process exited, code=dumped, status=11/SEGV" ?
Message-ID:

I'm running nginx/1.19.5 on a Fedora32 VM, w/

  uname -rm
  5.9.11-100.fc32.x86_64 x86_64

It's run for ages without issues. At least that I'd noticed ...

Today, I caught a SEGV/core-dump; the server stopped

  systemctl status nginx
nginx.service - The nginx HTTP and reverse proxy server Loaded: loaded (/etc/systemd/system/nginx.service; enabled; vendor preset: disabled) Active: failed (Result: core-dump) since Sat 2020-12-05 05:58:03 PST; 7h ago Process: 993 ExecStartPre=/bin/chown -R wwwrun:www /usr/local/etc/nginx (code=exited, status=0/SUCCESS) Process: 999 ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/etc/nginx/nginx.conf -g pid /run/nginx/nginx.pid; (code=exited, status=0/SUCCESS) Process: 1063 ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/etc/nginx/nginx.conf -g pid /run/nginx/nginx.pid; (code=exited, status=0/SUCCESS) Process: 1108 ExecStartPost=/bin/chown -R wwwrun:www /var/log/nginx (code=exited, status=0/SUCCESS) Process: 25986 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, status=0/SUCCESS) Main PID: 1103 (code=dumped, signal=SEGV) CPU: 14.607s Checking logs (at current production loglevels) for this one, nothing out of the ordinary ... EXCEPT The last log entry I see,: /var/log/nginx/main.access.log:61.219.11.153 _ - [05/Dec/2020:05:08:14 -0800] \x01A\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 "400" 150 "-" "-" "-" Given the proximity of the timestamp, I'g guess it's related? I haven't yet figured out where/how to grab the core-dump ; working on that. Checking history it's happened a few times xzegrep SEGV /var/log/messages 2020-11-29T05:52:32.436235-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-01T05:39:03.218376-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-03T05:17:51.653637-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-05T05:58:03.611240-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV where each instance in log looks like, .... 2020-12-05T05:55:00.854490-08:00 vm0026 systemd[25768]: Reached target Shutdown. 2020-12-05T05:55:00.854510-08:00 vm0026 systemd[25768]: systemd-exit.service: Succeeded. 2020-12-05T05:55:00.854531-08:00 vm0026 systemd[25768]: Finished Exit the Session. 2020-12-05T05:55:00.854550-08:00 vm0026 systemd[25768]: Reached target Exit the Session. 2020-12-05T05:55:00.858225-08:00 vm0026 systemd[1]: user at 0.service: Succeeded. 2020-12-05T05:55:00.858322-08:00 vm0026 systemd[1]: Stopped User Manager for UID 0. 2020-12-05T05:55:00.860232-08:00 vm0026 systemd[1]: Stopping User Runtime Directory /run/user/0... 2020-12-05T05:55:00.868288-08:00 vm0026 systemd[1]: run-user-0.mount: Succeeded. 
2020-12-05T05:55:00.870265-08:00 vm0026 systemd[1]: user-runtime-dir at 0.service: Succeeded. 2020-12-05T05:55:00.870383-08:00 vm0026 systemd[1]: Stopped User Runtime Directory /run/user/0. 2020-12-05T05:55:00.871216-08:00 vm0026 systemd[1]: Removed slice User Slice of UID 0. 2020-12-05T05:58:03.418222-08:00 vm0026 systemd[1]: Reloading The nginx HTTP and reverse proxy server. 2020-12-05T05:58:03.420214-08:00 vm0026 systemd[1]: Reloaded The nginx HTTP and reverse proxy server. 2020-12-05T05:58:03.432221-08:00 vm0026 systemd[1]: nginx.service: Unit cannot be reloaded because it is inactive. 2020-12-05T05:58:03.432358-08:00 vm0026 systemctl[25987]: nginx.service is not active, cannot reload. 2020-12-05T05:58:03.468235-08:00 vm0026 kernel: nginx[1103]: segfault at 10 ip 00007f5c566d6283 sp 00007ffeebdca500 error 4 in libperl.so.5.30.3[7f5c56668000+16d000] 2020-12-05T05:58:03.468374-08:00 vm0026 kernel: Code: f9 ff 48 89 45 10 48 83 c4 18 5b 5d 41 5c 41 5d 41 5e 41 5f c3 66 90 0f b6 7f 30 48 c1 e8 03 48 29 f8 48 89 c5 74 89 48 8b 02 <4c> 8b 68 10 4d 85 ed 0f 84 28 01 00 00 0f b6 40 30 49 c1 ed 03 49 2020-12-05T05:58:03.468407-08:00 vm0026 kernel: potentially unexpected fatal signal 11. 2020-12-05T05:58:03.468436-08:00 vm0026 kernel: CPU: 1 PID: 1103 Comm: nginx Not tainted 5.9.11-100.fc32.x86_64 #1 2020-12-05T05:58:03.468460-08:00 vm0026 kernel: Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014 2020-12-05T05:58:03.468483-08:00 vm0026 kernel: RIP: 0033:0x7f5c566d6283 2020-12-05T05:58:03.468504-08:00 vm0026 kernel: Code: f9 ff 48 89 45 10 48 83 c4 18 5b 5d 41 5c 41 5d 41 5e 41 5f c3 66 90 0f b6 7f 30 48 c1 e8 03 48 29 f8 48 89 c5 74 89 48 8b 02 <4c> 8b 68 10 4d 85 ed 0f 84 28 01 00 00 0f b6 40 30 49 c1 ed 03 49 2020-12-05T05:58:03.468549-08:00 vm0026 kernel: RSP: 002b:00007ffeebdca500 EFLAGS: 00010202 2020-12-05T05:58:03.468570-08:00 vm0026 kernel: RAX: 0000000000000000 RBX: 00007ffeebdca6c8 RCX: 0000000000000001 2020-12-05T05:58:03.468587-08:00 vm0026 kernel: RDX: 0000556ebd9e9750 RSI: 0000556ebd924b80 RDI: 0000000000000000 2020-12-05T05:58:03.468607-08:00 vm0026 kernel: RBP: 0000000000000597 R08: 00007ffeebdca6c8 R09: 0000000000000011 2020-12-05T05:58:03.468628-08:00 vm0026 kernel: R10: 0000556ebda8c808 R11: 00007ffeebdca570 R12: 0000556ebd99e7e0 2020-12-05T05:58:03.468646-08:00 vm0026 kernel: R13: 00007ffeebdca568 R14: 0000000000000001 R15: 0000000000000001 2020-12-05T05:58:03.468667-08:00 vm0026 kernel: FS: 00007f5c5837bb80 GS: 0000000000000000 !! 2020-12-05T05:58:03.611240-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-05T05:58:03.645225-08:00 vm0026 systemd[1]: nginx.service: Failed with result 'core-dump'. 2020-12-05T05:58:03.646343-08:00 vm0026 systemd[1]: nginx.service: Consumed 14.607s CPU time. 2020-12-05T06:00:43.703240-08:00 vm0026 systemd[1]: Starting system activity accounting tool... 2020-12-05T06:00:43.712400-08:00 vm0026 systemd[1]: sysstat-collect.service: Succeeded. 2020-12-05T06:00:43.712741-08:00 vm0026 systemd[1]: Finished system activity accounting tool. 2020-12-05T06:01:01.537222-08:00 vm0026 systemd[1]: Created slice User Slice of UID 982. 2020-12-05T06:01:01.538234-08:00 vm0026 systemd[1]: Starting User Runtime Directory /run/user/982... 2020-12-05T06:01:01.550219-08:00 vm0026 systemd[1]: Finished User Runtime Directory /run/user/982. 2020-12-05T06:01:01.552232-08:00 vm0026 systemd[1]: Starting User Manager for UID 982... 
2020-12-05T06:01:01.705790-08:00 vm0026 systemd[26023]: Startup finished in 132ms.
....

IIUC, nginx's "400" is 'bad request' (here, apparently triggered by a
malformed, non-HTTP request). But I do note,

  segfault at 10 ip 00007f5c566d6283 sp 00007ffeebdca500 error 4 in libperl.so.5.30.3[7f5c56668000+16d000]

Not clear what the cause of this infrequent/intermittent issue is.

Apart from blocking that IP/block/country, what's the right way to harden
nginx against this^? Ideally, preventing the SEGV in the first place. Or,
hints as to what to look at more closely?

From nginx-forum at forum.nginx.org  Sat Dec  5 22:35:09 2020
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Sat, 05 Dec 2020 17:35:09 -0500
Subject: v1.19.5 OOPS: "Main process exited, code=dumped, status=11/SEGV" ?
In-Reply-To:
Message-ID:

Known perl issue, google: "segfault at 10 error 4 in libperl.so"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290159,290160#msg-290160

From pgnet.dev at gmail.com  Sat Dec  5 22:50:38 2020
From: pgnet.dev at gmail.com (PGNet Dev)
Date: Sat, 5 Dec 2020 14:50:38 -0800
Subject: v1.19.5 OOPS: "Main process exited, code=dumped, status=11/SEGV" ?
In-Reply-To:
Message-ID: <72c1593c-cf94-234b-e843-89159cea9858@gmail.com>

On 12/5/20 2:35 PM, itpp2012 wrote:
> Known perl issue, google: "segfault at 10 error 4 in libperl.so"

aha. +1. thanks!

noting, https://serverfault.com/questions/1041031/nginx-sometimes-gets-killed-after-reloading-it-using-systemd

  ... If you haven't got a need to run Perl code inside nginx (as most
  people do not) then you can uninstall the package libnginx-mod-http-perl
  and restart nginx to avoid the problem. This package was pulled in by
  the virtual package nginx-extras but most people don't actually run perl
  in the web server and so don't need it. ...

my server IS built with

  ... --with-http_perl_module=dynamic ...

and in config

  load_module /usr/local/nginx-modules/ngx_http_perl_module.so;

afayk, is

  - load_module /usr/local/nginx-modules/ngx_http_perl_module.so;
  + #load_module /usr/local/nginx-modules/ngx_http_perl_module.so;

a sufficient cure? or is a rebuild withOUT the
--with-http_perl_module=dynamic opt required?

Since it's dynamic, I suspect the simple disable _should_ do the trick;
still reading to find/check details ...
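A sketch of that disable-and-test cycle, using the paths from the post
above; note that if nginx -t then complains about unknown perl_*
directives, those would have to be removed from the config as well:

    # Comment out the dynamic perl module, then verify and reload.
    sed -i 's|^load_module /usr/local/nginx-modules/ngx_http_perl_module.so;|#&|' \
        /usr/local/etc/nginx/nginx.conf
    nginx -t -c /usr/local/etc/nginx/nginx.conf && systemctl reload nginx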
From rejaine at bhz.jamef.com.br  Tue Dec  8 09:26:42 2020
From: rejaine at bhz.jamef.com.br (Rejaine Silveira Monteiro)
Date: Tue, 8 Dec 2020 06:26:42 -0300
Subject: No subject
Message-ID:

Hi,

I'm trying to update nginx by following the instructions on this link:
https://nginx.org/en/linux_packages.html?_ga=2.188654056.174434793.1607418558-7036704.1590689345#SLES

# zypper addrepo --gpgcheck --type yum --refresh --check \
    'http://nginx.org/packages/sles12' nginx-stable
# curl -o /tmp/nginx_signing.key https://nginx.org/keys/nginx_signing.key
# gpg --with-fingerprint /tmp/nginx_signing.key
# zypper install nginx

All the steps described were performed, but there is an error about
libcrypto.so (even though libcrypto.so.1.0.0 is installed on my server):

Problem: nothing provides libcrypto.so.1.0.0(OPENSSL_1.0.0)(64bit) needed by nginx-1.18.0-2.sles12.ngx.x86_64
 Solution 1: do not install nginx-1.18.0-2.sles12.ngx.x86_64
 Solution 2: break nginx-1.18.0-2.sles12.ngx.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c):

I tried to install both the stable and mainline packages, and my server
already has libcrypto installed (libopenssl1_0_0-1.0.2j-60.52.1.x86_64):

# whereis libcrypto.so.1.0.0
libcrypto.so.1.0: /usr/lib64/libcrypto.so.1.0.0 /lib/libcrypto.so.1.0.0 /lib64/libcrypto.so.1.0.0

Any idea?

From thresh at nginx.com  Tue Dec  8 09:35:59 2020
From: thresh at nginx.com (Konstantin Pavlov)
Date: Tue, 8 Dec 2020 12:35:59 +0300
Subject: [no subject]
In-Reply-To:
Message-ID: <02259b41-6914-f323-9c18-0763ad2560fd@nginx.com>

Hello,

08.12.2020 12:26, Rejaine Silveira Monteiro wrote:
> I'm trying to update nginx by following the instructions on this link:
> [...]
> Problem: nothing provides libcrypto.so.1.0.0(OPENSSL_1.0.0)(64bit)
> needed by nginx-1.18.0-2.sles12.ngx.x86_64
> [...]

What exact version of SLES 12 are you running?

--
Konstantin Pavlov
https://www.nginx.com/
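One way to dig into the dependency mismatch, as a hedged sketch using
standard rpm queries (package name as in the thread):

    # Which installed package, if any, provides the versioned capability
    # the nginx package requires?
    rpm -q --whatprovides 'libcrypto.so.1.0.0(OPENSSL_1.0.0)(64bit)'

    # What does the installed library package actually declare?
    rpm -q --provides libopenssl1_0_0 | grep OPENSSL_1.0.0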
From rejaine at bhz.jamef.com.br  Tue Dec  8 09:44:59 2020
From: rejaine at bhz.jamef.com.br (Rejaine Silveira Monteiro)
Date: Tue, 8 Dec 2020 06:44:59 -0300
Subject: [no subject]
In-Reply-To: <02259b41-6914-f323-9c18-0763ad2560fd@nginx.com>
Message-ID:

(sorry for the email without subject)

I am using SLES12 SP3.

On Tue, Dec 8, 2020 at 06:36, Konstantin Pavlov wrote:
> What exact version of SLES 12 are you running?
> [...]

From thresh at nginx.com  Tue Dec  8 11:09:01 2020
From: thresh at nginx.com (Konstantin Pavlov)
Date: Tue, 8 Dec 2020 14:09:01 +0300
Subject: [no subject]
In-Reply-To:
References: <02259b41-6914-f323-9c18-0763ad2560fd@nginx.com>
Message-ID:

Hello,

I don't have a SLES12 SP3 machine easily available, but on the latest
SLES12 SP5 this dependency is provided by the libopenssl1_0_0 package:

$ zypper info libopenssl1_0_0
Information for package libopenssl1_0_0:
----------------------------------------
Repository     : SLES12-SP5-Updates
Name           : libopenssl1_0_0
Version        : 1.0.2p-3.27.1
Arch           : x86_64
Vendor         : SUSE LLC
Support Level  : Level 3
Installed Size : 3.0 MiB
Installed      : Yes (automatically)
Status         : out-of-date (version 1.0.2p-3.14.1 installed)
Source package : openssl-1_0_0-1.0.2p-3.27.1.src
Summary        : Secure Sockets and Transport Layer Security
Description    :
    OpenSSL is a software library to be used in applications that need to
    secure communications over computer networks against eavesdropping or
    need to ascertain the identity of the party at the other end.
    OpenSSL contains an implementation of the SSL and TLS protocols.

You should look into whether it's possible to install it on SP3, or
consider updating to SP5.

Hope this helps,

08.12.2020 12:44, Rejaine Silveira Monteiro wrote:
> (sorry for the email without subject)
> I am using SLES12 SP3.
> [...]

--
Konstantin Pavlov
https://www.nginx.com/
From rejaine at bhz.jamef.com.br  Tue Dec  8 11:36:14 2020
From: rejaine at bhz.jamef.com.br (Rejaine Silveira Monteiro)
Date: Tue, 8 Dec 2020 08:36:14 -0300
Subject: No subject
In-Reply-To:
Message-ID:

openssl-1.0.2j is already installed:

# whereis libcrypto.so.1.0.0
libcrypto.so.1.0: /usr/lib64/libcrypto.so.1.0.0 /lib/libcrypto.so.1.0.0 /lib64/libcrypto.so.1.0.0

In the end I upgraded to nginx-1.17.8, which is the latest version
officially supported on SLES12 SP3. To update to nginx 1.19.5 I would
also have to update openssl, and that is not possible right now (we will
update to SLES15 at another time).

On Tue, Dec 8, 2020 at 08:09, Konstantin Pavlov wrote:
Em ter., 8 de dez. de 2020 às 08:09, Konstantin Pavlov escreveu:

> Hello,
>
> I don't have a SLES12 SP3 machine easily available, but on the latest SLES12 SP5 this dependency is provided via the libopenssl1_0_0 package:
>
> $ zypper info libopenssl1_0_0
>
> Information for package libopenssl1_0_0:
> ----------------------------------------
> Repository     : SLES12-SP5-Updates
> Name           : libopenssl1_0_0
> Version        : 1.0.2p-3.27.1
> Arch           : x86_64
> Vendor         : SUSE LLC
> Support Level  : Level 3
> Installed Size : 3.0 MiB
> Installed      : Yes (automatically)
> Status         : out-of-date (version 1.0.2p-3.14.1 installed)
> Source package : openssl-1_0_0-1.0.2p-3.27.1.src
> Summary        : Secure Sockets and Transport Layer Security
> Description    :
>     OpenSSL is a software library to be used in applications that need to secure communications over computer networks against eavesdropping or need to ascertain the identity of the party at the other end. OpenSSL contains an implementation of the SSL and TLS protocols.
>
> You should look into whether it's possible to install it on SP3, or consider updating to SP5.
>
> Hope this helps,
>
> 08.12.2020 12:44, Rejaine Silveira Monteiro wrote:
> > (sorry for the email without subject)
> > I am using SLES12 SP3
>
> --
> Konstantin Pavlov
> https://www.nginx.com/
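For readers who hit the same dependency error, a minimal diagnostic sketch (assuming stock zypper on SLES12; the exact output and available package versions differ per service pack):

    # ask zypper which package provides the versioned symbol the nginx
    # package requires
    zypper what-provides 'libcrypto.so.1.0.0(OPENSSL_1.0.0)(64bit)'

    # if libopenssl1_0_0 is installed but too old, a vendor update may
    # bring in a build that carries the OPENSSL_1.0.0 version tag
    zypper refresh
    zypper update libopenssl1_0_0
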
From praveenssit at gmail.com  Tue Dec  8 12:14:59 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Tue, 8 Dec 2020 17:44:59 +0530
Subject: Nginx HA
Message-ID: 

Hello,

I'm trying to achieve HA and was going through a few forums that explained the same using pacemaker and keepalived. I would like to know the best practices and proven methods to achieve the same. Thanks!

--
*Regards,*
*Praveen*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From atif.ali at gmail.com  Tue Dec  8 13:40:09 2020
From: atif.ali at gmail.com (aT)
Date: Tue, 8 Dec 2020 17:40:09 +0400
Subject: Nginx HA
In-Reply-To: 
References: 
Message-ID: 

We are using the following architecture to maintain HA for Nginx:

Machine1 { Keepalived -> LVS } -> Nginx
Machine2 { Keepalived -> LVS } -> Nginx

VIP - managed by Keepalived
LVS with Keepalived in HA, as layer 4 load balancer
Nginx on multiple machines

This might help:
https://programming.vip/docs/setup-of-linux-high-availability-lvs-load-balancing-cluster-keepalived-lvs-dr.html

On Tue, Dec 8, 2020 at 4:15 PM Praveen Kumar K S wrote:
> Hello,
>
> I'm trying to achieve HA and was going through a few forums that explained the same using pacemaker and keepalived. I would like to know the best practices and proven methods to achieve the same. Thanks!
>
> --
> *Regards,*
> *Praveen*
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Syed Atif Ali
Desk: 971 4 4493131
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gozdal at gmail.com  Tue Dec  8 14:09:51 2020
From: gozdal at gmail.com (Marcin Gozdalik)
Date: Tue, 8 Dec 2020 14:09:51 +0000
Subject: Ubuntu repo disappeared
Message-ID: 

Hello

It seems that http://nginx.org/packages/ubuntu/ has disappeared. It returns 404 although the URL is documented as official at http://nginx.org/en/linux_packages.html#Ubuntu

Any chance of bringing it back?

Thanks
Marcin Gozdalik
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thresh at nginx.com  Tue Dec  8 14:16:35 2020
From: thresh at nginx.com (Konstantin Pavlov)
Date: Tue, 8 Dec 2020 17:16:35 +0300
Subject: Ubuntu repo disappeared
In-Reply-To: 
References: 
Message-ID: <266c30f8-7985-e329-b55e-570a7e35c554@nginx.com>

Hi Marcin,

08.12.2020 17:09, Marcin Gozdalik wrote:
> Hello
>
> It seems that http://nginx.org/packages/ubuntu/ has disappeared. It returns 404 although the URL is documented as official at http://nginx.org/en/linux_packages.html#Ubuntu
>
> Any chance of bringing it back?

Thanks for the notification - indeed, we've been doing some maintenance work on mirrors and those got moved away. They're now restored, can you please check if they work fine on your side?

--
Konstantin Pavlov
https://www.nginx.com/

From gozdal at gmail.com  Tue Dec  8 14:18:35 2020
From: gozdal at gmail.com (Marcin Gozdalik)
Date: Tue, 8 Dec 2020 14:18:35 +0000
Subject: Ubuntu repo disappeared
In-Reply-To: <266c30f8-7985-e329-b55e-570a7e35c554@nginx.com>
References: <266c30f8-7985-e329-b55e-570a7e35c554@nginx.com>
Message-ID: 

Yes, everything works fine.
Thanks for a quick fix!

wt., 8 gru 2020 o 14:16 Konstantin Pavlov napisał(a):
> Hi Marcin,
>
> 08.12.2020 17:09, Marcin Gozdalik wrote:
> > Hello
> >
> > It seems that http://nginx.org/packages/ubuntu/ has disappeared. It returns 404 although the URL is documented as official at http://nginx.org/en/linux_packages.html#Ubuntu
> >
> > Any chance of bringing it back?
>
> Thanks for the notification - indeed, we've been doing some maintenance work on mirrors and those got moved away. They're now restored, can you please check if they work fine on your side?
>
> --
> Konstantin Pavlov
> https://www.nginx.com/

--
Marcin Gozdalik
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nmilas at noa.gr  Wed Dec  9 21:34:38 2020
From: nmilas at noa.gr (Nikolaos Milas)
Date: Wed, 9 Dec 2020 23:34:38 +0200
Subject: Nginx not loading different certs on two hosts
Message-ID: 

Hello,

On a CentOS 7 with nginx-1.18.0 I have configured two vhosts, as follows:

First one:

server {

    listen [::]:80 ipv6only=off;

    listen 443 ssl http2 default deferred;
    listen [::]:443 ssl http2 default deferred;

    server_name site1.world.example.com;

    ssl_certificate     /etc/pki/tls/certs/star_world.crt;
    ssl_certificate_key /etc/pki/tls/private/star_world.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED';
    ssl_prefer_server_ciphers on;

    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    ssl_dhparam /etc/pki/tls/certs/dhparam.pem;
    ...

and the second:

server {
    listen [::]:80;
    listen [::]:443 ssl;
    server_name site2.local.world.example.com;

    ssl_certificate     /etc/pki/tls/certs/star_local_world.cer;
    ssl_certificate_key /etc/pki/tls/private/star_local_world.key;

    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED';
    ssl_prefer_server_ciphers on;

    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    ssl_dhparam /etc/pki/tls/certs/dhparam.pem;
    ...

However, while the first one works correctly, the second one is clearly using the SSL certs of the first vhost (and thus it produces a Risk warning due to the mismatch between name and cert) and not the ones configured in its own config (the second).

(I confirmed that SNI support is enabled.)

What am I doing wrong? (Obviously I am a very basic nginx user.) How shall I make the second vhost load/use its own ssl configuration correctly?

Finally, what is the best way to successfully listen (i.e. the suggested way to configure the "listen" directives) to 80 and 443 ports on both IPv4 and IPv6 on all hosts (each and every one of them)?

Thanks in advance!

Cheers,
Nick

From mdounin at mdounin.ru  Thu Dec 10 14:42:16 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 10 Dec 2020 17:42:16 +0300
Subject: Nginx not loading different certs on two hosts
In-Reply-To: 
References: 
Message-ID: <20201210144216.GB1147@mdounin.ru>

Hello!
On Wed, Dec 09, 2020 at 11:34:38PM +0200, Nikolaos Milas wrote:

> Hello,
>
> On a CentOS 7 with nginx-1.18.0 I have configured two vhosts, as follows:
>
> First one:
>
> server {
>
>     listen [::]:80 ipv6only=off;
>
>     listen 443 ssl http2 default deferred;
>     listen [::]:443 ssl http2 default deferred;
>
>     server_name site1.world.example.com;
>
>     ssl_certificate     /etc/pki/tls/certs/star_world.crt;
>     ssl_certificate_key /etc/pki/tls/private/star_world.key;
>
>     ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
>     ssl_ciphers 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED';
>     ssl_prefer_server_ciphers on;
>
>     ssl_session_cache shared:SSL:50m;
>     ssl_session_timeout 1d;
>     ssl_session_tickets off;
>
>     ssl_dhparam /etc/pki/tls/certs/dhparam.pem;
>     ...
>
> and the second:
>
> server {
>     listen [::]:80;
>     listen [::]:443 ssl;
>     server_name site2.local.world.example.com;
>
>     ssl_certificate     /etc/pki/tls/certs/star_local_world.cer;
>     ssl_certificate_key /etc/pki/tls/private/star_local_world.key;
>
>     ssl_protocols TLSv1.1 TLSv1.2;
>     ssl_ciphers 'EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA256:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EDH+aRSA+AESGCM:EDH+aRSA+SHA256:EDH+aRSA:EECDH:!aNULL:!eNULL:!MEDIUM:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SEED';
>     ssl_prefer_server_ciphers on;
>
>     ssl_session_cache shared:SSL:50m;
>     ssl_session_timeout 1d;
>     ssl_session_tickets off;
>
>     ssl_dhparam /etc/pki/tls/certs/dhparam.pem;
>     ...
>
> However, while the first one works correctly, the second one is clearly using the SSL certs of the first vhost (and thus it produces a Risk warning due to the mismatch between name and cert) and not the ones configured in its own config (the second).
>
> (I confirmed that SNI support is enabled.)
>
> What am I doing wrong? (Obviously I am a very basic nginx user.)

How do you test it? Note well that the second vhost is only available on port 443 via IPv6.

> Finally, what is the best way to successfully listen (i.e. the suggested way to configure the "listen" directives) to 80 and 443 ports on both IPv4 and IPv6 on all hosts (each and every one of them)?

The recommended approach is to list all relevant "listen" directives in all relevant servers. That is, for 80 and 443 ports on both IPv4 and IPv6 you have to use (assuming no "ipv6only=off"):

    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

If this looks too complex, consider using an include with all these listen directives (http://nginx.org/r/include). Note though that using includes might introduce additional configuration errors by hiding parts of the configuration, so I usually recommend to refrain from using includes (except maybe a few standard ones, such as mime.types) and use a single self-consistent nginx.conf instead.
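As a small illustration of the include variant described above (the file name and path are hypothetical):

    # /etc/nginx/listen-both.conf
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;

and then, in every server block:

    server {
        include /etc/nginx/listen-both.conf;
        server_name example.com;
        ...
    }

One caveat: parameters such as "default_server" (or "default") may appear on only one server per listen socket, so they cannot live in a shared include.
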
--
Maxim Dounin
http://mdounin.ru/

From singhpriyansh51001 at gmail.com  Thu Dec 10 16:58:58 2020
From: singhpriyansh51001 at gmail.com (Priyansh Singh)
Date: Thu, 10 Dec 2020 22:28:58 +0530
Subject: Video Streaming using Nginx and Django
Message-ID: 

Hi, recently I watched this tutorial:
https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/
and now I want to integrate Django and the nginx RTMP server, to check whether the user is authenticated or not before starting the stream and each time a video response is sent.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
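A hedged sketch of the usual pattern for this with the third-party nginx-rtmp module (the Django URL and ports are made up): the module calls an HTTP endpoint and allows the stream only when it answers 2xx, so the authentication check can live in a Django view.

    rtmp {
        server {
            listen 1935;

            application live {
                live on;
                # ask a Django view whether this client may publish/play;
                # a non-2xx response rejects the stream
                on_publish http://127.0.0.1:8000/api/stream/auth/;
                on_play    http://127.0.0.1:8000/api/stream/auth/;
            }
        }
    }
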
From praveenssit at gmail.com  Fri Dec 11 08:06:59 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Fri, 11 Dec 2020 13:36:59 +0530
Subject: Dashboard
Message-ID: 

Hello,

I have configured round-robin load balancing in the nginx conf and the requests are being served. I would like to know if there are any useful tools to capture the requests, so we can see, in a dashboard or a report, the number of requests served by the upstream servers. For example, I have 3 servers defined under upstream and if I'm hitting 10K requests, is there any way to see which upstream server served how many of those 10K requests? Thanks.

--
*Regards,*
*Praveen*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From praveenssit at gmail.com  Fri Dec 11 08:16:25 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Fri, 11 Dec 2020 13:46:25 +0530
Subject: Nginx HA
In-Reply-To: 
References: 
Message-ID: 

Hello,

Thanks for the response. Can we achieve layer 7 load balancing by using this approach?

On Tue, Dec 8, 2020 at 7:10 PM aT wrote:
> We are using the following architecture to maintain HA for Nginx:
>
> Machine1 { Keepalived -> LVS } -> Nginx
> Machine2 { Keepalived -> LVS } -> Nginx
>
> VIP - managed by Keepalived
> LVS with Keepalived in HA, as layer 4 load balancer
> Nginx on multiple machines
>
> This might help:
> https://programming.vip/docs/setup-of-linux-high-availability-lvs-load-balancing-cluster-keepalived-lvs-dr.html
>
> On Tue, Dec 8, 2020 at 4:15 PM Praveen Kumar K S wrote:
>> Hello,
>>
>> I'm trying to achieve HA and was going through a few forums that explained the same using pacemaker and keepalived. I would like to know the best practices and proven methods to achieve the same. Thanks!
>>
>> --
>> *Regards,*
>> *Praveen*
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
> --
> Syed Atif Ali
> Desk: 971 4 4493131
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
*Regards,*
*K S Praveen Kumar
M: +91-9986855625*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From praveenssit at gmail.com  Fri Dec 11 08:22:02 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Fri, 11 Dec 2020 13:52:02 +0530
Subject: Dashboard
In-Reply-To: 
References: 
Message-ID: 

By the way, I'm running nginx as docker.

On Fri, Dec 11, 2020 at 1:36 PM Praveen Kumar K S wrote:
> Hello,
>
> I have configured round-robin load balancing in the nginx conf and the requests are being served. I would like to know if there are any useful tools to capture the requests, so we can see, in a dashboard or a report, the number of requests served by the upstream servers. For example, I have 3 servers defined under upstream and if I'm hitting 10K requests, is there any way to see which upstream server served how many of those 10K requests? Thanks.
>
> --
> *Regards,*
> *Praveen*

--
*Regards,*
*K S Praveen Kumar
M: +91-9986855625*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nmilas at noa.gr  Fri Dec 11 11:44:06 2020
From: nmilas at noa.gr (Nikolaos Milas)
Date: Fri, 11 Dec 2020 13:44:06 +0200
Subject: Nginx not loading different certs on two hosts
In-Reply-To: <20201210144216.GB1147@mdounin.ru>
References: <20201210144216.GB1147@mdounin.ru>
Message-ID: 

On 10/12/2020 4:42 μ.μ., Maxim Dounin wrote:
> How do you test it? Note well that the second vhost is only available on port 443 via IPv6.
>> Finally, what is the best way to successfully listen (i.e. the suggested way to configure the "listen" directives) to 80 and 443 ports on both IPv4 and IPv6 on all hosts (each and every one of them)?
> The recommended approach is to list all relevant "listen" directives in all relevant servers.

Hi Maxim,

Thank you for your reply!

I used the listen directives as you suggested on both vhosts and then I retried. After restarting nginx, both vhosts worked fine, both with http and https!

Please note that with the initial config (as I had sent it), the second vhost was in fact responding to IPv4 clients as well (through the use of IPv4-mapped IPv6 addresses). Actually, the second vhost used to work ONLY with http.

When the second site was being accessed via https, it would produce an SSL warning, and by checking the certificate details I could see that it was the one used for the first vhost. If I ignored the security warning and accepted the certificate, it would continue, yet it would not load the expected page specified by the URL but a 404 page.

So, for example, the URL:

http://site2.local.world.example.com/catalog/en

would work fine, but the URL:

https://site2.local.world.example.com/catalog/en

wouldn't load either the correct cert or the page! I couldn't find any associated error in the vhost logs.

I haven't been able to understand the above-described behavior! In any case, everything works fine now!

Thanks again!

All the best,
Nick

From gfrankliu at gmail.com  Sat Dec 12 00:51:16 2020
From: gfrankliu at gmail.com (Frank Liu)
Date: Fri, 11 Dec 2020 16:51:16 -0800
Subject: logging of invalid headers
Message-ID: 

Hi,

If we use ignore_invalid_headers and underscores_in_headers to allow those non-compliant headers, is there a way to log such violations while letting them through?

Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Sat Dec 12 00:54:24 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 12 Dec 2020 03:54:24 +0300
Subject: Nginx not loading different certs on two hosts
In-Reply-To: 
References: <20201210144216.GB1147@mdounin.ru>
Message-ID: <20201212005424.GN1147@mdounin.ru>

Hello!

On Fri, Dec 11, 2020 at 01:44:06PM +0200, Nikolaos Milas wrote:

> On 10/12/2020 4:42 μ.μ., Maxim Dounin wrote:
> > How do you test it? Note well that the second vhost is only available on port 443 via IPv6.
> >> Finally, what is the best way to successfully listen (i.e. the suggested way to configure the "listen" directives) to 80 and 443 ports on both IPv4 and IPv6 on all hosts (each and every one of them)?
> > The recommended approach is to list all relevant "listen" directives in all relevant servers.
>
> Hi Maxim,
>
> Thank you for your reply!
>
> I used the listen directives as you suggested on both vhosts and then I retried. After restarting nginx, both vhosts worked fine, both with http and https!
>
> Please note that with the initial config (as I had sent it), the second vhost was in fact responding to IPv4 clients as well (through the use of IPv4-mapped IPv6 addresses). Actually, the second vhost used to work ONLY with http.
>
> When the second site was being accessed via https, it would produce an SSL warning, and by checking the certificate details I could see that it was the one used for the first vhost.

That's because the second vhost has an IPv6 listening socket on port 80 configured with "ipv6only=off", so it accepted both IPv6 and IPv4 connections. In contrast, the IPv6 socket on port 443 (the one used for https) is _not_ configured with "ipv6only=off", so it only accepts IPv6 connections, but not IPv4. And the separate IPv4 listening socket on port 443 was only configured in the first vhost, but not in the second one. As such, all IPv4 https connections were handled by the first vhost only.

[...]

> I haven't been able to understand the above-described behavior! In any case, everything works fine now!

Glad it works now, and hope the previous behaviour is clear now as well: it is a result of no IPv4 listening socket on port 443 in the second vhost in the original configuration.

--
Maxim Dounin
http://mdounin.ru/

From praveenssit at gmail.com  Sun Dec 13 15:49:05 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Sun, 13 Dec 2020 21:19:05 +0530
Subject: Dashboard
In-Reply-To: 
References: 
Message-ID: 

Hello,

Can someone please guide if this is possible?

On Fri, Dec 11, 2020 at 1:52 PM Praveen Kumar K S wrote:
> By the way, I'm running nginx as docker.
>
> On Fri, Dec 11, 2020 at 1:36 PM Praveen Kumar K S wrote:
>> Hello,
>>
>> I have configured round-robin load balancing in the nginx conf and the requests are being served. I would like to know if there are any useful tools to capture the requests, so we can see, in a dashboard or a report, the number of requests served by the upstream servers. For example, I have 3 servers defined under upstream and if I'm hitting 10K requests, is there any way to see which upstream server served how many of those 10K requests? Thanks.
>>
>> --
>> *Regards,*
>> *Praveen*
>
> --
> *Regards,*
> *K S Praveen Kumar*

--
*Regards,*
*K S Praveen Kumar*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Sun Dec 13 16:20:57 2020
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Sun, 13 Dec 2020 11:20:57 -0500
Subject: Dashboard
In-Reply-To: 
References: 
Message-ID: 

This may work for you;
https://github.com/vozlt/nginx-module-vts

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290190,290203#msg-290203
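For reference, a minimal sketch of wiring that third-party module up (assuming nginx was built with, or dynamically loads, nginx-module-vts; the directive names are the ones that module documents):

    http {
        vhost_traffic_status_zone;

        server {
            ...
            location /status {
                vhost_traffic_status_display;
                vhost_traffic_status_display_format html;
            }
        }
    }

The per-upstream counters it exposes answer exactly the "which upstream server served how many requests" question above.
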
From praveenssit at gmail.com  Sun Dec 13 16:27:26 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Sun, 13 Dec 2020 21:57:26 +0530
Subject: Dashboard
In-Reply-To: 
References: 
Message-ID: 

Hello,

Looks interesting. Let me try it out. Thanks!

On Sun, Dec 13, 2020 at 9:51 PM itpp2012 wrote:
> This may work for you;
> https://github.com/vozlt/nginx-module-vts
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290190,290203#msg-290203
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
*Regards,*
*K S Praveen Kumar
M: +91-9986855625*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From praveenssit at gmail.com  Mon Dec 14 07:15:00 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Mon, 14 Dec 2020 12:45:00 +0530
Subject: nginx: [emerg] host not found in upstream
Message-ID: 

Hello,

I'm facing an issue where, if one of the upstream servers is down, nginx goes into emerg mode with the error in the subject. My requirement is: nginx should go down when all of the upstream servers are down, but when even one of the upstream servers is up, nginx should still serve the requests by proxying them to that one live upstream server. Any help would be appreciated. Below is my conf:

events {
    worker_connections 4096;
}

http {
    error_log /etc/nginx/error_log.log warn;
    client_max_body_size 20m;

    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';

    proxy_cache_path /etc/nginx/cache keys_zone=one:500m max_size=1000m;
    access_log /var/log/nginx/nginx-access.log upstreamlog;

    upstream app.localhost {
        # simple round-robin
        server app-1:9763 fail_timeout=60s;
        server app-2:9763 fail_timeout=60s;
        server app-3:9763 fail_timeout=60s;
    }

    server {
        listen 80;
        server_name app.localhost;

        location / {
            proxy_pass xxxx
        }
    }
}

--
*Regards,*
*K S Praveen Kumar*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org  Mon Dec 14 10:53:36 2020
From: francis at daoine.org (Francis Daly)
Date: Mon, 14 Dec 2020 10:53:36 +0000
Subject: nginx: [emerg] host not found in upstream
In-Reply-To: 
References: 
Message-ID: <20201214105336.GD23032@daoine.org>

On Mon, Dec 14, 2020 at 12:45:00PM +0530, Praveen Kumar K S wrote:

Hi there,

> I'm facing an issue where, if one of the upstream servers is down, nginx goes into emerg mode with the error in the subject.

Generally, nginx does not care if individual IP:ports defined in an "upstream" are accessible or not.

nginx does care that any hostnames used in "upstream" can be resolved at nginx-start-time using the system resolver, and will fail with an error like "host not found in upstream" if one does not resolve.

Are you reporting that nginx stops working when an upstream server is down; or are you reporting that nginx fails to start when an upstream server is down?

And if the latter -- does the hostname that you have configured nginx to talk to, resolve, when the upstream server is down?

> My requirement is: nginx should go down when all of the upstream servers are down, but when even one of the upstream servers is up, nginx should still serve the requests by proxying them to that one live upstream server.

What you describe in that last sentence is the expected behaviour.

If you are not seeing that -- can you provide a fuller description of your setup?

If you have unreliable name resolution, you may be able to change your "upstream" config to use the IP addresses, which would avoid nginx having to try name resolution.
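A hedged sketch of that last suggestion, with made-up addresses in place of the Docker service names:

    upstream app.localhost {
        server 10.0.0.11:9763 fail_timeout=60s;
        server 10.0.0.12:9763 fail_timeout=60s;
        server 10.0.0.13:9763 fail_timeout=60s;
    }

With literal IP:port pairs there is nothing to resolve at startup, and nginx simply marks unreachable peers as down at runtime.
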
Cheers,

f
--
Francis Daly        francis at daoine.org

From praveenssit at gmail.com  Mon Dec 14 13:27:50 2020
From: praveenssit at gmail.com (Praveen Kumar K S)
Date: Mon, 14 Dec 2020 18:57:50 +0530
Subject: nginx: [emerg] host not found in upstream
In-Reply-To: <20201214105336.GD23032@daoine.org>
References: <20201214105336.GD23032@daoine.org>
Message-ID: 

Hello,

1. nginx fails to start when an upstream server is down or not being resolved.
2. I can't use the IP address because I'm running all services in Docker Swarm, so I can only resolve using the service names.
3. Now I get your point. When nginx starts, it should resolve all upstream servers. Else, it will fail to start.

Now, let me explain my issue. Please let me know if this is possible.
1. Today I have 3 servers defined in upstream. Let's say app1,app2,app3
2. Tomorrow I might scale the app by 2 more. Let's say app4,app5
3. Now I want to define that [app4,app5] in my nginx configuration
4. But I thought of defining app1,2,3,4,5 upfront in nginx conf and scaling my app whenever required. In this case, when nginx is unable to resolve app4,5, it should ignore them, and when I scale my app, it should load balance the requests to all 5.

On Mon, Dec 14, 2020 at 4:23 PM Francis Daly wrote:
> On Mon, Dec 14, 2020 at 12:45:00PM +0530, Praveen Kumar K S wrote:
>
> Hi there,
>
> > I'm facing an issue where, if one of the upstream servers is down, nginx goes into emerg mode with the error in the subject.
>
> Generally, nginx does not care if individual IP:ports defined in an "upstream" are accessible or not.
>
> nginx does care that any hostnames used in "upstream" can be resolved at nginx-start-time using the system resolver, and will fail with an error like "host not found in upstream" if one does not resolve.
>
> Are you reporting that nginx stops working when an upstream server is down; or are you reporting that nginx fails to start when an upstream server is down?
>
> And if the latter -- does the hostname that you have configured nginx to talk to, resolve, when the upstream server is down?
>
> > My requirement is: nginx should go down when all of the upstream servers are down, but when even one of the upstream servers is up, nginx should still serve the requests by proxying them to that one live upstream server.
>
> What you describe in that last sentence is the expected behaviour.
>
> If you are not seeing that -- can you provide a fuller description of your setup?
>
> If you have unreliable name resolution, you may be able to change your "upstream" config to use the IP addresses, which would avoid nginx having to try name resolution.
>
> Cheers,
>
> f
> --
> Francis Daly        francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
*Regards,*
*K S Praveen Kumar*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org  Mon Dec 14 21:57:47 2020
From: francis at daoine.org (Francis Daly)
Date: Mon, 14 Dec 2020 21:57:47 +0000
Subject: nginx: [emerg] host not found in upstream
In-Reply-To: 
References: <20201214105336.GD23032@daoine.org>
Message-ID: <20201214215747.GE23032@daoine.org>

On Mon, Dec 14, 2020 at 06:57:50PM +0530, Praveen Kumar K S wrote:

Hi there,

> 1. nginx fails to start when an upstream server is down or not being resolved.

"nginx fails to start when an upstream server is down" is not the expected behaviour.

"nginx fails to start when an upstream server name is not being resolved" is the expected behaviour.

If you have a reproducible case of the first without the second, that will probably be a useful bug report.

> 2. I can't use the IP address because I'm running all services in Docker Swarm, so I can only resolve using the service names.

That's a valid setup for your use case.

Stock-nginx does not work in those circumstances, if the upstream service names do not resolve when nginx starts.

(Also: I think that stock-nginx will not try to re-resolve the names while it is running; so giving it "dummy" information at startup and changing it later, will not work.)

> 3. Now I get your point. When nginx starts, it should resolve all upstream servers. Else, it will fail to start.

Correct.

> Now, let me explain my issue. Please let me know if this is possible.
> 1. Today I have 3 servers defined in upstream. Let's say app1,app2,app3
> 2. Tomorrow I might scale the app by 2 more. Let's say app4,app5
> 3. Now I want to define that [app4,app5] in my nginx configuration
> 4. But I thought of defining app1,2,3,4,5 upfront in nginx conf and scaling my app whenever required. In this case, when nginx is unable to resolve app4,5, it should ignore them, and when I scale my app, it should load balance the requests to all 5.

Step 4 is not a thing that stock-nginx can do today.

You could potentially define your "upstream" to only include the servers that resolve today; and then tomorrow change it to only include the servers that resolve tomorrow, and invite nginx to re-read its config file ("reload" rather than "restart").

Or you could potentially define your "upstream" with all 5 names if you know the IP addresses that they will have when they are running, and let nginx load-balance across whichever services are "up" at each time.

Maybe you can find or write an external module that can get nginx to do what you want?

The documentation for "upstream" is at http://nginx.org/r/upstream

On that page, there are also mentions of some dynamic features that are not available in stock nginx, but which are available in a commercial subscription. Depending on your requirements, that may or may not be a useful path to investigate. The fact that "dynamic configuration" code exists proves that it can be written, which might be inspiration to re-implement it, or to take advantage of what others have already done.

Good luck with it,

f
--
Francis Daly        francis at daoine.org
I noticed on the > internet that some built-in modules can be built as dynamic > (http_xslt_module for example) but it does not work the same way for > http_v2. > > Is there any way to build this module as a dynamic one ? No. -- Maxim Dounin http://mdounin.ru/ From wi2p at hotmail.com Tue Dec 15 14:07:09 2020 From: wi2p at hotmail.com (kev jr) Date: Tue, 15 Dec 2020 14:07:09 +0000 Subject: Implement Digest authentication on Nginx behind a proxy Message-ID: Hi everyone, I try to implement digest authentication on Nginx. The architecture is the following : Server A is the client Server B is the proxy (a API solution which only transmits the request as a proxy) Server C is my Nginx server where I configure the Digest authentification I have the following error, when my client try to connect to my Nginx through the proxy : uri mismatch - does not match request-uri because the client (server A) send the following parameter for the authentication Digest username="client", realm="Test", nonce="xxxxxx", uri="proxyuri", cnonce="xxxx=", nc=xxxx, qop=auth, response="xxx", algorithm=MD5\r\n The client (server A) send the proxyuri, and the Nginx server (server C) waiting for the nginxuri. Do you know which parameter, I need to add in my Nginx configuration to perform the connection ? Or Do you know, if it's possible to implement Digest authentication on Nginx behind a proxy ? For your information, the direct connection Client to Nginx server with Digest authentication works fine. I have the same problem in a Apache configuration. Thanks for your help, -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 15 14:59:47 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2020 17:59:47 +0300 Subject: nginx-1.19.6 Message-ID: <20201215145947.GT1147@mdounin.ru> Changes with nginx 1.19.6 15 Dec 2020 *) Bugfix: "no live upstreams" errors if a "server" inside "upstream" block was marked as "down". *) Bugfix: a segmentation fault might occur in a worker process if HTTPS was used; the bug had appeared in 1.19.5. *) Bugfix: nginx returned the 400 response on requests like "GET http://example.com?args HTTP/1.0". *) Bugfix: in the ngx_http_flv_module and ngx_http_mp4_module. Thanks to Chris Newton. -- Maxim Dounin http://nginx.org/ From nmilas at noa.gr Tue Dec 15 17:01:26 2020 From: nmilas at noa.gr (Nikolaos Milas) Date: Tue, 15 Dec 2020 19:01:26 +0200 Subject: Nginx not loading different certs on two hosts In-Reply-To: <20201212005424.GN1147@mdounin.ru> References: <20201210144216.GB1147@mdounin.ru> <20201212005424.GN1147@mdounin.ru> Message-ID: <0f14555f-6555-fe4c-2738-a7502e7c272f@noa.gr> On 12/12/2020 2:54 ?.?., Maxim Dounin wrote: > Glad it works now, and hope the previous behaviour is clear now as > well: it is a result of no IPv4 listening socket on port 443 in > the second vhost in the original configuration. Thank you Maxim, I appreciate your detailed explanation and all your efforts! All the best, Nick From luciano at vespaperitivo.it Tue Dec 15 17:17:13 2020 From: luciano at vespaperitivo.it (Luciano Mannucci) Date: Tue, 15 Dec 2020 18:17:13 +0100 Subject: Nginx not reloading after SIGHUP in freebsd Message-ID: <4CwQ001mXLz1ftWl@baobab.bilink.it> Hello all! If I issue a "kill -s HUP" from root to the pid I find on my freeBSD machine it does'nt reload the configuration. If I issue a "service nginx reload" it does. Is it normal? 
Here is the output of my "pkg info nginx" which shoud carry all the relevant informations: nginx-devel-1.19.3_3 Robust and small WWW server py37-certbot-nginx-1.8.0 NGINX plugin for Certbot root at brunocover:~ # pkg info nginx-devel nginx-devel-1.19.3_3 Name : nginx-devel Version : 1.19.3_3 Installed on : Wed Nov 11 20:21:34 2020 CET Origin : www/nginx-devel Architecture : FreeBSD:11:i386 Prefix : /usr/local Categories : www Licenses : BSD2CLAUSE Maintainer : osa at FreeBSD.org WWW : https://nginx.org/ Comment : Robust and small WWW server Options : AJP : off ARRAYVAR : off AWS_AUTH : off BROTLI : off CACHE_PURGE : off CLOJURE : off COOKIE_FLAG : off CT : off DEBUG : off DEBUGLOG : off DEVEL_KIT : off DRIZZLE : off DSO : on DYNAMIC_HC : off DYNAMIC_UPSTREAM: off ECHO : off ENCRYPTSESSION : off FILE_AIO : on FORMINPUT : off GOOGLE_PERFTOOLS: off GRIDFS : off GSSAPI_HEIMDAL : off GSSAPI_MIT : off HEADERS_MORE : off HTTP : on HTTPV2 : on HTTP_ACCEPT_LANGUAGE: off HTTP_ADDITION : on HTTP_AUTH_DIGEST: off HTTP_AUTH_KRB5 : off HTTP_AUTH_LDAP : off HTTP_AUTH_PAM : off HTTP_AUTH_REQ : on HTTP_CACHE : on HTTP_DAV : on HTTP_DAV_EXT : off HTTP_DEGRADATION: off HTTP_EVAL : off HTTP_FANCYINDEX: off HTTP_FLV : on HTTP_FOOTER : off HTTP_GEOIP2 : off HTTP_GUNZIP_FILTER: on HTTP_GZIP_STATIC: on HTTP_IMAGE_FILTER: off HTTP_IP2LOCATION: off HTTP_IP2PROXY : off HTTP_JSON_STATUS: off HTTP_MOGILEFS : off HTTP_MP4 : on HTTP_MP4_H264 : off HTTP_NOTICE : off HTTP_PERL : off HTTP_PUSH : off HTTP_PUSH_STREAM: off HTTP_RANDOM_INDEX: on HTTP_REALIP : on HTTP_REDIS : off HTTP_RESPONSE : off HTTP_REWRITE : on HTTP_SECURE_LINK: on HTTP_SLICE : on HTTP_SLICE_AHEAD: off HTTP_SSL : on HTTP_STATUS : on HTTP_SUB : on HTTP_SUBS_FILTER: off HTTP_TARANTOOL : off HTTP_UPLOAD : off HTTP_UPLOAD_PROGRESS: off HTTP_UPSTREAM_CHECK: off HTTP_UPSTREAM_FAIR: off HTTP_UPSTREAM_STICKY: off HTTP_VIDEO_THUMBEXTRACTOR: off HTTP_XSLT : off HTTP_ZIP : off ICONV : off IPV6 : on LET : off LINK : off LUA : off MAIL : on MAIL_IMAP : off MAIL_POP3 : off MAIL_SMTP : off MAIL_SSL : on MEMC : off MODSECURITY3 : off NAXSI : off NJS : off OPENTRACING : off PASSENGER : off POSTGRES : off RDS_CSV : off RDS_JSON : off REDIS2 : off RTMP : off SET_MISC : off SFLOW : off SHIBBOLETH : off SLOWFS_CACHE : off SMALL_LIGHT : off SRCACHE : off STREAM : on STREAM_REALIP : on STREAM_SSL : on STREAM_SSL_PREREAD: on THREADS : on VOD : off VTS : off WEBSOCKIFY : off WWW : on XSS : off Shared Libs required: libpcre.so.1 Annotations : FreeBSD_version: 1104001 cpe : cpe:2.3:a:nginx:nginx:1.19.3:::::freebsd11:x86:3 repo_type : binary repository : FreeBSD Flat size : 1.19MiB Cheers && thanks in advance, Luciano. -- /"\ /Via A. Salaino, 7 - 20144 Milano (Italy) \ / ASCII RIBBON CAMPAIGN / PHONE : +39 02485781 FAX: +39 0248028247 X AGAINST HTML MAIL / E-MAIL: posthamster at sublink.sublink.ORG / \ AND POSTINGS / WWW: http://www.lesassaie.IT/ From edigarov at qarea.com Tue Dec 15 17:42:00 2020 From: edigarov at qarea.com (Gregory Edigarov) Date: Tue, 15 Dec 2020 19:42:00 +0200 Subject: help need (convert vhost to location) Message-ID: Hello everybody, I have this server section: server { ??? server_name postmaster.example.com; ??? listen 80; ??? access_log?? /var/log/nginx/vexim-access.log; ??? error_log??? /var/log/nginx/vexim-error.log; ??? root /var/www/vexim/vexim; ??? index index.php index.htm index.html; ??? location / { ??????? try_files $uri $uri/ /index.php; ??? } ??? location ~* \.php$ { ??????? fastcgi_split_path_info ^(.+?\.php)(/.*)$; ??????? 
if (!-f $document_root$fastcgi_script_name) {return 404;} ??????? fastcgi_pass? unix:/run/php/php7.3-fpm-postmaster.sock; ??????? fastcgi_index index.php; ??????? include fastcgi_params; ??????? fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; ??? } } this is working correctly. ? now I need to convert this virtual server to location. i.e. to be called from postmaster.example.com/control/ how could this be achieved? thank you. -- With best regards, ???? Gregory Edigarov From praveenssit at gmail.com Wed Dec 16 13:05:49 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Wed, 16 Dec 2020 18:35:49 +0530 Subject: nginx: [emerg] host not found in upstream In-Reply-To: <20201214215747.GE23032@daoine.org> References: <20201214105336.GD23032@daoine.org> <20201214215747.GE23032@daoine.org> Message-ID: Hello, Thanks for the explanation. I will look into the options what we have at this point. On Tue, Dec 15, 2020 at 3:27 AM Francis Daly wrote: > On Mon, Dec 14, 2020 at 06:57:50PM +0530, Praveen Kumar K S wrote: > > Hi there, > > > 1. nginx fails to start when an upstream server is down or not being > > resolved. > > "nginx fails to start when an upstream server is down" is not the > expected behaviour. > > "nginx fails to start when an upstream server name is not being resolved" > is the expected behaviour. > > If you have a reproducible case of the first without the second, that > will probably be a useful bug report. > > > 2. I can't use the ipaddress because I'm running all services in docker > > swarm. So I can only resolve using the service names. > > That's a valid setup for your use case. > > Stock-nginx does not work in those circumstances, if the upstream service > names do not resolve when nginx starts. > > (Also: I think that stock-nginx will not try to re-resolve the names > while it is running; so giving it "dummy" information at startup and > changing it later, will not work.) > > > 3. Now I get your point. When nginx starts, it should resolve all > upstream > > servers. Else, it will fail to start. > > Correct. > > > Now, let me explain my issue. Please let me know if this is possible. > > 1. Today I have 3 servers defined in upstream. Lets say app1,app2,app3 > > 2. Tomorrow I might scale the app by 2 more. Lets say app4,app5 > > 3. Now I want to define that [app4,app5] in my nginx configuration > > 4. But I thought of defining app1,2,3,4,5 upfront in nginx conf and scale > > my app whenever required. In this case, when nginx is unable to resolve > > app4,5, it should ignore and when I scale my app, it should load balance > > the requests to all 5. > > Step 4 is not a thing that stock-nginx can do today. > > You could potentially define your "upstream" to only include the servers > that resolve today; and then tomorrow change it to only include the > servers that resolve tomorrow, and invite nginx to re-read its config > file ("reload" rather than "restart"). > > Or you could potentially define your "upstream" with all 5 names if you > know the IP addresses that they will have when they are running, and > let nginx load-balance across whichever services are "up" at each time. > > > Maybe you can find or write an external module that can get nginx to do > what you want? > > The documentation for "upstream" is at http://nginx.org/r/upstream > > On that page, there are also mentions of some dynamic features that are > not available in stock nginx, but which are available in a commercial > subscription. 
Depending on your requirements, that may or may not be > a useful path to investigate. The fact that "dynamic configuration" > code exists proves that it can be written, which might be inspiration > to re-implement it, or to take advantage of what others have already done. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Regards,* *K S Praveen Kumar* -------------- next part -------------- An HTML attachment was scrubbed... URL: From st.gabrielli at libero.it Wed Dec 16 14:34:38 2020 From: st.gabrielli at libero.it (st.gabrielli at libero.it) Date: Wed, 16 Dec 2020 15:34:38 +0100 (CET) Subject: Thread Pool HTTP POST issue Message-ID: <765865560.234383.1608129278848@mail1.libero.it> Hi,we are doing some tests using the nginx thread pool feature. The scenario is the following:1) the worker process register with ngx_http_read_client_request_body fuction a handler for the request. We can call it "post_handler" 2) the worker process receive a POST bid request. The payload is a json that contains an id field (unique among requests that will be used in the response generated by our web service) 3) the worker process in the "post_handler" create the task context and launch the task 4) the request payload is read in the task code that is executed in the thread pool 5) the blocking code is executed in the task code that is executed in the thread pool 6) task completion handler (worker process - main thread - outside thread pool): the ngx_http_send_header(r) ngx_http_output_filter(r, out); ngx_http_finalize_request(r, r->headers_out.status); code is executed in the task completion handler Now some details: * in point 3: -- the "r" (ngx_http_request_t) pointer and "b" (ngx_buf_t *) the response buffer pointer are saved as fields in the task context (a simple c structure used as the context of the task) -- just before the code for the task launch (call to "ngx_thread_task_post" function) we put the following lines of code: r->main->blocked++; r->aio = 1;* in point 5, after the blocking code execution and after the generated response string has been saved in a task context field, -- the buffer output chain is allocated into the heap and stored into a task context field: //buffer chain needed to put response buffer in ngx_chain_t *out = ngx_alloc_chain_link(r->pool); -- the buffers are filled in (r, b were stored into the task context before task launch - response is a local var where the response saved into the task context has been copied): r->headers_out.content_type_len = sizeof("application/json") - 1; r->headers_out.content_type.data = (u_char *) "application/json"; r->headers_out.content_length_n = strlen(response); /* adjust the pointers of the buffer */ b->pos = (u_char *)response; b->last = (u_char *)response + strlen(response); b->memory = 1; /* this buffer is in memory */ b->last_buf = 1; /* this is the last buffer in the buffer chain */ (*out).buf = b; (*out).next = NULL;* in point 6, just before the instructions described above (for point 6) we put the following lines of code: r->main->blocked--; r->aio = 0; Issue we are trying to fix:- in point 6 comparing the response string saved into the task context with the buffer chain "out" var content (the comparison is done just before calling "ngx_http_output_filter" and "ngx_http_finalize_request") we discovered that about the 40% of the times the 2 values are different for two 
reasons: - "out" content is an empty string - "out" content is a response with id different than the one of the response string in the task context (that is the one used to fill in "out" buffer chain in point 5)We think there is probably an error either about what it is possible to put into the task context structure (maybe it is not possible to manipulate "r", "b", "out" outside the main thread...) or the right usage and position of "r->main->blocked" counter and "r->aio" flag. Any hints about how to solve this issue?Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Dec 16 17:20:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Dec 2020 20:20:59 +0300 Subject: Thread Pool HTTP POST issue In-Reply-To: <765865560.234383.1608129278848@mail1.libero.it> References: <765865560.234383.1608129278848@mail1.libero.it> Message-ID: <20201216172059.GZ1147@mdounin.ru> Hello! On Wed, Dec 16, 2020 at 03:34:38PM +0100, st.gabrielli at libero.it wrote: > Hi,we are doing some tests using the nginx thread pool feature. The scenario is the following:1) the worker process register with ngx_http_read_client_request_body fuction a handler for the request. We can call it "post_handler" > 2) the worker process receive a POST bid request. The payload is a json that contains an id field (unique among requests that will be used in the response generated by our web service) > 3) the worker process in the "post_handler" create the task context and launch the task > 4) the request payload is read in the task code that is executed in the thread pool > 5) the blocking code is executed in the task code that is executed in the thread pool > 6) task completion handler (worker process - main thread - outside thread pool): > the ngx_http_send_header(r) > ngx_http_output_filter(r, out); ngx_http_finalize_request(r, r->headers_out.status); code is executed in the task completion handler Now some details: > * in point 3: > -- the "r" (ngx_http_request_t) pointer and "b" (ngx_buf_t *) the response buffer pointer are saved as fields in the task context (a simple c structure used as the context of the task) > -- just before the code for the task launch (call to "ngx_thread_task_post" function) we put the following lines of code: > r->main->blocked++; > r->aio = 1;* in point 5, after the blocking code execution and after the generated response string has been saved in a task context field, > -- the buffer output chain is allocated into the heap and stored into a task context field: > //buffer chain needed to put response buffer in > ngx_chain_t *out = ngx_alloc_chain_link(r->pool); -- the buffers are filled in (r, b were stored into the task context before task launch - response is a local var where the response saved into the task context has been copied): > r->headers_out.content_type_len = sizeof("application/json") - 1; > r->headers_out.content_type.data = (u_char *) "application/json"; r->headers_out.content_length_n = strlen(response); /* adjust the pointers of the buffer */ > b->pos = (u_char *)response; > b->last = (u_char *)response + strlen(response); b->memory = 1; /* this buffer is in memory */ > b->last_buf = 1; /* this is the last buffer in the buffer chain */ > (*out).buf = b; > (*out).next = NULL;* in point 6, just before the instructions described above (for point 6) we put the following lines of code: > r->main->blocked--; > r->aio = 0; Issue we are trying to fix:- in point 6 comparing the response string saved into the task context 
with the buffer chain "out" var content (the comparison is done just before calling "ngx_http_output_filter" and "ngx_http_finalize_request") we discovered that about the 40% of the times the 2 values are different for two reasons: > - "out" content is an empty string > - "out" content is a response with id different than the one of the response string in the task context (that is the one used to fill in "out" buffer chain in point 5)We think there is probably an error either about what it is possible to put into the task context structure (maybe it is not possible to manipulate "r", "b", "out" outside the main thread...) or the right usage and position of "r->main->blocked" counter and "r->aio" flag. > Any hints about how to solve this issue?Thanks. Thread pools are designed to execute short self-contained blocking code like system calls for reading a file, and the code executed in a thread pool is expected to be thread-safe. In particular, this means that it cannot call most of nginx functions or access nginx structures. For example, accessing the request structure might lead to undefined results if it is modified by the main thread at the same time. If I understand the above text correctly, you are doing allocations from the request pool and various modifications of the request structure within a thread. This is not going to work, as these operations are not tread-safe, see above. You should do relevant operations from the main thread instead. That is, you should either pre-allocate appropriate buffers for your threaded code to return results in (for example, nginx's own ngx_thread_read() reads the data into pre-allocated buffer), or provide the result within your own buffers (for example, allocated with thread-safe malloc() within your thread code) and free these buffers afterwards as appropriate. -- Maxim Dounin http://mdounin.ru/ From lemur117 at protonmail.com Thu Dec 17 05:01:54 2020 From: lemur117 at protonmail.com (Jon Carmicheal) Date: Thu, 17 Dec 2020 05:01:54 +0000 Subject: How to adjust HPACK dynamic table? Message-ID: I would like to disable the caching of headers in the dynamic table of the HTTP/2 HPACK compression algorithm described in RFC 7541. I have defined my nginx server with listen 8080 http2 ; and I've confirmed that the HPACK algorithm is working as expected with Huffman encoding, static header table indexing, and dynamic header table indexing. But I haven't been able to disable the dynamic table. RFC 7541 mentions in "Section 4.2. Maximum Table Size" the ability of an HTTP/2 node to "clear entries from the dynamic table by setting a maximum size of 0, which can subsequently be restored." Is that a feature supported by nginx? Can I disable the dynamic table entirely so that no header fields are cached? And can I arbitrarily send a flush request so that all entries are evicted and then the dynamic table size is restored? If so, how? I've been trying to play with "http2_max_field_size" and "http2_max_header_size" in the server configuration file as described in https://nginx.org/en/docs/http/ngx_http_v2_module.html. But I don't think those are the right parameters. When I set either of them to zero, it makes the server return an error when a header is sent. Thanks for any pointers you can give me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Dec 17 18:02:09 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2020 21:02:09 +0300 Subject: How to adjust HPACK dynamic table? 
From lemur117 at protonmail.com  Thu Dec 17 05:01:54 2020
From: lemur117 at protonmail.com (Jon Carmicheal)
Date: Thu, 17 Dec 2020 05:01:54 +0000
Subject: How to adjust HPACK dynamic table?
Message-ID: 

I would like to disable the caching of headers in the dynamic table of the HTTP/2 HPACK compression algorithm described in RFC 7541. I have defined my nginx server with

    listen 8080 http2;

and I've confirmed that the HPACK algorithm is working as expected with Huffman encoding, static header table indexing, and dynamic header table indexing. But I haven't been able to disable the dynamic table.

RFC 7541 mentions in "Section 4.2. Maximum Table Size" the ability of an HTTP/2 node to "clear entries from the dynamic table by setting a maximum size of 0, which can subsequently be restored." Is that a feature supported by nginx? Can I disable the dynamic table entirely so that no header fields are cached? And can I arbitrarily send a flush request so that all entries are evicted and then the dynamic table size is restored? If so, how?

I've been trying to play with "http2_max_field_size" and "http2_max_header_size" in the server configuration file as described in https://nginx.org/en/docs/http/ngx_http_v2_module.html. But I don't think those are the right parameters. When I set either of them to zero, it makes the server return an error when a header is sent.

Thanks for any pointers you can give me.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Thu Dec 17 18:02:09 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 17 Dec 2020 21:02:09 +0300
Subject: How to adjust HPACK dynamic table?
In-Reply-To: 
References: 
Message-ID: <20201217180209.GC1147@mdounin.ru>

Hello!

On Thu, Dec 17, 2020 at 05:01:54AM +0000, Jon Carmicheal wrote:

> I would like to disable the caching of headers in the dynamic table of the HTTP/2 HPACK compression algorithm described in RFC 7541. I have defined my nginx server with
>
>     listen 8080 http2;
>
> and I've confirmed that the HPACK algorithm is working as expected with Huffman encoding, static header table indexing, and dynamic header table indexing. But I haven't been able to disable the dynamic table.

You cannot disable dynamic table support in nginx. As an HPACK decoder, nginx supports dynamic table of up to 4096 octets (the default for SETTINGS_HEADER_TABLE_SIZE in HTTP/2).

> RFC 7541 mentions in "Section 4.2. Maximum Table Size" the ability of an HTTP/2 node to "clear entries from the dynamic table by setting a maximum size of 0, which can subsequently be restored." Is that a feature supported by nginx? Can I disable the dynamic table entirely so that no header fields are cached? And can I arbitrarily send a flush request so that all entries are evicted and then the dynamic table size is restored? If so, how?

Yes, it is supported. The "how" is specified in the section "6.3. Dynamic Table Size Update" of the same RFC (https://tools.ietf.org/html/rfc7541#section-6.3).

> I've been trying to play with "http2_max_field_size" and "http2_max_header_size" in the server configuration file as described in https://nginx.org/en/docs/http/ngx_http_v2_module.html. But I don't think those are the right parameters. When I set either of them to zero, it makes the server return an error when a header is sent.

These are unrelated parameters. They set size limits on compressed individual header fields and total length of all uncompressed headers, respectively, so nginx will reject attempts to use larger headers.

--
Maxim Dounin
http://mdounin.ru/
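As a worked illustration of the wire format that RFC section describes (an encoder-side detail, not an nginx configuration knob): a dynamic table size update is a single HPACK instruction with the 3-bit prefix 001 followed by a 5-bit prefix integer. An encoder that wants to flush the decoder's table emits the single octet

    0x20            (001 00000 -> new maximum size 0)

and can restore the default 4096 afterwards with

    0x3F 0xE1 0x1F  (001 11111, then 4096 - 31 = 4065 in continuation octets)

both placed at the start of a header block. This clears the peer's dynamic table; it does not disable dynamic table support itself.
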
It works perfectly with this configuration, with the proxy_pass target
hard coded:

```
server {
    server_name my.server.com;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot

    location / {
        proxy_pass https://some.other.server.com;
        add_header Cache-Control "public, max-age=3";

        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'X-Frame-Options' "ALLOW FROM $http_origin";
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Vary' 'Origin';
    }
}
```

testing with:

> curl -X POST https://my.server.com -H "Content-Type: application/json" -d "{\"id\": \"123\"}"

But if I swap out the proxy_pass target with a variable, I'm getting a
502 Bad Gateway.

```
server {
    server_name my.server.com;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot

    location / {
        proxy_pass $args;
        add_header Cache-Control "public, max-age=3";

        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'X-Frame-Options' "ALLOW FROM $http_origin";
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Vary' 'Origin';
    }
}
```

Testing with:

> curl -X POST https://my.server.com?https://some.other.server.com -H "Content-Type: application/json" -d "{\"id\": \"123\"}"

I'm writing the $args out to the logs:

    log_format main 'ARGS: >>$args<<';
    access_log /var/log/nginx/access.log main;

and it looks fine. "$args" is identical to what I had hard coded, so I
know that "$args" is exactly the URL I want to proxy_pass to.

My location is not a regular expression, and according to the docs [1],
variables in proxy_pass should be fair game?

-----
When variables are used in proxy_pass:

    location /name/ {
        proxy_pass http://127.0.0.1$request_uri;
    }

In this case, if URI is specified in the directive, it is passed to the
server as is, replacing the original request URI.
-----

Any help is much appreciated!

Jeff

[1] https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

From mdounin at mdounin.ru  Fri Dec 18 19:13:03 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 18 Dec 2020 22:13:03 +0300
Subject: proxy_pass with variable
In-Reply-To: 
References: 
Message-ID: <20201218191303.GD1147@mdounin.ru>

Hello!

On Fri, Dec 18, 2020 at 06:37:57PM +0000, Jeffrey Knight wrote:

> Hello!
>
> I'm trying to set up a reverse proxy where my users can pass in a URL
> of the form
>
> https://my.server.com?https://some.other.server.com
>
> and it'll proxy to it.
>
> It works perfectly with this configuration, with the proxy_pass target
> hard coded:
>
> ```
> server {
>     server_name my.server.com;
>
>     listen [::]:443 ssl ipv6only=on; # managed by Certbot
>     listen 443 ssl; # managed by Certbot
>
>     location / {
>         proxy_pass https://some.other.server.com;
>         add_header Cache-Control "public, max-age=3";
>
>         add_header 'Access-Control-Allow-Origin' "$http_origin";
>         add_header 'X-Frame-Options' "ALLOW FROM $http_origin";
>         add_header 'Access-Control-Allow-Credentials' 'true';
>         add_header 'Vary' 'Origin';
>     }
> }
> ```
>
> testing with:
>
> > curl -X POST https://my.server.com -H "Content-Type: application/json" -d "{\"id\": \"123\"}"
>
> But if I swap out the proxy_pass target with a variable, I'm getting a
> 502 Bad Gateway.
> ```
> server {
>     server_name my.server.com;
>
>     listen [::]:443 ssl ipv6only=on; # managed by Certbot
>     listen 443 ssl; # managed by Certbot
>
>     location / {
>         proxy_pass $args;
>         add_header Cache-Control "public, max-age=3";
>
>         add_header 'Access-Control-Allow-Origin' "$http_origin";
>         add_header 'X-Frame-Options' "ALLOW FROM $http_origin";
>         add_header 'Access-Control-Allow-Credentials' 'true';
>         add_header 'Vary' 'Origin';
>     }
> }
> ```
>
> Testing with:
>
> > curl -X POST https://my.server.com?https://some.other.server.com -H "Content-Type: application/json" -d "{\"id\": \"123\"}"

The 502 error returned by nginx implies there is a relevant message in
the error log at the "error" level. What's in the error log?

[...]

> [1] https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

I suspect the relevant quote from this link is:

: Parameter value can contain variables. In this case, if an address
: is specified as a domain name, the name is searched among the
: described server groups, and, if not found, is determined using a
: resolver.

And you don't have a resolver defined in your configuration. But the
error log should know better.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Sat Dec 19 10:04:36 2020
From: nginx-forum at forum.nginx.org (graxlop)
Date: Sat, 19 Dec 2020 05:04:36 -0500
Subject: TLS 1.3 and ssl_reject_handshake
Message-ID: <17b706ba2ff981121201d68e80dac7ef.NginxMailingListEnglish@forum.nginx.org>

Hello,

I'm using nginx 1.19.6 and when enabling "ssl_reject_handshake" in the
top server block, it will disable TLS 1.3 if no certificate is included
in the same server block or in the http block.

    server {
        listen 443 ssl;
        ssl_reject_handshake on;
    }

    server {
        listen 443 http2 ssl;
        server_name test.com;
        root /home/test;

        ssl_certificate ssl/rsa.crt;
        ssl_certificate_key ssl/rsa.key;
    }

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290250,290250#msg-290250

From lemur117 at protonmail.com  Sat Dec 19 19:18:51 2020
From: lemur117 at protonmail.com (Jon Carmicheal)
Date: Sat, 19 Dec 2020 19:18:51 +0000
Subject: How to adjust HPACK dynamic table?
In-Reply-To: 
References: 
Message-ID: <661f4f82-2fdb-8d82-5c87-cede3e2a6fc4@protonmail.com>

On 12/18/20 6:00 AM, nginx-request at nginx.org wrote:
> Re: How to adjust HPACK dynamic table?

Sorry, I'm not yet familiar with how to write a follow-up on the mailing
list, including the inline text, but thank you Maxim for your response to
my inquiry. Please see below for a follow-up question.

On Thu, Dec 17, 2020 at 05:01:54AM +0000, Jon Carmicheal wrote:
> I would like to disable the caching of headers in the dynamic
> table of the HTTP/2 HPACK compression algorithm described in RFC
> 7541. I have defined my nginx server with
>
>     listen 8080 http2;
>
> and I've confirmed that the HPACK algorithm is working as
> expected with Huffman encoding, static header table indexing,
> and dynamic header table indexing. But I haven't been able to
> disable the dynamic table.

You cannot disable dynamic table support in nginx. As an HPACK decoder,
nginx supports a dynamic table of up to 4096 octets (the default for
SETTINGS_HEADER_TABLE_SIZE in HTTP/2).

> RFC 7541 mentions in "Section 4.2. Maximum Table Size" the
> ability of an HTTP/2 node to "clear entries from the dynamic
> table by setting a maximum size of 0, which can subsequently be
> restored." Is that a feature supported by nginx? Can I disable
> the dynamic table entirely so that no header fields are cached?
> And can I arbitrarily send a flush request so that all entries
> are evicted and then the dynamic table size is restored? If so,
> how?

Yes, it is supported. The "how" is specified in the section "6.3.
Dynamic Table Size Update" of the same RFC
(https://tools.ietf.org/html/rfc7541#section-6.3).

How is this accomplished in nginx? Can I configure the nginx server so
that it sets the size of the dynamic table to 0 immediately when an
HTTP/2 session is initiated?

> I've been trying to play with "http2_max_field_size" and
> "http2_max_header_size" in the server configuration file as
> described in
> https://nginx.org/en/docs/http/ngx_http_v2_module.html. But I
> don't think those are the right parameters. When I set either of
> them to zero, it makes the server return an error when a header
> is sent.

These are unrelated parameters. They set size limits on compressed
individual header fields and on the total length of all uncompressed
headers, respectively, so nginx will reject attempts to use larger
headers.

Thanks, this was my assumption.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Mon Dec 21 11:48:54 2020
From: nginx-forum at forum.nginx.org (balu)
Date: Mon, 21 Dec 2020 06:48:54 -0500
Subject: Unable to reverse proxy requests to Nifi running in the backend using client auth mechanism
Message-ID: 

I have configured Nginx as a reverse proxy server for my Nifi application
running in the backend on port 9443. Here goes my nginx conf:

    worker_processes 1;

    events {
        worker_connections 1024;
    }

    http {
        map_hash_bucket_size 128;
        sendfile on;
        large_client_header_buffers 4 64k;

        upstream nifi {
            server cloud-analytics-test2-nifi-a.insights.io:9443;
        }

        server {
            listen 443 ssl;
            #ssl on;
            server_name nifi-test-nginx.insights.np.vocera.io;

            ssl_certificate /etc/nginx/cert1.pem;
            ssl_certificate_key /etc/nginx/privkey1.pem;
            ssl_client_certificate /etc/nginx/nifi-client.pem;
            ssl_verify_client optional_no_ca;
            ssl_verify_depth 2;

            error_log /var/log/nginx/error.log debug;

            proxy_ssl_certificate /etc/nginx/cert1.pem;
            proxy_ssl_certificate_key /etc/nginx/privkey1.pem;
            proxy_ssl_trusted_certificate /etc/nginx/nifi-client.pem;

            location / {
                proxy_pass https://nifi;
                proxy_set_header X-ProxyScheme https;
                proxy_set_header X-ProxyHost nifi-test-nginx.insights.io;
                proxy_set_header X-ProxyPort 443;
                proxy_set_header X-ProxyContextPath /;
                proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>";
                proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
            }
        }
    }

Whenever I try to access Nifi using the Nginx reverse proxy
address/hostname, I get the error below.
```
2020/12/21 11:46:45 [debug] 14165#0: *5 SSL_shutdown: 1
2020/12/21 11:46:45 [debug] 14165#0: *5 reusable connection: 0
2020/12/21 11:46:45 [debug] 14165#0: *5 free: 000055F192862800
2020/12/21 11:46:45 [debug] 14165#0: *5 free: 000055F192801300
2020/12/21 11:46:45 [debug] 14165#0: *5 free: 000055F19280EC50, unused: 8
2020/12/21 11:46:45 [debug] 14165#0: *5 free: 000055F1928596D0, unused: 384
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL handshake handler: 0
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_do_handshake: -1
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_get_error: 2
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL handshake handler: 0
2020/12/21 11:46:45 [debug] 14165#0: *6 verify:0, error:2, depth:1, subject:"/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA", issuer:"/C=US/ST=New Jersey/L=Jersey City/O=The USERTRUST Network/CN=USERTrust RSA Certification Authority"
2020/12/21 11:46:45 [debug] 14165#0: *6 verify:1, error:2, depth:0, subject:"/CN=nifi-admin.insights.io", issuer:"/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA"
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_do_handshake: 1
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD"
2020/12/21 11:46:45 [debug] 14165#0: *6 reusable connection: 1
2020/12/21 11:46:45 [debug] 14165#0: *6 http wait request handler
2020/12/21 11:46:45 [debug] 14165#0: *6 malloc: 000055F192801300:1024
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_read: -1
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_get_error: 2
2020/12/21 11:46:45 [debug] 14165#0: *6 free: 000055F192801300
2020/12/21 11:46:45 [debug] 14165#0: *6 http wait request handler
2020/12/21 11:46:45 [debug] 14165#0: *6 malloc: 000055F192801300:1024
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_read: 570
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_read: -1
2020/12/21 11:46:45 [debug] 14165#0: *6 SSL_get_error: 2
2020/12/21 11:46:45 [debug] 14165#0: *6 reusable connection: 0
2020/12/21 11:46:45 [debug] 14165#0: *6 posix_memalign: 000055F1928687C0:4096 @16
2020/12/21 11:46:45 [debug] 14165#0: *6 http process request line
2020/12/21 11:46:45 [debug] 14165#0: *6 http request line: "GET /favicon.ico HTTP/1.1"
2020/12/21 11:46:45 [debug] 14165#0: *6 http uri: "/favicon.ico"
2020/12/21 11:46:45 [debug] 14165#0: *6 http args: ""
2020/12/21 11:46:45 [debug] 14165#0: *6 http exten: "ico"
2020/12/21 11:46:45 [debug] 14165#0: *6 posix_memalign: 000055F192854110:4096 @16
2020/12/21 11:46:45 [debug] 14165#0: *6 http process request header line
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Host: nifi-test-nginx.insights.io"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Connection: keep-alive"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Sec-Fetch-Site: same-origin"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Sec-Fetch-Mode: no-cors"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Sec-Fetch-Dest: image"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Referer: https://nifi-test-nginx.insights.io/nifi/?processGroupId=root&componentIds=87a087ca-0175-1000-ca56-1d437d733fb0"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Accept-Encoding: gzip, deflate, br"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header: "Accept-Language: en-US,en;q=0.9"
2020/12/21 11:46:45 [debug] 14165#0: *6 http header done
2020/12/21 11:46:45 [info] 14165#0: *6 client SSL certificate verify error: (2:unable to get issuer certificate) while reading client request headers, client: 49.207.211.47, server: nifi-test-nginx.insights.io, request: "GET /favicon.ico HTTP/1.1", host: "nifi-test-nginx.insights.io", referrer: "https://nifi-test-nginx.insights.io/nifi/?processGroupId=root&componentIds=87a087ca-0175-1000-ca56-1d437d733fb0"
2020/12/21 11:46:45 [debug] 14165#0: *6 http finalize request: 495, "/favicon.ico?" a:1, c:1
2020/12/21 11:46:45 [debug] 14165#0: *6 event timer del: 11: 2253744188
2020/12/21 11:46:45 [debug] 14165#0: *6 http special response: 495, "/favicon.ico?"
2020/12/21 11:46:45 [debug] 14165#0: *6 http set discard body
2020/12/21 11:46:45 [debug] 14165#0: *6 HTTP/1.1 400 Bad Request
Server: nginx/1.18.0
Date: Mon, 21 Dec 2020 11:46:45 GMT
Content-Type: text/html
Content-Length: 617
Connection: close
```

Can someone help me fix the above error?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290255,290255#msg-290255

From kenneth.s.brooks at gmail.com  Wed Dec 23 15:27:44 2020
From: kenneth.s.brooks at gmail.com (Kenneth Brooks)
Date: Wed, 23 Dec 2020 10:27:44 -0500
Subject: least_conn not working for me
Message-ID: 

I have a fully working config doing load balancing over 2 upstream
servers. I want to use "least_conn".

When I put least_conn in, it is still doing round robin. I can confirm
that other configs like "weight" and "ip_hash" are working as expected.

Is there some other configuration/setting that also affects whether
least_conn is honored?

Currently using nginx 1.16.1 (trying to get 1.18.0 in house to see if
that helps).

What other info should I provide to help troubleshoot?

Thanks,
Ken

From nginx-forum at forum.nginx.org  Wed Dec 23 15:57:47 2020
From: nginx-forum at forum.nginx.org (kenneth.s.brooks)
Date: Wed, 23 Dec 2020 10:57:47 -0500
Subject: least_conn not working for me
In-Reply-To: 
References: 
Message-ID: <84f14ee76cd87a4d5bcfdaf89b469f3c.NginxMailingListEnglish@forum.nginx.org>

Small update: Moved to 1.18.0 and still seeing the same results.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290285,290287#msg-290287

From mdounin at mdounin.ru  Wed Dec 23 16:39:17 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 Dec 2020 19:39:17 +0300
Subject: TLS 1.3 and ssl_reject_handshake
In-Reply-To: <17b706ba2ff981121201d68e80dac7ef.NginxMailingListEnglish@forum.nginx.org>
References: <17b706ba2ff981121201d68e80dac7ef.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20201223163917.GE1147@mdounin.ru>

Hello!

On Sat, Dec 19, 2020 at 05:04:36AM -0500, graxlop wrote:

> I'm using nginx 1.19.6 and when enabling "ssl_reject_handshake" in the
> top server block, it will disable TLS 1.3 if no certificate is included
> in the same server block or in the http block.
>
>     server {
>         listen 443 ssl;
>         ssl_reject_handshake on;
>     }
>
>     server {
>         listen 443 http2 ssl;
>         server_name test.com;
>         root /home/test;
>
>         ssl_certificate ssl/rsa.crt;
>         ssl_certificate_key ssl/rsa.key;
>     }

This is a bug in OpenSSL. This bug is already fixed and the fix is
expected to be available in the next OpenSSL release.
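As a config sketch, the workaround described in the next paragraph
amounts to something like the following (the certificate files here are
placeholders; any self-signed pair should do):

```
server {
    listen 443 ssl;
    ssl_reject_handshake on;

    # Dummy certificate: never actually served, but keeps the affected
    # OpenSSL versions from incorrectly disabling TLSv1.3.
    # Placeholder paths -- substitute your own throwaway pair.
    ssl_certificate ssl/dummy.crt;
    ssl_certificate_key ssl/dummy.key;
}
```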
Details can be found here:

https://trac.nginx.org/nginx/ticket/2071
https://github.com/openssl/openssl/issues/13291

The simplest workaround is to define a dummy certificate for the server
block with ssl_reject_handshake. This certificate won't be used, but it
will prevent OpenSSL from incorrectly disabling TLSv1.3.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru  Wed Dec 23 17:14:42 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 23 Dec 2020 20:14:42 +0300
Subject: least_conn not working for me
In-Reply-To: 
References: 
Message-ID: <20201223171442.GG1147@mdounin.ru>

Hello!

On Wed, Dec 23, 2020 at 10:27:44AM -0500, Kenneth Brooks wrote:

> I have a fully working config doing load balancing over 2 upstream
> servers. I want to use "least_conn".
>
> When I put least_conn in, it is still doing round robin.
> I can confirm that other configs like "weight" and "ip_hash" are
> working as expected.
>
> Is there some other configuration/setting that also affects whether
> least_conn is honored?

The "least_conn" balancing method is equivalent to round-robin as long
as all configured upstream servers have an equal number of connections
opened. That is, if you are seeing nginx "doing round robin", most
likely this means that there aren't enough active connections for
least_conn to behave differently from round-robin.

Note well that the "number of connections" applies to a single nginx
worker process, and if there is more than one worker, least_conn might
not see all the connections (unless you've configured nginx to share
information about upstream servers between worker processes, see
http://nginx.org/r/zone).

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Wed Dec 23 17:36:16 2020
From: nginx-forum at forum.nginx.org (kenneth.s.brooks)
Date: Wed, 23 Dec 2020 12:36:16 -0500
Subject: least_conn not working for me
In-Reply-To: <20201223171442.GG1147@mdounin.ru>
References: <20201223171442.GG1147@mdounin.ru>
Message-ID: 

Thanks for the response.

I understand what you are saying about the worker processes. We have
only a single worker process.

I have 2 upstream servers.

To validate:
I am sending a request for a LARGE file. I see it hit server1. Server1
is now serving that request for the next couple of minutes.
I send a request for a very tiny file. I see it hit server2. It finishes
(server1 is still processing request #1).
I send a request for a very tiny file. I see it hit server1 (even though
it is still serving request #1 and server2 is not serving anything).

I repeat that over and over, and I'll see all the subsequent requests
being routed to server1, then 2, then 1, then 2. If I submit another
LARGE file request, if the last request went to server2, then now I have
2 LARGE file requests being served by server1.

If I submit more requests, they all continue to be equally distributed
to server1 and server2 (even though server1 has 2 active things it is
serving).

Is there some sort of a 'fudge factor' or threshold? That there has to
be n number of requests that one server is handling more than another
server? I wouldn't think so, but I'm at a loss.
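For reference, the per-worker accounting Maxim describes above can be
avoided with the shared-memory zone he links to (http://nginx.org/r/zone);
a minimal sketch, with illustrative names and addresses:

```
upstream backend {
    zone backend_zone 64k;   # share peer state (incl. connection counts)
                             # across worker processes
    least_conn;
    server 10.0.0.1:8080;    # placeholder addresses
    server 10.0.0.2:8080;
}
```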
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290285,290291#msg-290291

From nginx-forum at forum.nginx.org  Wed Dec 23 18:12:52 2020
From: nginx-forum at forum.nginx.org (kenneth.s.brooks)
Date: Wed, 23 Dec 2020 13:12:52 -0500
Subject: least_conn not working for me
In-Reply-To: <20201223171442.GG1147@mdounin.ru>
References: <20201223171442.GG1147@mdounin.ru>
Message-ID: <71fcaeeb9d7565bf10ac97f6151b3980.NginxMailingListEnglish@forum.nginx.org>

Perhaps another question that might help me debug it. Is there a way to
see active connection counts to upstream servers? I have the status
endpoint enabled, but that just shows me total active connections for
the worker process as a whole, correct?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290285,290292#msg-290292

From peter_booth at me.com  Wed Dec 23 19:06:29 2020
From: peter_booth at me.com (Peter Booth)
Date: Wed, 23 Dec 2020 14:06:29 -0500
Subject: least_conn not working for me
In-Reply-To: <71fcaeeb9d7565bf10ac97f6151b3980.NginxMailingListEnglish@forum.nginx.org>
References: <20201223171442.GG1147@mdounin.ru> <71fcaeeb9d7565bf10ac97f6151b3980.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0EFC06EB-7846-4614-9CF1-8BF571FE5179@me.com>

From a shell on your nginx host you can run something like

    netstat -ant | egrep 'ESTAB'

to see all the open TCP connections. If you run the command with watch,
you will see it update every two seconds, etc.

FWIW, a long time ago I did a bunch of experiments with different load
balancer strategies using both F5 LTM and nginx. This suggested that the
simplest strategy, round-robin, was optimal in most real-world scenarios
with heavy loads.

> On 23 Dec 2020, at 1:12 PM, kenneth.s.brooks wrote:
> Perhaps another question that might help me debug it. Is there a way to
> [...]

From mdounin at mdounin.ru  Wed Dec 23 21:26:10 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 24 Dec 2020 00:26:10 +0300
Subject: least_conn not working for me
In-Reply-To: 
References: <20201223171442.GG1147@mdounin.ru>
Message-ID: <20201223212610.GH1147@mdounin.ru>

Hello!

On Wed, Dec 23, 2020 at 12:36:16PM -0500, kenneth.s.brooks wrote:

> Thanks for the response.
>
> I understand what you are saying about the worker processes. We have
> only a single worker process.
>
> I have 2 upstream servers.
>
> To validate:
> I am sending a request for a LARGE file. I see it hit server1. Server1
> is now serving that request for the next couple of minutes.

Note that the fact that server1 is actually serving the request needs
some additional verification. As a web accelerator nginx normally
ensures that upstream servers are free to serve additional requests as
soon as possible, so if the limiting factor is the client speed rather
than the connection between nginx and the upstream server, nginx will
happily buffer the response and will serve it to the client itself. And
there will be no active connections to server1, so least_conn will
behave much like round-robin.

> I send a request for a very tiny file. I see it hit server2.
> It finishes (server1 is still processing request #1).
> I send a request for a very tiny file. I see it hit server1 (even
> though it is still serving request #1 and server2 is not serving
> anything).
>
> I repeat that over and over, and I'll see all the subsequent requests
> being routed to server1, then 2, then 1, then 2.
> If I submit another LARGE file request, if the last request went to
> server2, then now I have 2 LARGE file requests being served by server1.
>
> If I submit more requests, they all continue to be equally distributed
> to server1 and server2 (even though server1 has 2 active things it is
> serving).

This behaviour corresponds to no active connections to server1, as might
happen if the file is not large enough and is instead buffered by nginx.

> Is there some sort of a 'fudge factor' or threshold? That there has to
> be n number of requests that one server is handling more than another
> server? I wouldn't think so, but I'm at a loss.

No, nothing like this.

Just in case, here is a simple configuration which demonstrates how
least_conn works (by using limit_rate to slow down responses of one of
the upstream servers):

    upstream u {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        least_conn;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://u;
        }
    }

    server {
        listen 8081;
        limit_rate 10;
        return 200 "slow\n";
    }

    server {
        listen 8082;
        return 200 "fast\n";
    }

And a test:

    $ for i in 1 2 3 4; do curl -q http://127.0.0.1:8080/ & sleep 0.1; done; sleep 15
    fast
    fast
    fast
    slow
    [1]   Done          curl -q http://127.0.0.1:8080/
    [2]   Done          curl -q http://127.0.0.1:8080/
    [3]   Done          curl -q http://127.0.0.1:8080/
    [4]   Done          curl -q http://127.0.0.1:8080/

Note that requests are started with some delay ("sleep 0.1") to make
sure the fast backend will be able to respond before the next request
starts. Note that only one of the requests is routed to the slow
backend - all other requests are routed to the fast one. That is,
least_conn works as expected.

--
Maxim Dounin
http://mdounin.ru/

From kenneth.s.brooks at gmail.com  Wed Dec 23 21:42:49 2020
From: kenneth.s.brooks at gmail.com (Kenneth Brooks)
Date: Wed, 23 Dec 2020 16:42:49 -0500
Subject: least_conn not working for me
In-Reply-To: <20201223212610.GH1147@mdounin.ru>
References: <20201223171442.GG1147@mdounin.ru> <20201223212610.GH1147@mdounin.ru>
Message-ID: 

Thanks for the detailed response and taking the time to show some test
examples.

We did think that perhaps it was buffering. However, in our case, the
"large" request is gigs in size, so there is no way that it is buffering
that whole thing. I think our buffers are pretty small. Unless there is
some absolute black magic that will buffer what it can, close the
upstream, then open it again to ask for more of that file. :)

I'm going to try what Peter suggested and check the netstat to at least
confirm whether it is actually still connected to the upstream. Worst
case I'll then simulate the same on our end with the limit_rate.

On Wed, Dec 23, 2020 at 4:26 PM Maxim Dounin wrote:
> Hello!
> [...]

From philwinfield at gmail.com  Wed Dec 23 21:59:35 2020
From: philwinfield at gmail.com (Phil Winfield)
Date: Wed, 23 Dec 2020 21:59:35 +0000
Subject: least_conn not working for me
In-Reply-To: 
References: <20201223171442.GG1147@mdounin.ru> <20201223212610.GH1147@mdounin.ru>
Message-ID: 

Unsubscribe

On Wed, 23 Dec 2020, 21:43 Kenneth Brooks, wrote:
> Thanks for the detailed response and taking the time to show some test
> [...]

From mdounin at mdounin.ru  Wed Dec 23 22:15:29 2020
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 24 Dec 2020 01:15:29 +0300
Subject: least_conn not working for me
In-Reply-To: 
References: <20201223171442.GG1147@mdounin.ru> <20201223212610.GH1147@mdounin.ru>
Message-ID: <20201223221529.GJ1147@mdounin.ru>

Hello!

On Wed, Dec 23, 2020 at 04:42:49PM -0500, Kenneth Brooks wrote:

> We did think that perhaps it was buffering.
> However, in our case, the "large" request is gigs in size, so there is
> no way that it is buffering that whole thing. I think our buffers are
> pretty small.
> Unless there is some absolute black magic that will buffer what it can,
> close the upstream, then open it again to ask for more of that file. :)

By default nginx can buffer up to slightly more than 1 gigabyte
(proxy_max_temp_file_size + various in-memory buffers). Further, with
proxy_cache (or proxy_store) the proxy_max_temp_file_size limit is
ignored, so nginx can buffer arbitrary responses.

--
Maxim Dounin
http://mdounin.ru/

From kenneth.s.brooks at gmail.com  Wed Dec 23 22:38:02 2020
From: kenneth.s.brooks at gmail.com (Kenneth Brooks)
Date: Wed, 23 Dec 2020 17:38:02 -0500
Subject: least_conn not working for me
In-Reply-To: <20201223221529.GJ1147@mdounin.ru>
References: <20201223171442.GG1147@mdounin.ru> <20201223212610.GH1147@mdounin.ru> <20201223221529.GJ1147@mdounin.ru>
Message-ID: 

Oh. OK, good to know about the default temp file and buffers.

Just checked, and I think the 'large' file we are downloading is 800mb.
We don't have proxy_cache or proxy_store set. We do have
proxy_temp_file_write_size 250m;

We ended up doing a test where 9 of those large files were all on
server1, and it continued to round robin requests.

Is that temp_file_size essentially per connection? If so, then if the
file is only 800mb, then perhaps that makes sense and we are indeed
closing the upstream and it is just buffered, waiting to finish sending
to the client.

I'll try:
1) using a file larger than 1gb (just to see if we can force it to be
larger than the possible buffer).
2) Still do the netstat. I think that will tell us a whole lot.
3) simulate with limit_rate

Thanks again! I think we might be on to something.

On Wed, Dec 23, 2020 at 5:15 PM Maxim Dounin wrote:
> Hello!
> [...]

From nginx-forum at forum.nginx.org  Fri Dec 25 07:20:33 2020
From: nginx-forum at forum.nginx.org (brianbotkiller)
Date: Fri, 25 Dec 2020 02:20:33 -0500
Subject: Cannot get RTMP to work
Message-ID: <9261d6474cfa30d08af4c8d0da472a2b.NginxMailingListEnglish@forum.nginx.org>

Hello,

I have followed the tutorial found here
https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/#installing-nginx-dependencies
to the best of my ability (it has some distinct holes in it), along with
some other tutorials that I have found. I cannot get RTMP to work with
nginx.

I am running a VPS with Ubuntu. I have installed the RTMP module.

I have worked to edit the nginx.conf file as best as I can, but I don't
understand why this file seems to reside in multiple locations, and I
don't understand which of them I am to edit to add the RTMP
configuration -- this is totally missing from the tutorial and I cannot
find a straight answer about this.

As it stands right now, I've added

    rtmp {
        server {
            listen 1935;
            chunk_size 4096;

            application live {
                live on;
                record off;
            }
        }
    }

to the end of the config file found in /usr/local/nginx/conf, and I've
run nginx -t to ensure that it is configured properly -- nginx reports
that it is. I have stopped/started the service. I've run a port scan to
ensure that it is listening on port 80 -- it is.

I am trying to stream using OBS via RTMP. I set OBS to use my server
IP/live and no matter what, I get an error from OBS that it cannot
connect to the server.

I am lost as to what to do here and am looking for help.

Thanks,

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290317,290317#msg-290317

From nginx-forum at forum.nginx.org  Sat Dec 26 01:04:21 2020
From: nginx-forum at forum.nginx.org (kenneth.s.brooks)
Date: Fri, 25 Dec 2020 20:04:21 -0500
Subject: least_conn not working for me
In-Reply-To: <20201223221529.GJ1147@mdounin.ru>
References: <20201223221529.GJ1147@mdounin.ru>
Message-ID: <5ee080d76c33032ce95908343e82a397.NginxMailingListEnglish@forum.nginx.org>

Wanted to follow up to say that you were spot on. The file was ~800mb
and it was indeed buffering the whole thing. We could see the file get
served to nginx, and then nginx continued to serve it to the client.

We were able to show that it was indeed doing least_conn by downloading
a file that was multiple gigabytes in size. This exceeded the buffer and
would tie up that connection. When that happened, we would then see
connections get routed to the other upstream.

Thanks so much for helping educate us.
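For readers landing on this thread, a sketch of the proxy directives
that bound the buffering behaviour seen above ("backend" is a
placeholder upstream; the values shown are the documented defaults):

```
location / {
    proxy_pass http://backend;
    proxy_buffering on;                # on by default: nginx drains the
                                       # upstream as fast as it can
    proxy_buffers 8 4k;                # in-memory buffers (default is
                                       # 8 x 4k or 8k, platform-dependent)
    proxy_max_temp_file_size 1024m;    # default per-request cap on the
                                       # on-disk temp file
    # proxy_max_temp_file_size 0;     # would disable disk buffering
                                       # entirely
}
```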
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290285,290320#msg-290320 From francis at daoine.org Sat Dec 26 18:38:27 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 26 Dec 2020 18:38:27 +0000 Subject: Cannot get RTMP to work In-Reply-To: <9261d6474cfa30d08af4c8d0da472a2b.NginxMailingListEnglish@forum.nginx.org> References: <9261d6474cfa30d08af4c8d0da472a2b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20201226183827.GH23032@daoine.org> On Fri, Dec 25, 2020 at 02:20:33AM -0500, brianbotkiller wrote: Hi there, > I have followed the tutorial found here: > https://www.nginx.com/blog/video-streaming-for-remote-learning-with-nginx/#installing-nginx-dependencies > > To the best of my ability (it has some distinct holes in it), and some other > tutorials that I have found. I cannot get RTMP to work with nginx. It does not matter just yet, but in case it remains a problem: it will probably be useful if you can show exactly what you did do, and what responses you got when you did things; just in case one of the distinct holes is directly relevant to making things work. > I am running a VPS with Ubuntu. I have installed the RTMP module. > > I have worked to edit the nginx.conf file as best as I can, but I don't > understand why this file seems to reside in multiple locations, and I don't > understand which of them I am to edit to add the RTMP configuration -- this > is totally missing from the tutorial and I cannot find a straight answer > about this. nginx, when run, uses exactly one conf file. If you do not specify which conf file to use when you are calling the nginx binary, it will use its compile-time value. "nginx -V" should show the compile-time options that were provided; "--conf-path=" is the conf file if it is set; otherwise it is something based on "--prefix=" or on code defaults. "nginx -t" or "nginx -T" will usually name the compile-time default conf file that your nginx binary has. "ps -ef | grep [n]ginx" should show the running "master process" and whatever run-time arguments it was given -- "-c" there names the conf file that this nginx instance is using. So you will want to identify what one conf file your nginx is currently using, and make changes there before reloading it. > As it stands right now, I've added > > rtmp { > server { > listen 1935; > chunk_size 4096; > > application live { > live on; > record off; > } > } > } > > to the end of the config file found in /usr/local/nginx/conf, and I've ran > nginx -t to ensure that it is configured properly -- nginx spits back that > it is. I have stopped/started the service. I've run a port scan to ensure > that it is listening on port 80 -- it is. Does "nginx -t" show the name of the config file that you edited as being the config file that it is using? Your rtmp system wants to listen on port 1935. Does "netstat -pant | grep 1935" (or your system's equivalent) show that there is a port 1935 listener? If not, something is wrong. > I am trying to stream using OBS via RTMP. I set OBS to use my server IP/live > and no matter what, I get an error from OBS that it cannot connect to > server. If you do not have a port-1935 listener on your server, that's the first thing to fix. If you do have a listener, then probably you should "tcpdump" on your system to see if you can see what traffic is happening -- do your client requests get as far as your server, or are they blocked somewhere before it? If they get to your server, does your server block them from getting to your nginx? 
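For example, on a typical Linux host the two checks might look like this
(tool names and options vary by system):

```
# is anything listening on the RTMP port?
netstat -pant | grep 1935        # or: ss -tlnp | grep 1935

# watch for traffic on port 1935 while OBS tries to connect
tcpdump -i any -nn port 1935
```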
> I am lost as to what to do here and am looking for help. Basically, take it step by step. Decide (or learn) what one specific thing should happen, and then look and see if it does happen. If it does not happen, find out why, and fix it. It's hard to give more specific advice than that, without knowing what state things are in at the start. Hopefully this at least points you in the right direction. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Dec 26 18:45:34 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 26 Dec 2020 18:45:34 +0000 Subject: Unable to reverse proxy requests to Nifi running in the backend using client auth mechanism In-Reply-To: References: Message-ID: <20201226184534.GI23032@daoine.org> On Mon, Dec 21, 2020 at 06:48:54AM -0500, balu wrote: Hi there, the error log says: > 2020/12/21 11:46:45 [info] 14165#0: *6 client SSL certificate verify error: > (2:unable to get issuer certificate) while reading client request headers, > client: 49.207.211.47, server: nifi-test-nginx.insights.io, request: "GET > /favicon.ico HTTP/1.1", host: "nifi-test-nginx.insights.io", referrer: > "https://nifi-test-nginx.insights.io/nifi/?processGroupId=root&componentIds=87a087ca-0175-1000-ca56-1d437d733fb0" that nginx failed to verify the presented client certificate. You do have > ssl_verify_client optional_no_ca; in the provided server{} block, which includes > server_name nifi-test-nginx.insights.np.vocera.io; while the error log above refers to a different "server" and "host" value. Is there any chance that you have more than one port-443 listener configured in this nginx, and this request is being handled by something other than the config that you showed? Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Dec 26 19:10:19 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 26 Dec 2020 19:10:19 +0000 Subject: limit requests and CORS Policy In-Reply-To: References: Message-ID: <20201226191019.GJ23032@daoine.org> On Fri, Dec 18, 2020 at 06:54:57PM +0500, Ali Mohsin wrote: Hi there, > I have achieved this with the following code > > limit_req_zone $binary_remote_addr$v1 zone=mylimit:10m rate=3r/s; > > location ~ "^/api/(?)$" { > limit_req zone=mylimit; > > but i'm unable to set my CORS in headers and my APIs are inaccessible. Where are you trying to set your CORS in headers? In this location{} block, or in a different one? Or in a server{} block? What request are you making that is handled in this location{} block? I'd expect something like a ".*" in the regex to have it match everything that starts with /api/. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Dec 26 19:18:27 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 26 Dec 2020 19:18:27 +0000 Subject: help need (convert vhost to location) In-Reply-To: References: Message-ID: <20201226191827.GK23032@daoine.org> On Tue, Dec 15, 2020 at 07:42:00PM +0200, Gregory Edigarov wrote: Hi there, > I have this server section: ... > now I need to convert this virtual server to location. > i.e. to be called from postmaster.example.com/control/ you will probably want to wrap your current relevant location{} blocks within a "location ^~ /control/ {}" block, and then probably also include "alias /var/www/vexim/vexim;" within that block so that nginx will be able to tell the fastcgi server what file it should process. 
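As a sketch only (the inner blocks are whatever your current server{}
already has; note that the trailing slashes should match each other):

```
location ^~ /control/ {
    # with a trailing slash on the location, the alias
    # normally wants a trailing slash too
    alias /var/www/vexim/vexim/;

    # ... move your existing location{} blocks for the app
    # (the fastcgi one in particular) inside this block ...
}
```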
Depending on the application, you may need to tell it that it is below
/control/, so that it can adjust any internal links that it publishes.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From jazzman at misalpina.net  Mon Dec 28 22:49:23 2020
From: jazzman at misalpina.net (Claudiu)
Date: Tue, 29 Dec 2020 00:49:23 +0200
Subject: proxy_cache control via HTTP/2 trailing headers
Message-ID: <37be6e7b-0fe0-de2d-7f49-a5e25c2419f8@misalpina.net>

Hi,

I'm wondering if there is a way to instruct nginx whether or not to
cache a backend response based on trailing headers.

The use case is that the backend does some heavy, longer-running
streaming work that in some edge cases may fail midway. Since the
response is already streaming, I need to tell nginx not to cache that
response, as it has failed. The status code has already been sent and I
don't think I can change it later via trailing headers, so the
proxy_cache_valid directive seems of no use.

However, the docs mention the X-Accel-Expires header, which technically
could be sent as a trailing header. What I don't know yet is whether
nginx supports X-Accel-Expires as a trailing header, or whether it
supports trailing headers in backend responses at all.

Or, is there another way to accomplish something like this?

--
Claudiu

From aliofthemohsins at gmail.com  Tue Dec 29 07:33:02 2020
From: aliofthemohsins at gmail.com (Ali Mohsin)
Date: Tue, 29 Dec 2020 12:33:02 +0500
Subject: limit requests and CORS Policy
In-Reply-To: <20201226191019.GJ23032@daoine.org>
References: <20201226191019.GJ23032@daoine.org>
Message-ID: 

Hello,

I have solved the issue. The problem was the low request limit; I had to
add a burst of 10 to make it work, and I also changed my configuration
to the following:

    limit_req_zone $binary_remote_addr$request_uri zone=mylimit:10m rate=5r/s;

and then

    limit_req zone=mylimit burst=10 nodelay;

No other settings were changed.

On Sun, Dec 27, 2020 at 12:10 AM Francis Daly wrote:
> On Fri, Dec 18, 2020 at 06:54:57PM +0500, Ali Mohsin wrote:
> [...]

From ffjr at hotmail.com  Tue Dec 29 14:16:06 2020
From: ffjr at hotmail.com (Federico Felman)
Date: Tue, 29 Dec 2020 14:16:06 +0000
Subject: Getting started with a Module
Message-ID: 

Hello everyone and happy holidays.

I've been asked to do a specific module for NGINX: I need to "log" all
the requests and responses using some specific web services.

From what I've seen I can read the REQ easily, but I don't see any way
to access the response.

Can you guys help me?

Thanks in advance!!!

From lagged at gmail.com  Tue Dec 29 19:23:44 2020
From: lagged at gmail.com (Andrei)
Date: Tue, 29 Dec 2020 21:23:44 +0200
Subject: Getting started with a Module
In-Reply-To: 
References: 
Message-ID: 

Happy Holidays!

You mean something like this?
https://serverfault.com/questions/361556/is-it-possible-to-log-the-response-data-in-nginx-access-log

Either way, you're probably looking at OpenResty Lua
(https://github.com/openresty/lua-nginx-module), cosockets
(https://github.com/openresty/lua-nginx-module#cosockets-not-available-everywhere),
and maybe mlcache (https://github.com/thibaultcha/lua-resty-mlcache).
Instead of writing a full-blown module, consider using Lua. I know this
isn't an OpenResty forum, but... https://opm.openresty.org/ also has a
bunch of goodies that might help :)

gl!

On Tue, Dec 29, 2020 at 4:16 PM Federico Felman wrote:
> Hello everyone and happy holidays.
> [...]

From ffjr at hotmail.com  Wed Dec 30 13:57:59 2020
From: ffjr at hotmail.com (Federico Felman)
Date: Wed, 30 Dec 2020 13:57:59 +0000
Subject: Getting started with a Module
In-Reply-To: 
References: 
Message-ID: 

Hello Andrei,

Thanks for writing back. I wanted to take enough time to check all the
links. I've read about OpenResty, but my concern is how to access the
data I need.

So basically I'm dividing my process into stages.

Stage 0: research and get the correct approach (this current stage)
Stage 1: get all the requests and responses to go through custom
functions (processReq, processResp).
Stage 2: write those functions to call the appropriate APIs.
Stage 3: stress test and optimization.

I think OpenResty solves Stage 2, but I still don't know how to properly
handle the REQ/RESP, mostly the response, since the REQ could be handled
by the first module called. But I couldn't find anything clear on
catching the response.

Maybe this is pretty straightforward but I'm messing this up.

Thank you everyone for reading this.

From: Andrei
Sent: Tuesday, December 29, 2020 4:24 PM
To: nginx at nginx.org
Subject: Re: Getting started with a Module
> Happy Holidays!
> [...]

From grzegorz.czesnik at hotmail.com  Wed Dec 30 16:05:08 2020
From: grzegorz.czesnik at hotmail.com (Grzegorz Cześnik)
Date: Wed, 30 Dec 2020 16:05:08 +0000
Subject: Nginx 1.19.6 snippets directory (Ubuntu Server 20.04)
Message-ID: 

Hi,

I installed Nginx 1.19.6 from the official repository
http://nginx.org/en/linux_packages.html#Ubuntu

Compared to the official Ubuntu repository version 1.19.6, I cannot see
the snippets directory in /etc/nginx. Has anything changed in this case?
Can I create this directory myself?

From teward at thomas-ward.net  Thu Dec 31 16:50:50 2020
From: teward at thomas-ward.net (Thomas Ward)
Date: Thu, 31 Dec 2020 11:50:50 -0500
Subject: SPAM: Nginx 1.19.6 snippets directory (Ubuntu Server 20.04)
In-Reply-To: 
References: 
Message-ID: 

Hi, Grzegorz. I'm with the Ubuntu Server Team and can answer this
directly.

The NGINX upstream repository does NOT follow the structure of the
package as it is in Ubuntu and Debian. The snippets directory, the
sites-available and sites-enabled directories, and the includes that are
part of the default configuration are Ubuntu-isms and Debian-isms. These
are only present in Debian/Ubuntu variants (and possibly other distro
packages of NGINX) and are NOT from nginx.org's repositories.

The snippets/ directory in Ubuntu and Debian I believe is prepopulated
with the snakeoil.conf, which is only for the self-signed SSL
certificate configuration generated by the ssl-cert package to be
included. It is, however, simply a directory that holds files, and you
simply include configurations with `include snippets/blah;` (to include
the file 'blah' which is in 'snippets').

You can create that directory yourself and then just include individual
snippets from there. In Ubuntu/Debian it is not automatically included
*anywhere* in the package-shipped configuration; it's used to hold handy
snippets you might want to include in many places.

Thomas

On 12/30/20 11:05 AM, Grzegorz Cześnik wrote:
> Hi,
> [...]

From lagged at gmail.com  Thu Dec 31 18:30:17 2020
From: lagged at gmail.com (Andrei)
Date: Thu, 31 Dec 2020 20:30:17 +0200
Subject: Getting started with a Module
In-Reply-To: 
References: 
Message-ID: 

Hi,

You can cover all those stages in Lua. As for the response body, check
for ngx.arg[1]:

https://github.com/openresty/lua-nginx-module#body_filter_by_lua

(it's available in certain stages only, but you can hack around that and
pass it along using ngx.ctx)

https://stackoverflow.com/a/54432177/2388324

On Wed, Dec 30, 2020 at 3:58 PM Federico Felman wrote:
> Hello Andrei,
> [...]