From nginx-forum at forum.nginx.org Wed Apr 1 13:10:20 2020 From: nginx-forum at forum.nginx.org (teward) Date: Wed, 01 Apr 2020 09:10:20 -0400 Subject: Configure NGINX to deny web socket connections except for certain paths Message-ID: This will sound a little odd, but we have an NGINX reverse proxy acting as an SSL termination point for a remote desktop web gateway from Microsoft. Currently, the primary Web Client ingress point is protected by SSL Client Certificates - you must have a valid SSL CLient Certificate to get to the web component. However, RDWeb from Microsoft still has to establish WSS connections (`wss://...`) to the RD Gateway component - a separate server. The tricky part about this is it uses *only* `wss`. This works fine if the web frontend is open to all, but we want to restrict it so that only one WSS pathway can actually be used and no other WSS requests work. When attempting to make this work, we've been trying various configurations of location matching ultimately ending with the WSS connections all failing except when passed through directly WITHOUT any restrictions (that is, `location / { ... }` is globally permitted for the gateway component.) Is there a way to configure NGINX so that it tests the requested wss path *first* before it hands off to the backend, thereby determining if it's permitted or rejected? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287519,287519#msg-287519 From nginx-forum at forum.nginx.org Wed Apr 1 13:44:21 2020 From: nginx-forum at forum.nginx.org (teward) Date: Wed, 01 Apr 2020 09:44:21 -0400 Subject: Configure NGINX to deny web socket connections except for certain paths In-Reply-To: References: Message-ID: teward Wrote: ------------------------------------------------------- > This works fine if > the web frontend is open to all, but we want to restrict it so that > only one WSS pathway can actually be used and no other WSS requests > work. To clarify, there's a separate `server { }` block handling the gateway separate from the RDWeb ingress point. This is necessary for the wss links to work. Unfortunately, we need to control what wss / request paths are used on there and currently don't have a way to do this that I'm aware of - can we configure that nginx server block for the gateway component to do what we need? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287519,287522#msg-287522 From lee.iitb at gmail.com Thu Apr 2 05:45:11 2020 From: lee.iitb at gmail.com (Thomas Stephen Lee) Date: Thu, 2 Apr 2020 11:15:11 +0530 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: <20200323123451.GA1578@mdounin.ru> References: <7407492cf399e3fa9048b961ff88748c.NginxMailingListEnglish@forum.nginx.org> <92f6dafa927e3afc97a5fc6b69748643.NginxMailingListEnglish@forum.nginx.org> <20200323123451.GA1578@mdounin.ru> Message-ID: On Mon, Mar 23, 2020 at 6:05 PM Maxim Dounin wrote: > Hello! 
> > On Mon, Mar 23, 2020 at 02:04:36PM +0300, Sergey Kandaurov wrote: > > > > > > On 22 Mar 2020, at 21:39, itpp2012 > wrote: > > > > > > How about this as this catches all 3 while conditions: > > > > > > +++ src/event/ngx_event_openssl.c > > > @@ -2318, > > > > > > c->ssl->no_wait_shutdown = 1; > > > c->ssl->no_send_shutdown = 1; > > > > > > if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { > > > ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > "peer shutdown SSL cleanly"); > > > return NGX_DONE; > > > } > > > > > > + /* https://forum.nginx.org/read.php?2,287377 */ > > > + /* https://github.com/openssl/openssl/issues/11381 */ > > > +#ifdef SSL_R_UNEXPECTED_EOF_WHILE_READING > > > + if (sslerr == SSL_ERROR_SSL && ERR_GET_REASON(ERR_peek_error()) > > > + == SSL_R_UNEXPECTED_EOF_WHILE_READING) { > > > + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > + "ssl3_read_n:unexpected eof while reading"); > > > + return NGX_DONE; > > > + } > > > +#endif > > > + > > > ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed"); > > > > How would this catch the reported error in SSL_do_handshake() ? > > I'd replicate this check in ngx_ssl_handshake(). > > And probably for SSL_read_early_data, SSL_shutdown, SSL_peak, > > (ok, we don't use SSL_peak), but this is a moot point. > > Given the session resumption issue[1], I tend to think the best > solution for now is to recommend to avoid using OpenSSL 1.1.1e. > > [1] https://github.com/openssl/openssl/issues/11378 > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Hi, does OpenSSL 1.1.1f. fix the issue ? thanks. --- Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 2 07:24:18 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 02 Apr 2020 03:24:18 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: Message-ID: Thomas Stephen Lee Wrote: ------------------------------------------------------- > Hi, > > does > > OpenSSL 1.1.1f. > > fix the issue ? Yes. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287532#msg-287532 From liam at moncur.me.uk Thu Apr 2 13:26:02 2020 From: liam at moncur.me.uk (Liam Moncur) Date: Thu, 02 Apr 2020 13:26:02 +0000 Subject: (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream Message-ID: <0gEStczOpdpcX1MNgkZINbXpNc2JU_Po6l9hKduXFx9IHNM0E2FCMiu4buOgCXt5gJiTRDEbXmFLhL9HTS4_ROnwjYTLmQbOmvQJ5bqZsG8=@moncur.me.uk> Hey, I am seeing an issue where nginx seems to get stuck in a loop soon after the above error. From the debug I am seeing: 2020/04/02 14:09:10 [error] 12875#12875: *338 SSL_read() failed (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream, client: 2a00:23c6:8238:6501:54e9:28f4:54e:1a91, server: www.findafishingboat.com, request: "GET /boat-list/fishing-boats-for-sale-over-15m HTTP/2.0", upstream: "https://194.39.167.98:443/boat-list/fishing-boats-for-sale-over-15m", host: "www.findafishingboat.com" Then shortly after I get a loop of the following: 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" 
2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 Any thoughts would be lovely. Thanks, Liam From themadbeaker at gmail.com Thu Apr 2 16:30:10 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 2 Apr 2020 11:30:10 -0500 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? Message-ID: I've been doing some experimenting with nginx's proxy caching and slowly working the kinks out. >From what I read, the cache-control & expires headers take precedence over the 'proxy_cache_valid' setting, which is great as certain pages are valid for several hours at a time. However, I am noticing still a high amount of cache misses... Upon further investigation I'm thinking (haven't tested it yet) that the 'proxy_cache_path' inactive setting (currently at its default of 10m) is taking precedence over the above cache-control settings... Is there any way to tie the 'inactive' time to the cache-control header expiration time so that pages that are cached in a certain time-window are always kept and not deleted until after the header expiration time? From r at roze.lv Thu Apr 2 17:00:58 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 2 Apr 2020 20:00:58 +0300 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? In-Reply-To: References: Message-ID: <000001d60910$484fbbc0$d8ef3340$@roze.lv> > Is there any way to tie the 'inactive' time to the cache-control header > expiration time so that pages that are cached in a certain time-window are > always kept and not deleted until after the header expiration time? You can just set the inactive time longer than your possible maximum expire time for the objects then the cache manager won't purge the cache files even the object is still valid but not accessed. rr From themadbeaker at gmail.com Thu Apr 2 17:42:12 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 2 Apr 2020 12:42:12 -0500 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? 
Message-ID: > You can just set the inactive time longer than your possible maximum > expire time for the objects then the cache manager won't purge the > cache files even the object is still valid but not accessed. That's what I ended up doing. Thanks for the suggestion though. From liam at moncur.me.uk Fri Apr 3 06:20:44 2020 From: liam at moncur.me.uk (Liam Moncur) Date: Fri, 03 Apr 2020 06:20:44 +0000 Subject: (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream In-Reply-To: <0gEStczOpdpcX1MNgkZINbXpNc2JU_Po6l9hKduXFx9IHNM0E2FCMiu4buOgCXt5gJiTRDEbXmFLhL9HTS4_ROnwjYTLmQbOmvQJ5bqZsG8=@moncur.me.uk> References: <0gEStczOpdpcX1MNgkZINbXpNc2JU_Po6l9hKduXFx9IHNM0E2FCMiu4buOgCXt5gJiTRDEbXmFLhL9HTS4_ROnwjYTLmQbOmvQJ5bqZsG8=@moncur.me.uk> Message-ID: We were able to resolve this by enabling proxy_buffering. The root cause for why it started happening is still being investigated. Thanks, Liam Sent with ProtonMail Secure Email. ??????? Original Message ??????? On Thursday, April 2, 2020 2:26 PM, Liam Moncur wrote: > Hey, > I am seeing an issue where nginx seems to get stuck in a loop soon after the above error. From the debug I am seeing: > > 2020/04/02 14:09:10 [error] 12875#12875: *338 SSL_read() failed (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream, client: 2a00:23c6:8238:6501:54e9:28f4:54e:1a91, server: www.findafishingboat.com, request: "GET /boat-list/fishing-boats-for-sale-over-15m HTTP/2.0", upstream: "https://194.39.167.98:443/boat-list/fishing-boats-for-sale-over-15m", host: "www.findafishingboat.com" > > Then shortly after I get a loop of the following: > > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 > > Any thoughts would be lovely. 
> > Thanks, > Liam From mdounin at mdounin.ru Fri Apr 3 10:45:19 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Apr 2020 13:45:19 +0300 Subject: (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream In-Reply-To: <0gEStczOpdpcX1MNgkZINbXpNc2JU_Po6l9hKduXFx9IHNM0E2FCMiu4buOgCXt5gJiTRDEbXmFLhL9HTS4_ROnwjYTLmQbOmvQJ5bqZsG8=@moncur.me.uk> References: <0gEStczOpdpcX1MNgkZINbXpNc2JU_Po6l9hKduXFx9IHNM0E2FCMiu4buOgCXt5gJiTRDEbXmFLhL9HTS4_ROnwjYTLmQbOmvQJ5bqZsG8=@moncur.me.uk> Message-ID: <20200403104519.GH20357@mdounin.ru> Hello! On Thu, Apr 02, 2020 at 01:26:02PM +0000, Liam Moncur wrote: > Hey, > I am seeing an issue where nginx seems to get stuck in a loop soon after the above error. From the debug I am seeing: > > 2020/04/02 14:09:10 [error] 12875#12875: *338 SSL_read() failed (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream, client: 2a00:23c6:8238:6501:54e9:28f4:54e:1a91, server: www.findafishingboat.com, request: "GET /boat-list/fishing-boats-for-sale-over-15m HTTP/2.0", upstream: "https://194.39.167.98:443/boat-list/fishing-boats-for-sale-over-15m", host: "www.findafishingboat.com" > > Then shortly after I get a loop of the following: > > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 > 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 > > Any thoughts would be lovely. First of all, check OpenSSL version you are using. Running "nginx -V" will show all needed details. 
-- Maxim Dounin http://mdounin.ru/ From martin.grigorov at gmail.com Fri Apr 3 13:47:30 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Fri, 3 Apr 2020 16:47:30 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hi Konstantin, On Tue, Mar 10, 2020 at 3:31 PM Konstantin Pavlov wrote: > Hello, > > 10.03.2020 15:50, Emilio Fernandes wrote: > > Hi Konstantin, > > Thanks for your interest in our packages! > > > > By CentOS, do you want/need packages built for 8? Asking because I > > believe 7 is not officially released for Aarch64 - it's rather a > > community build which doesnt fall into something we can support. > > > > > > Yes, CentOS 8 is fine for us! > > At http://isoredirect.centos.org/centos/7/isos/ there is "for CentOS 7 > > AltArch AArch64" [1]. Is this the one you prefer not to support ? > > > > 1. https://wiki.centos.org/SpecialInterestGroup/AltArch > > Our policy is to provide packages for officially upstream-supported > distributions. > > https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F > states that they only support x86_64, and aarch64 is unofficial. > Here is something you may find interesting. https://github.com/varnishcache/varnish-cache/pull/3263 - a PR I've created for Varnish Cache project. It is based on Docker + QEMU and builds packages for different versions of Debian/Ubuntu/Centos/Alpine for both x64 and aarch64. Regards, Martin > -- > Konstantin Pavlov > https://www.nginx.com/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From liam at moncur.me.uk Fri Apr 3 14:07:00 2020 From: liam at moncur.me.uk (Liam Moncur) Date: Fri, 03 Apr 2020 14:07:00 +0000 Subject: (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream In-Reply-To: References: <0gEStczOpdpcX1MNgkZINbXpNc2JU_Po6l9hKduXFx9IHNM0E2FCMiu4buOgCXt5gJiTRDEbXmFLhL9HTS4_ROnwjYTLmQbOmvQJ5bqZsG8=@moncur.me.uk> Message-ID: It seems to related to apache set ups on the origin that were running openssl 1.1.1e since the upgrade to 1.1.1f it seems better and the SSL_read errors are gone. https://github.com/openssl/openssl/issues/11381#issuecomment-607732081 Liam Sent from ProtonMail mobile -------- Original Message -------- On 3 Apr 2020, 07:20, Liam Moncur wrote: > We were able to resolve this by enabling proxy_buffering. The root cause for why it started happening is still being investigated. > > Thanks, > Liam > > Sent with ProtonMail Secure Email. > > ??????? Original Message ??????? > On Thursday, April 2, 2020 2:26 PM, Liam Moncur wrote: > >> Hey, >> I am seeing an issue where nginx seems to get stuck in a loop soon after the above error. 
From the debug I am seeing: >> >> 2020/04/02 14:09:10 [error] 12875#12875: *338 SSL_read() failed (SSL: error:1409441A:SSL routines:ssl3_read_bytes:tlsv1 alert decode error:SSL alert number 50) while reading response header from upstream, client: 2a00:23c6:8238:6501:54e9:28f4:54e:1a91, server: www.findafishingboat.com, request: "GET /boat-list/fishing-boats-for-sale-over-15m HTTP/2.0", upstream: "https://194.39.167.98:443/boat-list/fishing-boats-for-sale-over-15m", host: "www.findafishingboat.com" >> >> Then shortly after I get a loop of the following: >> >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter 0000000000000000 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: -2 "/boat-list/fishing-boats-for-sale-over-15m?" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http output filter "/boat-list/fishing-boats-for-sale-over-15m?" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http copy filter: "/boat-list/fishing-boats-for-sale-over-15m?" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 lua capture body filter, uri "/boat-list/fishing-boats-for-sale-over-15m" >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http postpone filter "/boat-list/fishing-boats-for-sale-over-15m?" 0000000000000000 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter: l:0 f:0 s:0 >> 2020/04/02 14:09:10 [debug] 12875#12875: *338 http write filter limit 0 >> >> Any thoughts would be lovely. >> >> Thanks, >> Liam -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger at netskrt.io Fri Apr 3 15:33:43 2020 From: roger at netskrt.io (Roger Fischer) Date: Fri, 3 Apr 2020 08:33:43 -0700 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? In-Reply-To: <000001d60910$484fbbc0$d8ef3340$@roze.lv> References: <000001d60910$484fbbc0$d8ef3340$@roze.lv> Message-ID: > You can just set the inactive time longer than your possible maximum expire time for the objects then the cache manager won't purge the cache files even the object is still valid but not accessed. That may only have a small impact. As far as I understand: NGINX will remove an item only when the cache is full (ie. it needs space for a new item). Items are removed based on the least-recently used (LRU) queue. The least-recently-used (last) item in the LRU queue is unconditionally removed. The second and third last items are removed if they are past the invalid time. The expiry of an item has no influence on the removal of items. It only affects if the item is delivered from the cache, or revalidated with an upstream request. Restarting NGINX does have an impact. The LRU queue is not persisted (for performance reasons). 
On a restart, the LRU queue is based on the order that the cache loader finds the cached files in the file system. Roger -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Apr 3 16:26:14 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Apr 2020 19:26:14 +0300 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? In-Reply-To: References: <000001d60910$484fbbc0$d8ef3340$@roze.lv> Message-ID: <20200403162614.GJ20357@mdounin.ru> Hello! On Fri, Apr 03, 2020 at 08:33:43AM -0700, Roger Fischer wrote: > > You can just set the inactive time longer than your possible maximum expire time for the objects then the cache manager won't purge the cache files even the object is still valid but not accessed. > > That may only have a small impact. > > As far as I understand: > NGINX will remove an item only when the cache is full (ie. it needs space for a new item). > Items are removed based on the least-recently used (LRU) queue. > The least-recently-used (last) item in the LRU queue is unconditionally removed. > The second and third last items are removed if they are past the invalid time. Your understanding is wrong. Cache manager always removes all items which were not access for the "inactive=" period of time. Quoting docs (http://nginx.org/r/proxy_cache_path): : Cached data that are not accessed during the time specified by the : inactive parameter get removed from the cache regardless of their : freshness. By default, inactive is set to 10 minutes. See ngx_http_file_cache_expire() function for details. Additionally, cache items can be removed based on the "max_size=" parameter of the "proxy_cache_path" directive, or if there isn't enough room in the "keys_zone=" shared memory zone. > The expiry of an item has no influence on the removal of items. > It only affects if the item is delivered from the cache, or > revalidated with an upstream request. That's correct, as long as "expire" is meant to be the time from the Expires / X-Accel-Expires / Cache-Control / proxy_cache_valid. In nginx documentation this is called "caching time". -- Maxim Dounin http://mdounin.ru/ From roger at netskrt.io Sat Apr 4 03:25:14 2020 From: roger at netskrt.io (Roger Fischer) Date: Fri, 3 Apr 2020 20:25:14 -0700 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? In-Reply-To: <20200403162614.GJ20357@mdounin.ru> References: <000001d60910$484fbbc0$d8ef3340$@roze.lv> <20200403162614.GJ20357@mdounin.ru> Message-ID: Thanks, Maxim, for correcting my misunderstanding. With what frequency is the cache manger run? Roger > On Apr 3, 2020, at 9:26 AM, Maxim Dounin wrote: > > Hello! > > On Fri, Apr 03, 2020 at 08:33:43AM -0700, Roger Fischer wrote: > >>> You can just set the inactive time longer than your possible maximum expire time for the objects then the cache manager won't purge the cache files even the object is still valid but not accessed. >> >> That may only have a small impact. >> >> As far as I understand: >> NGINX will remove an item only when the cache is full (ie. it needs space for a new item). >> Items are removed based on the least-recently used (LRU) queue. >> The least-recently-used (last) item in the LRU queue is unconditionally removed. >> The second and third last items are removed if they are past the invalid time. > > Your understanding is wrong. Cache manager always removes all > items which were not access for the "inactive=" period of time. 
> Quoting docs (http://nginx.org/r/proxy_cache_path): > > : Cached data that are not accessed during the time specified by the > : inactive parameter get removed from the cache regardless of their > : freshness. By default, inactive is set to 10 minutes. > > See ngx_http_file_cache_expire() function for details. > > Additionally, cache items can be removed based on the "max_size=" > parameter of the "proxy_cache_path" directive, or if there isn't > enough room in the "keys_zone=" shared memory zone. > >> The expiry of an item has no influence on the removal of items. >> It only affects if the item is delivered from the cache, or >> revalidated with an upstream request. > > That's correct, as long as "expire" is meant to be the time from the > Expires / X-Accel-Expires / Cache-Control / proxy_cache_valid. In > nginx documentation this is called "caching time". > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Sun Apr 5 12:10:39 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 5 Apr 2020 15:10:39 +0300 Subject: proxy_cache_path 'inactive' vs http cache-control / expires headers? In-Reply-To: References: <000001d60910$484fbbc0$d8ef3340$@roze.lv> <20200403162614.GJ20357@mdounin.ru> Message-ID: <20200405121039.GK20357@mdounin.ru> Hello! On Fri, Apr 03, 2020 at 08:25:14PM -0700, Roger Fischer wrote: > With what frequency is the cache manger run? Cache manager monitors least recently used cache items, and wakes up when there are items to be removed, at the expected removal time. Additionally, even if there are no inactive items to remove, it still wakes up every 10 seconds to monitor "max_size=". -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sun Apr 5 15:42:18 2020 From: nginx-forum at forum.nginx.org (lsces) Date: Sun, 05 Apr 2020 11:42:18 -0400 Subject: Prevent direct access to files but allow download from site In-Reply-To: References: <1dbba4ce438d08435c3fd0712095ea20.NginxMailingListEnglish@forum.nginx.org> <65a6791023ed7c002df75b6fb22dd5c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <973b43706779b0e7142df44174f8c3ba.NginxMailingListEnglish@forum.nginx.org> MAXMAXarena Wrote: ------------------------------------------------------- > How can I find out with Nginx if the username and password are real or > that the user/unique_value is still active? > Should I somehow access the database or am I wrong? MAXMAXarena I've just come across this thread looking to answer almost the same question. In my situation I am running the website on PHP using a framework called bitweaver. This handles the user login to the dynamic pages and downloading images and pdf files via the framework, but the thumbnail images are linked to directly by nginx and can be viewed even if not logged in. I've spent the last couple of days playing with http_auth_request_module and the auth_request entry. I've got it crudely working and I can manually switch the access on and off using the auth.php script which has access to the database, but I've hit a snag I'm still trying to crack. The storage structure is /storage/515/1515/thumbs/ where the second number is the file I want to access ( the first number just breaks down the storage into smaller groups of folders ) ... 
What I'm stuck with is how to get the file number into auth.php so I can sort out if the current user ID has access to that file, allowing 'anonymous' users to see as subset of files. You can probably get away without that bit and just confirm the user ID and at the moment I'd be happy with just that as well but I'm missing something when nginx runs auth.php :( Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287559#msg-287559 From themadbeaker at gmail.com Mon Apr 6 14:55:44 2020 From: themadbeaker at gmail.com (J.R.) Date: Mon, 6 Apr 2020 09:55:44 -0500 Subject: Confused between proxy_socket_keepalive & (upstream) keepalive? Message-ID: For my setup I use the 'upstream' directive, and in that module there is the 'keepalive' syntax: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive I just noticed today in the proxy module there is the 'proxy_socket_keepalive' syntax: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_socket_keepalive I'm a little confused... The upstream you set the # of keepalive connections you want. The proxy module is just an on/off... From themadbeaker at gmail.com Mon Apr 6 15:26:04 2020 From: themadbeaker at gmail.com (J.R.) Date: Mon, 6 Apr 2020 10:26:04 -0500 Subject: Nginx proxy cache doesn't update cache-control max-age time! Message-ID: This was driving me crazy and I think I've figured out the problem. I started using the proxy cache (which is great, saves regenerating a lot of dynamic pages), except a bunch of my pages expire at a very specific time, at the start of the hour, and my cache-control / expires headers reflect that, because that's when the data is updated. I started noticing stale pages shortly thereafter. Watching the headers I realized that the 'max-age' time wasn't decreasing like it should be, thus pages would end up being cached by clients longer than they should be as I guess browsers consider this the most 'modern'. Is there a setting I'm missing, or is there a way to have nginx dynamically update the max-age while still maintaining the proxy cache? From mdounin at mdounin.ru Mon Apr 6 15:33:05 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Apr 2020 18:33:05 +0300 Subject: Confused between proxy_socket_keepalive & (upstream) keepalive? In-Reply-To: References: Message-ID: <20200406153305.GN20357@mdounin.ru> Hello! On Mon, Apr 06, 2020 at 09:55:44AM -0500, J.R. wrote: > For my setup I use the 'upstream' directive, and in that module there > is the 'keepalive' syntax: > > https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > > I just noticed today in the proxy module there is the > 'proxy_socket_keepalive' syntax: > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_socket_keepalive > > I'm a little confused... The upstream you set the # of keepalive > connections you want. The proxy module is just an on/off... The "proxy_socket_keepalive" directive is to set the SO_KEEPALIVE socket option, which is to detect broken connections by sending TCP keepalive probes periodically. It is may make sense to turn this on in complex setups if there are upstream connections which does not transfer anything for a long time (for example, when proxying WebSockets with large timeouts). For client-side connections, the same option can be set using the "so_keepalive" parameter of the "listen" directive. While the name is somewhat similar, it is unrelated to keeping connections alive between requests. 
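As a rough sketch of how the two unrelated settings sit side by side (the upstream name and address below are made up, and proxy_socket_keepalive needs nginx 1.15.6 or newer):

upstream backend {
    server 127.0.0.1:8080;        # made-up backend address
    keepalive 16;                 # pool of idle HTTP connections reused between requests
}

server {
    listen 80 so_keepalive=on;    # TCP keepalive probes on client connections
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # needed so the upstream keepalive pool is actually used
        proxy_socket_keepalive on;        # TCP keepalive probes on the upstream socket
    }
}

The "keepalive 16" line is what reduces connection churn between requests; "proxy_socket_keepalive on" only helps detect an upstream connection that has silently died.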
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Apr 6 15:58:31 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Apr 2020 18:58:31 +0300 Subject: Nginx proxy cache doesn't update cache-control max-age time! In-Reply-To: References: Message-ID: <20200406155831.GO20357@mdounin.ru> Hello! On Mon, Apr 06, 2020 at 10:26:04AM -0500, J.R. wrote: > This was driving me crazy and I think I've figured out the problem. > > I started using the proxy cache (which is great, saves regenerating a > lot of dynamic pages), except a bunch of my pages expire at a very > specific time, at the start of the hour, and my cache-control / > expires headers reflect that, because that's when the data is updated. > > I started noticing stale pages shortly thereafter. Watching the > headers I realized that the 'max-age' time wasn't decreasing like it > should be, thus pages would end up being cached by clients longer than > they should be as I guess browsers consider this the most 'modern'. > > Is there a setting I'm missing, or is there a way to have nginx > dynamically update the max-age while still maintaining the proxy > cache? There is no Age header support in nginx as of now (relevant ticket in Trac: https://trac.nginx.org/nginx/ticket/146). If you want pages to expire at a specific time regardless of intermediate caching, consider using the "Expires" header. -- Maxim Dounin http://mdounin.ru/ From themadbeaker at gmail.com Mon Apr 6 17:25:44 2020 From: themadbeaker at gmail.com (J.R.) Date: Mon, 6 Apr 2020 12:25:44 -0500 Subject: Nginx proxy cache doesn't update cache-control max-age time! Message-ID: > There is no Age header support in nginx as of now (relevant ticket > in Trac: https://trac.nginx.org/nginx/ticket/146). If you want > pages to expire at a specific time regardless of intermediate > caching, consider using the "Expires" header. The 'age' header appears to be something else... What I'm talking about specifically is part of the 'cache-control' header... For example: "cache-control: max-age=9848, public, must-revalidate" Without max-age decrementing while in the nginx proxy cache, all client will receive the same cached number until the cache is refreshed. Since the proxy cache is storing the cache time internally (so it knows when a page expires from its cache), one would think there could be some way to get the max-age value to be updated from that internal data. From mdounin at mdounin.ru Mon Apr 6 18:35:14 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Apr 2020 21:35:14 +0300 Subject: Nginx proxy cache doesn't update cache-control max-age time! In-Reply-To: References: Message-ID: <20200406183514.GQ20357@mdounin.ru> Hello! On Mon, Apr 06, 2020 at 12:25:44PM -0500, J.R. wrote: > > There is no Age header support in nginx as of now (relevant ticket > > in Trac: https://trac.nginx.org/nginx/ticket/146). If you want > > pages to expire at a specific time regardless of intermediate > > caching, consider using the "Expires" header. > > The 'age' header appears to be something else... What I'm talking > about specifically is part of the 'cache-control' header... > > For example: "cache-control: max-age=9848, public, must-revalidate" > > Without max-age decrementing while in the nginx proxy cache, all > client will receive the same cached number until the cache is > refreshed. 
The Age header is the HTTP/1.1 way to decrement effective value of max-age, see here: https://tools.ietf.org/html/rfc7234#section-4.2.3 -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Apr 6 21:31:32 2020 From: nginx-forum at forum.nginx.org (lsces) Date: Mon, 06 Apr 2020 17:31:32 -0400 Subject: auth_request with vhost conf files Message-ID: <361572ac7d6cb78671ad2da2b439bb7d.NginxMailingListEnglish@forum.nginx.org> After a few false starts I've got auth_request passing parameters to php-fpm and my firebird database is allowing control of access to files in the storage filing system. Somewhat defeats the "This is cool because no php is touched for static content" and I have had to produce a slimline version of the access control but it works well with the dynamic pages. Problem is this is all working on a single site http setup and when I move the setup to the target vhost domain I'm struggling to get this working with the https live site. location /storage/attachments/ { root /srv/website/domain/; auth_request /authin; auth_request_set $auth_status $upstream_status; } location = /authin { internal; set $query ''; if ($request_uri ~* "\/storage\/attachments\/([0-9]+)\/([0-9]+)\/([A-Za-z.]+).*") { set $query $2; } proxy_pass /auth/auth.php?content_id=$query; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Original-URI $request_uri; } is working fine on the http setup, I've tried resolver 8.8.8.8; proxy_pass https://indiastudycircle.org/auth/auth.php?content_id=$query; But I'm not sure if $query is being set at all ... on the simple setup I can see errors and that helped me set it all up, but on the vhost setup while I can create php errors on the logs there is nothing for the auth processing? Where do I head next? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287570,287570#msg-287570 From themadbeaker at gmail.com Mon Apr 6 23:56:26 2020 From: themadbeaker at gmail.com (J.R.) Date: Mon, 6 Apr 2020 18:56:26 -0500 Subject: Nginx proxy cache doesn't update cache-control max-age time! Message-ID: > The Age header is the HTTP/1.1 way to decrement effective value of > max-age, see here: > > https://tools.ietf.org/html/rfc7234#section-4.2.3 Interesting... Well, I solved the issue by simply removing the 'max-age' portion from the 'cache-control' header, keeping the other portion. Expiration is handled strictly from the 'expires' tag, which seems to be valid according to one of those RFC's. Testing things out and caching expires exactly when it is supposed to! So happy I can keep the proxy cache enabled now! From nginx-forum at forum.nginx.org Tue Apr 7 00:35:58 2020 From: nginx-forum at forum.nginx.org (lsces) Date: Mon, 06 Apr 2020 20:35:58 -0400 Subject: auth_request with vhost conf files In-Reply-To: <361572ac7d6cb78671ad2da2b439bb7d.NginxMailingListEnglish@forum.nginx.org> References: <361572ac7d6cb78671ad2da2b439bb7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Working ... the live .conf file had an extra block controlling the image caching which overrides the auth block ... easy when you know how ... The question now is do I have the right setup for proxy_pass do need the resolver 8.8.8.8; proxy_pass https://indiastudycircle.org/auth/auth.php?content_id=$query; but is there another way of getting it to use a local link to the vhost defined server? 
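For what it is worth, a minimal sketch of one way to avoid the resolver and the round trip through the public name: point the subrequest at a local listener and set the Host header so the right vhost is selected. The loopback address and port are assumptions and would have to match a plain-HTTP listen on the same machine, and $query is assumed to be set by the earlier regex:

location = /authin {
    internal;
    proxy_pass http://127.0.0.1:80/auth/auth.php?content_id=$query;
    proxy_set_header Host indiastudycircle.org;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

Since the upstream address is a literal IP rather than a hostname, no resolver directive should be needed even though the URI part contains a variable.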
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287570,287572#msg-287572 From jmedina at mardom.com Tue Apr 7 01:52:21 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Tue, 7 Apr 2020 01:52:21 +0000 Subject: Configure nginx a reverse proxy https for IIS backend Message-ID: Hello everyone We are noob on nginx and we are trying to configure a site that is in Windows IIS, we could configure the site with http:// but with https:// we can't IIS Backend server 10.228.20.113 application running on port 80 and 443 Nginx reverse proxy 10.228.20.99 Version 1.17.9 on ubuntu bionic This is our config files nginx.conf user nginx; worker_processes auto; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } HTTP site config server { listen 80; server_name dev-kronos.mardom.com; location / { proxy_pass http://10.228.20.113; proxy_max_temp_file_size 0; } } HTTPS site config server { listen 443; server_name dev-kronos.mardom.com; location / { proxy_pass https://10.228.20.113; proxy_ssl_certificate /etc/nginx/certificados/dev-kronoscerts/cert.crt; proxy_ssl_certificate_key /etc/nginx/certificados/dev-kronoscerts/key.rsa; } } Could you help us please Regards Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Tue Apr 7 21:51:52 2020 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 7 Apr 2020 23:51:52 +0200 Subject: Configure nginx a reverse proxy https for IIS backend In-Reply-To: References: Message-ID: <6629ba12-cee0-83e2-cf77-1bb54c99d7e4@none.at> Hi. On 07.04.20 03:52, Johan Gabriel Medina Capois wrote: > Hello everyone > > We are noob on nginx and we are trying to configure a site that is in Windows IIS, we could configure the site with http:// but with https:// we can?t What's in the nginx and IIS error log? > IIS Backend server > > 10.228.20.113 application running on port 80 and 443 > > Nginx reverse proxy > > 10.228.20.99 > > Version 1.17.9 on ubuntu bionic > > This is our config files > > nginx.conf [snipp] > HTTP site config [snipp] > HTTPS site config > > server { > > listen 443; This does not looks to a proper TLS/SSL setup. 
http://nginx.org/en/docs/http/configuring_https_servers.html > server_name dev-kronos.mardom.com; > > location / { > > proxy_pass https://10.228.20.113; > > proxy_ssl_certificate /etc/nginx/certificados/dev-kronoscerts/cert.crt; > > proxy_ssl_certificate_key /etc/nginx/certificados/dev-kronoscerts/key.rsa; > > } > > } > > Could you help us please > > Regards > > Johan Medina > Administrador de Sistemas e Infraestructura Logo > Departamento: *TECNOLOGIA* > Central Tel: 809-539-600 *Ext: 8139* > Flota: *(809) 974-4954* > Directo: *809 974-4954* > Email: *jmedina at mardom.com* > Web:*www.mardom.com * > Facebook icon Instagram icon Linkedin icon Youtube icon > Banner > Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Wed Apr 8 05:15:05 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Wed, 08 Apr 2020 01:15:05 -0400 Subject: Empty file "off" under /usr/local/nginx/ Message-ID: <8afdf8a035774c52de133479b6489485.NginxMailingListEnglish@forum.nginx.org> Hello, I found an empty file "off" under /usr/local/nginx/, then I deleted it, this empty file be automatically recreated, why? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287577,287577#msg-287577 From francis at daoine.org Wed Apr 8 05:58:41 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 8 Apr 2020 06:58:41 +0100 Subject: Empty file "off" under /usr/local/nginx/ In-Reply-To: <8afdf8a035774c52de133479b6489485.NginxMailingListEnglish@forum.nginx.org> References: <8afdf8a035774c52de133479b6489485.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200408055841.GL20939@daoine.org> On Wed, Apr 08, 2020 at 01:15:05AM -0400, q1548 wrote: Hi there, > I found an empty file "off" under /usr/local/nginx/, then I deleted it, this > empty file be automatically recreated, why? Somewhere in your config, you set "off" as the name of the file to write, probably thinking that you are disabling the facility instead. Perhaps "error_log off;" is used? nginx -T | grep off may show it; then repeating "nginx -T" and finding the context should show you which file/line is involved. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Apr 8 06:47:45 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Wed, 08 Apr 2020 02:47:45 -0400 Subject: Empty file "off" under /usr/local/nginx/ In-Reply-To: <20200408055841.GL20939@daoine.org> References: <20200408055841.GL20939@daoine.org> Message-ID: Thank you, Francis. Oh,.... yes, you are right, thank you so much. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287577,287579#msg-287579 From francis at daoine.org Wed Apr 8 07:14:50 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 8 Apr 2020 08:14:50 +0100 Subject: auth_request with vhost conf files In-Reply-To: References: <361572ac7d6cb78671ad2da2b439bb7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200408071450.GM20939@daoine.org> On Mon, Apr 06, 2020 at 08:35:58PM -0400, lsces wrote: Hi there, > The question now is do I have the right setup for proxy_pass > > do need the > resolver 8.8.8.8; > proxy_pass https://indiastudycircle.org/auth/auth.php?content_id=$query; > > but is there another way of getting it to use a local link to the vhost > defined server? I'm not quite sure where "the thing that handles the /auth/auth.php request" is running. 
"proxy_pass" is for "something other than this server{} block", so if this "local link" is effectively remote, then proxy_pass is probably good to use. If you control the IP address of the proxy_pass'ed server, you could define an "upstream" of that name, with the suitable "server" address; or you could use the IP address directly here, and then use "proxy_ssl_name" and/or "proxy_set_header" and friends, to ensure validation work as it should. Good luck with it, f -- Francis Daly francis at daoine.org From oleg at mamontov.net Wed Apr 8 07:34:45 2020 From: oleg at mamontov.net (Oleg A. Mamontov) Date: Wed, 8 Apr 2020 10:34:45 +0300 Subject: Empty file "off" under /usr/local/nginx/ In-Reply-To: <8afdf8a035774c52de133479b6489485.NginxMailingListEnglish@forum.nginx.org> References: <8afdf8a035774c52de133479b6489485.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200408073445.5cn2i3jgh3er7hd5@xenon.mamontov.net> On Wed, Apr 08, 2020 at 01:15:05AM -0400, q1548 wrote: >Hello, > >I found an empty file "off" under /usr/local/nginx/, then I deleted it, this >empty file be automatically recreated, why? Sounds like there is 'error_log off;' somewhere in your configuration. >Thanks. > >Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287577,287577#msg-287577 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -- Cheers, Oleg A. Mamontov mailto: oleg at mamontov.net skype: lonerr11 cell: +7 (903) 798-1352 From nginx-forum at forum.nginx.org Wed Apr 8 08:46:06 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Wed, 08 Apr 2020 04:46:06 -0400 Subject: Empty file "off" under /usr/local/nginx/ In-Reply-To: <20200408073445.5cn2i3jgh3er7hd5@xenon.mamontov.net> References: <20200408073445.5cn2i3jgh3er7hd5@xenon.mamontov.net> Message-ID: Thanks, Oleg. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287577,287582#msg-287582 From nginx-forum at forum.nginx.org Wed Apr 8 09:12:59 2020 From: nginx-forum at forum.nginx.org (lsces) Date: Wed, 08 Apr 2020 05:12:59 -0400 Subject: auth_request with vhost conf files In-Reply-To: <20200408071450.GM20939@daoine.org> References: <20200408071450.GM20939@daoine.org> Message-ID: <3c3e6b1cfaeda6353be3d29320d1a11c.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: > > do need the > > resolver 8.8.8.8; > > proxy_pass > https://indiastudycircle.org/auth/auth.php?content_id=$query; > > > > but is there another way of getting it to use a local link to the > vhost > > defined server? > > I'm not quite sure where "the thing that handles the /auth/auth.php > request" is running. "proxy_pass" is for "something other than this > server{} block", so if this "local link" is effectively remote, then > proxy_pass is probably good to use. This is where I am struggling a bit ;) and is probably the real question. The web side is handled by nginx, and the dynamic stuff by php-fpm, so I need 'auth' to run an instance of php-fpm ... or at least that is where I think I am ... except of cause auth is processing requests that would not normally use php at all. So perhaps all I need to do is simply run it like a php file? proxy_pass was working on the local test setups ... but using 'localhost' while the vhost system does not have a single 'localhost' ... I just need to use the right root. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287570,287583#msg-287583 From jmedina at mardom.com Wed Apr 8 12:51:26 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Wed, 8 Apr 2020 12:51:26 +0000 Subject: Configure nginx a reverse proxy https for IIS backend In-Reply-To: <6629ba12-cee0-83e2-cf77-1bb54c99d7e4@none.at> References: <6629ba12-cee0-83e2-cf77-1bb54c99d7e4@none.at> Message-ID: Thank for your interest We make it run well, the request parameter missing was ignore_invalid_headers off; Regards -----Original Message----- From: Aleksandar Lazic Sent: Tuesday, April 7, 2020 5:52 PM To: nginx at nginx.org; Johan Gabriel Medina Capois Subject: Re: Configure nginx a reverse proxy https for IIS backend Hi. On 07.04.20 03:52, Johan Gabriel Medina Capois wrote: > Hello everyone > > We are noob on nginx and we are trying to configure a site that is in > Windows IIS, we could configure the site with http:// but with > https:// we can?t What's in the nginx and IIS error log? > IIS Backend server > > 10.228.20.113 application running on port 80 and 443 > > Nginx reverse proxy > > 10.228.20.99 > > Version 1.17.9 on ubuntu bionic > > This is our config files > > nginx.conf [snipp] > HTTP site config [snipp] > HTTPS site config > > server { > > listen 443; This does not looks to a proper TLS/SSL setup. http://nginx.org/en/docs/http/configuring_https_servers.html > server_name dev-kronos.mardom.com; > > location / { > > proxy_pass https://10.228.20.113; > > proxy_ssl_certificate > /etc/nginx/certificados/dev-kronoscerts/cert.crt; > > proxy_ssl_certificate_key > /etc/nginx/certificados/dev-kronoscerts/key.rsa; > > } > > } > > Could you help us please > > Regards > > Johan Medina > Administrador de Sistemas e Infraestructura Logo > Departamento: *TECNOLOGIA* > Central Tel: 809-539-600 *Ext: 8139* > Flota: *(809) 974-4954* > Directo: *809 974-4954* > Email: *jmedina at mardom.com* > Web:*www.mardom.com * Facebook icon > Instagram icon > Linkedin icon > Youtube icon Banner Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] Sea amable con el medio ambiente: no imprima este correo a menos que sea completamente necesario. From francis at daoine.org Wed Apr 8 20:13:13 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 8 Apr 2020 21:13:13 +0100 Subject: auth_request with vhost conf files In-Reply-To: <3c3e6b1cfaeda6353be3d29320d1a11c.NginxMailingListEnglish@forum.nginx.org> References: <20200408071450.GM20939@daoine.org> <3c3e6b1cfaeda6353be3d29320d1a11c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200408201313.GN20939@daoine.org> On Wed, Apr 08, 2020 at 05:12:59AM -0400, lsces wrote: > Francis Daly Wrote: Hi there, > > I'm not quite sure where "the thing that handles the /auth/auth.php > > request" is running. "proxy_pass" is for "something other than this > > server{} block", so if this "local link" is effectively remote, then > > proxy_pass is probably good to use. > > This is where I am struggling a bit ;) and is probably the real question. 
> The web side is handled by nginx, and the dynamic stuff by php-fpm, so I > need 'auth' to run an instance of php-fpm I think you are wondering if you should "fastcgi_pass php-fpm-service" instead of "proxy_pass this-wb-service", and I suspect the answer is "yes". In nginx, you fastcgi_pass to a service and set some fastcgi_param values that your fastcgi server cares about. In the "common" case, that is based on the incoming request details and suitable variables are already populated. In this case, that may or may not happen, so you may need to set things like SCRIPT_FILENAME manually -- I have not tested to see what is needed. > proxy_pass was working on the local test setups ... but using > 'localhost' while the vhost system does not have a single 'localhost' ... I > just need to use the right root. I don't understand what you mean there. The config you showed previously had no "localhost" that I could see. Possibly it does not matter now. f -- Francis Daly francis at daoine.org From mahmood.nt at gmail.com Thu Apr 9 12:12:45 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Thu, 9 Apr 2020 16:42:45 +0430 Subject: Testing with number of connections Message-ID: Hi, I have compiled 1.14.2 from source and for some binary analysis, I want to measure the response time under multiple connections, e.g. 1000 tcp connections. I am talking about sbin/nginx file. I didn't find a clear document on that. Does anybody know? Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Thu Apr 9 18:29:42 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 9 Apr 2020 13:29:42 -0500 Subject: Testing with number of connections Message-ID: > I have compiled 1.14.2 from source and for some binary analysis, I want to > measure the response time under multiple connections, e.g. 1000 tcp > connections. I am talking about sbin/nginx file. > I didn't find a clear document on that. Does anybody know? ab (apache bench) siege httperf It's not hard to google 'web benchmark software'... Might want to consider upgrading, 1.14 branch is not a current version... From 7149144120 at txt.att.net Thu Apr 9 21:30:28 2020 From: 7149144120 at txt.att.net (7149144120 at txt.att.net) Date: Thu, 09 Apr 2020 21:30:28 -0000 Subject: Testing with number of connections In-Reply-To: CADa2P2WHYpPCAjYf4fexC4fs11TiHw06SJzucm14bP0_aoWQGA@mail.gmail.com Message-ID: Messages I'm going to report -----Original Message----- From: Sent: Thu, 9 Apr 2020 16:42:45 +0430 To: 7149144120 at txt.att.net Subject: Testing with number of connections >Hi, >I have compiled 1.14.2 from source and for some binary analysis, I want to >measure the response time under multiple connections, e.g. 1000 tcp >connections. I am talking ab ================================================================== This mobile text message is brought to you by AT&T From 7149144120 at txt.att.net Thu Apr 9 21:30:28 2020 From: 7149144120 at txt.att.net (7149144120 at txt.att.net) Date: Thu, 09 Apr 2020 21:30:28 -0000 Subject: Testing with number of connections In-Reply-To: CADa2P2WHYpPCAjYf4fexC4fs11TiHw06SJzucm14bP0_aoWQGA@mail.gmail.com Message-ID: Stop sending me savages -----Original Message----- From: Sent: Thu, 9 Apr 2020 16:42:45 +0430 To: 7149144120 at txt.att.net Subject: Testing with number of connections >Hi, >I have compiled 1.14.2 from source and for some binary analysis, I want to >measure the response time under multiple connections, e.g. 1000 tcp >connections. 
I am talking ab ================================================================== This mobile text message is brought to you by AT&T From 7149144120 at txt.att.net Thu Apr 9 21:31:06 2020 From: 7149144120 at txt.att.net (7149144120 at txt.att.net) Date: Thu, 09 Apr 2020 21:31:06 -0000 Subject: Testing with number of connections In-Reply-To: QlWL2200i0gesu301lWLiL@txt.att.net Message-ID: Stop -----Original Message----- From: Sent: Thu, 9 Apr 2020 21:30:30 +0000 (UTC) To: 7149144120 at txt.att.net Subject: RE: Testing with number of connections >Stop sending me savages > > -----Original Message----- > From: > Sent: Thu, 9 Apr 2020 16:42:45 +0430 > To: 7149144120 at txt.att.net > Subject: Testing with ================================================================== This mobile text message is brought to you by AT&T From nginx-forum at forum.nginx.org Fri Apr 10 09:44:59 2020 From: nginx-forum at forum.nginx.org (xav) Date: Fri, 10 Apr 2020 05:44:59 -0400 Subject: Variable interpolation in a regex. In-Reply-To: References: Message-ID: <6f80b1bd0977561bb1185d176b131582.NginxMailingListEnglish@forum.nginx.org> Hi Is it still true in 2020 ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,230507,287601#msg-287601 From nginx-forum at forum.nginx.org Fri Apr 10 09:55:40 2020 From: nginx-forum at forum.nginx.org (lsces) Date: Fri, 10 Apr 2020 05:55:40 -0400 Subject: auth_request with vhost conf files In-Reply-To: <20200408201313.GN20939@daoine.org> References: <20200408201313.GN20939@daoine.org> Message-ID: <2c9b1cf226dd8e7b97c6786d8cdd9f52.NginxMailingListEnglish@forum.nginx.org> Thanks Francis ... Your prods have pointed me in the right direction. My initial problem was not being able to include a parameter in the auth_request and that is where the examples brought up the proxy_pass 'solution' ... of cause what I was missing is that the request for the images are already independent requests, so there is no problem simply calling php-fpm directly. The 'localhost' question is a red herring as php-fpm simply uses the correct root while my proxy_pass setup was using the 'default' localhost settings ... I'm getting my head around the various twists and turns but finding examples that cross the various boundaries is difficult. I will run a crib sheet once I've tidied up what I do have, but my less than optimal setup is working on three sites currently. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287570,287602#msg-287602 From nginx-forum at forum.nginx.org Fri Apr 10 14:11:06 2020 From: nginx-forum at forum.nginx.org (patademahesh) Date: Fri, 10 Apr 2020 10:11:06 -0400 Subject: massive deleted open files in proxy cache In-Reply-To: <28D26B9C-7FA8-4B31-9B38-4DCC650F3878@me.com> References: <28D26B9C-7FA8-4B31-9B38-4DCC650F3878@me.com> Message-ID: <233b9292a3d65a0c0c2d39c796fb82bf.NginxMailingListEnglish@forum.nginx.org> We are facing the same issue. File gets deleted but it holds the FD and disk space is never released until we restart the nginx server. Most of the files we is from proxy_temp_path directory. This is causing filesystem to go out of space. We tried this with tmpfs and normal disk. 
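For reference, the quickest way we found to spot the leaked descriptors (the PID below is only an example of one worker process) was sudo lsof -p 12345 +L1, which lists files that are open but already unlinked; ls -l /proc/12345/fd shows the same entries marked as (deleted).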
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272519,287604#msg-287604 From nginx-forum at forum.nginx.org Fri Apr 10 15:26:17 2020 From: nginx-forum at forum.nginx.org (patademahesh) Date: Fri, 10 Apr 2020 11:26:17 -0400 Subject: Too many deleted open files in proxy_temp_path Message-ID: <0435cac743554482b0f48fe0d0cc2d2a.NginxMailingListEnglish@forum.nginx.org> Hi everyone, We are using nginx as reverse proxy to cache static content for a moodle LMS site. The caching part is working fine but we started facing the cache path disk full issues. When we checked using du, it was reporting very low used space.Then we checked lsof output we found too many deleted file entries. We realized, that file gets deleted but it holds the FD and disk space is never released until we restart the nginx server. Most of the files were from proxy_temp_path location. We tried this with tmpfs and normal disk but the end result was same. # nginx -v nginx version: nginx/1.10.3 (Ubuntu) # lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 16.04.6 LTS Release: 16.04 Codename: xenial =================== df output ================= # df -h /mnt/ Filesystem Size Used Avail Use% Mounted on /dev/sdb1 252G 8.7G 231G 4% /mnt # du -sh /mnt/nginx/ 4.0K /mnt/nginx/ ================== nginx.conf ================== user www-data; worker_processes auto; pid /run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 0; types_hash_max_size 2048; client_max_body_size 2048M; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } ==================== SSL site config ===================== log_format rt_cache '$remote_addr - $upstream_cache_status [$time_local] ' '"$request" $status "$sent_http_content_type" $sent_http_content_encoding $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; proxy_cache_path /var/cache/tmpfs levels=1:2 keys_zone=my_cache:1000m max_size=18g inactive=1d use_temp_path=off;; proxy_temp_path /mnt/nginx; server { listen 443; server_name xxx.yyy.com; ssl_certificate /etc/apache2/ssl/xxx.yyy.com-nginx.crt; ssl_certificate_key /etc/apache2/ssl/xxx.yyy.com.key; ssl on; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/xxx.yyy.com-access.log rt_cache; #access_log off; proxy_buffer_size 512k; proxy_buffers 4 512k; proxy_headers_hash_max_size 1024; location /nginx-status { stub_status on; allow all; } location ~* \.(?:ico|jpg|css|png|js|swf|woff|eot|svg|ttf|html|gif|jpeg)$ { aio threads; proxy_cache my_cache; add_header X-Proxy-Cache $upstream_cache_status; expires 7d; proxy_ignore_headers Cache-Control; 
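        # comment added for clarity: pass the original host name and client
        # address details through to the Moodle backend on localhost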
proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://localhost; proxy_set_header Connection ""; proxy_connect_timeout 600s; proxy_send_timeout 600s; proxy_read_timeout 600s; send_timeout 600s; proxy_http_version 1.1; } location / { #proxy_buffering off; proxy_ignore_headers Set-Cookie; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://localhost; proxy_set_header Connection ""; proxy_connect_timeout 600s; proxy_send_timeout 600s; proxy_read_timeout 600s; send_timeout 600s; proxy_http_version 1.1; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287605,287605#msg-287605 From themadbeaker at gmail.com Sat Apr 11 03:18:27 2020 From: themadbeaker at gmail.com (J.R.) Date: Fri, 10 Apr 2020 22:18:27 -0500 Subject: Too many deleted open files in proxy_temp_path Message-ID: > # nginx -v > nginx version: nginx/1.10.3 (Ubuntu) The last update for that version was over 3 years ago... Try updating to 1.17.9... From nginx-forum at forum.nginx.org Sat Apr 11 05:01:02 2020 From: nginx-forum at forum.nginx.org (patademahesh) Date: Sat, 11 Apr 2020 01:01:02 -0400 Subject: Too many deleted open files in proxy_temp_path In-Reply-To: References: Message-ID: <68c9a98069ac13f1a7b558ed109af4a9.NginxMailingListEnglish@forum.nginx.org> Oh.. yes. i thought ubuntu gives latest package. Okay i'll try that and update here. Thank You for pointing. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287605,287607#msg-287607 From mat999 at gmail.com Sat Apr 11 14:14:44 2020 From: mat999 at gmail.com (Mathew Heard) Date: Sun, 12 Apr 2020 00:14:44 +1000 Subject: All I want for easter is a working module Message-ID: Could anyone help me out with the problem here? ngx_module_t ngx_http_slow_module = { NGX_MODULE_V1, &ngx_http_slow_module_ctx, /* module context */ ngx_http_slow_commands, /* module directives */ NGX_HTTP_MODULE, /* module type */ NULL, /* init master */ NULL, /* init module */ ngx_http_slow_init_worker, /* init process */ NULL, /* init thread */ NULL, /* exit thread */ NULL, /* exit process */ NULL, /* exit master */ NGX_MODULE_V1_PADDING }; [...] void ngx_http_slow_handler(ngx_event_t *ev){ ngx_log_error(NGX_LOG_ERR, ev->log, 0, "run timer"); } static ngx_event_t ngx_http_slow_timer; static ngx_connection_t dumb; static ngx_int_t ngx_http_slow_init_worker(ngx_cycle_t *cycle){ if (ngx_process != NGX_PROCESS_WORKER){ return NGX_OK; } ngx_log_error(NGX_LOG_ERR, ngx_cycle->log, 0, "start timer"); memset(&ngx_http_slow_timer, 0, sizeof(ngx_http_slow_timer)); ngx_http_slow_timer.log = ngx_cycle->log; ngx_http_slow_timer.handler = ngx_http_slow_handler; ngx_http_slow_timer.data = &dumb; dumb.fd = (ngx_socket_t) -1; ngx_add_timer(&ngx_http_slow_timer, (ngx_msec_t)NGX_HTTP_SLOW_INTERVAL); return NGX_OK; } "Start timer" is output in the logs, but not "run timer". And I can't see why. Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Sat Apr 11 15:32:47 2020 From: themadbeaker at gmail.com (J.R.) Date: Sat, 11 Apr 2020 10:32:47 -0500 Subject: All I want for easter is a working module Message-ID: I've never heard of 'ngx_http_slow_module'... Is there a github page or similar with the source code? 
It's going to take more than just selective snippets if you really want someone to help debug it... From proxy.trash at gmail.com Sat Apr 11 23:21:19 2020 From: proxy.trash at gmail.com (Stefan Christ) Date: Sat, 11 Apr 2020 23:21:19 +0000 Subject: Global basic auth for multiple servers Message-ID: Hello, today I tried to install and setup nginx and it worked great so far. I was able to add some servers (server sections) for each of my subdomains and forward them to the specific web interface in my network (reverse proxy). I wanted to add some extra security so I used basic auth in the http section and turned it off for one single subdomain. Now I get asked to auth for each subdomain. Is it possible to configure nginx so that I only have to auth on one subdomain and have access to all others subdomains without being forced to auth again? Still so happy how easy the setup was! Have a nice day, Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at mheard.com Sun Apr 12 00:58:54 2020 From: me at mheard.com (Mathew Heard) Date: Sun, 12 Apr 2020 10:58:54 +1000 Subject: All I want for easter is a working module In-Reply-To: References: Message-ID: J.R, You won't find it publicly as all it is a testing module with the goal of establishing a working timer. If you want the source code for it you need only ask. Here you go. /* * Copyright (C) Mathew Heard. */ #include #include #include #include #define NGX_HTTP_SLOW_INTERVAL 7000 typedef struct { ngx_flag_t enable; } ngx_http_slow_conf_t; static void *ngx_http_slow_create_conf(ngx_conf_t *cycle); static char *ngx_http_slow_init_conf(ngx_conf_t *cycle, void *conf); static char *ngx_http_slow_enable(ngx_conf_t *cf, void *post, void *data); static ngx_conf_post_t ngx_http_slow_enable_post = { ngx_http_slow_enable }; static ngx_command_t ngx_http_slow_commands[] = { { ngx_string("slow"), NGX_HTTP_MAIN_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, NGX_HTTP_MAIN_CONF_OFFSET, offsetof(ngx_http_slow_conf_t, enable), &ngx_http_slow_enable_post }, ngx_null_command }; ngx_http_module_t ngx_http_slow_module_ctx = { NULL, /* preconfiguration */ NULL, /* postconfiguration */ ngx_http_slow_create_conf, /* create main configuration */ ngx_http_slow_init_conf, /* init main configuration */ NULL, /* create server configuration */ NULL, /* merge server configuration */ NULL, /* create location configuration */ NULL /* merge location configuration */ }; static ngx_int_t ngx_http_slow_init_worker(ngx_cycle_t *cycle); ngx_module_t ngx_http_slow_module = { NGX_MODULE_V1, &ngx_http_slow_module_ctx, /* module context */ ngx_http_slow_commands, /* module directives */ NGX_HTTP_MODULE, /* module type */ NULL, /* init master */ NULL, /* init module */ ngx_http_slow_init_worker, /* init process */ NULL, /* init thread */ NULL, /* exit thread */ NULL, /* exit process */ NULL, /* exit master */ NGX_MODULE_V1_PADDING }; void ngx_http_slow_handler(ngx_event_t *ev){ ngx_log_error(NGX_LOG_ERR, ev->log, 0, "run timer"); // set up the next tick in n seconds if (ngx_exiting) { return; } ngx_add_timer(ev, (ngx_msec_t)NGX_HTTP_SLOW_INTERVAL); } static ngx_event_t ngx_http_slow_timer; static ngx_connection_t dumb; static ngx_int_t ngx_http_slow_init_worker(ngx_cycle_t *cycle){ if (ngx_process != NGX_PROCESS_WORKER){ return NGX_OK; } ngx_log_error(NGX_LOG_ERR, ngx_cycle->log, 0, "start timer"); memset(&ngx_http_slow_timer, 0, sizeof(ngx_http_slow_timer)); ngx_http_slow_timer.log = ngx_cycle->log; ngx_http_slow_timer.handler = ngx_http_slow_handler; 
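    /* comment added for clarity: the debug logging in ngx_add_timer() reads
       ev->data as an ngx_connection_t via ngx_event_ident(), so the event is
       pointed at a dummy connection whose fd is set to -1 below */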
ngx_http_slow_timer.data = &dumb; dumb.fd = (ngx_socket_t) -1; ngx_add_timer(&ngx_http_slow_timer, (ngx_msec_t)NGX_HTTP_SLOW_INTERVAL); return NGX_OK; } static void * ngx_http_slow_create_conf(ngx_conf_t *cycle) { ngx_http_slow_conf_t *fcf; fcf = ngx_pcalloc(cycle->pool, sizeof(ngx_http_slow_conf_t)); if (fcf == NULL) { return NULL; } fcf->enable = NGX_CONF_UNSET; return fcf; } static char * ngx_http_slow_init_conf(ngx_conf_t *cycle, void *conf) { ngx_http_slow_conf_t *fcf = conf; ngx_conf_init_value(fcf->enable, 0); return NGX_CONF_OK; } static char * ngx_http_slow_enable(ngx_conf_t *cf, void *post, void *data) { ngx_flag_t *fp = data; if (*fp == 0) { return NGX_CONF_OK; } return NGX_CONF_OK; } On Sun, 12 Apr 2020 at 01:33, J.R. wrote: > I've never heard of 'ngx_http_slow_module'... Is there a github page > or similar with the source code? > > It's going to take more than just selective snippets if you really > want someone to help debug it... > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Sun Apr 12 02:54:46 2020 From: alex at samad.com.au (Alex Samad) Date: Sun, 12 Apr 2020 12:54:46 +1000 Subject: nginx and atlassian crowd Message-ID: Hi Whats considered the best way to auth again crowd. I see some old module - 6-7 year untouched https://github.com/kare/ngx_http_auth_crowd_module trying this one but can't compile it also noted crowd does openid https://www.nginx.com/blog/authenticating-users-existing-applications-openid-connect-nginx-plus/ but .. what are others doing ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bee.lists at gmail.com Sun Apr 12 04:55:10 2020 From: bee.lists at gmail.com (Bee.Lists) Date: Sun, 12 Apr 2020 00:55:10 -0400 Subject: All I want for easter is a working module In-Reply-To: References: Message-ID: <07642E3B-A32B-47AB-A735-878019971CB5@gmail.com> I think he?s saying that there?s more to this than posting code and saying ?no worky?. State what you want, what you have, what you?ve tried, and what you think might be going wrong. Simply posting your code and expecting other troubleshooters to solve your issue, is the wrong approach. HE?nor anybody...doesn?t need to ask for your private code to answer your questions. At the best of times, the world is challenging. Currently, it?s not so good. You?ll attract more bees with honey. > On Apr 11, 2020, at 8:58 PM, Mathew Heard wrote: > > J.R, > > You won't find it publicly as all it is a testing module with the goal of establishing a working timer. If you want the source code for it you need only ask. Here you go. Cheers, Bee From mahmood.nt at gmail.com Sun Apr 12 06:21:03 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Sun, 12 Apr 2020 10:51:03 +0430 Subject: Using perf command for nginx process Message-ID: Hi, I wonder if anybody here have tried perf command with nginx service. 
While the service is up, I ran a wrk command from another computer as below ./wrk -t1 -c100 -d200s http://10.1.1.130 On the server, the nginx process is 100% and then I attached perf command like $ sudo perf record -e instructions:u --branch-filter any,u -o perf.data -p 32594 ^C[ perf record: Woken up 465 times to write data ] [ perf record: Captured and wrote 117.326 MB perf.data (284800 samples) ] Now, when I check the buildid-list, I don't see any sign for the nginx binary $ sudo perf buildid-list -i perf.data 3a2171019937a2070663f3b6419330223bd64e96 [kernel.kallsyms] b5381a457906d279073822a5ceb24c4bfef94ddb /lib/x86_64-linux-gnu/libc-2.23.so ce17e023542265fc11d9bc8f534bb4f070493d30 /lib/x86_64-linux-gnu/libpthread-2.23.so 631c69f65c00d3e6d3ee6108202e9c323a21ce28 [vdso] Any idea to more investigation? Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at mheard.com Sun Apr 12 06:32:07 2020 From: me at mheard.com (Mathew Heard) Date: Sun, 12 Apr 2020 16:32:07 +1000 Subject: All I want for easter is a working module In-Reply-To: <07642E3B-A32B-47AB-A735-878019971CB5@gmail.com> References: <07642E3B-A32B-47AB-A735-878019971CB5@gmail.com> Message-ID: Gee, glad I don't need to ask for help often. Usually providing a minimal working example to replicate a problem and a concise description of the problem is a good thing. I'm a member of quite a few communities that I provide support for and this would have been an easy case of "your usage of that method looks correct to me". Anyway it turns out that this is undocumented quirk. The default config ( http://nginx.org/en/docs/dev/development_guide.html ) provided for building a module builds a CORE type module which is initialized before the event system (add has no return value, and debug logs the correct output regardless). The documentation should probably mention that the ngx_add_timer is not available by default, also it would probably also be a good idea for the ngx_add_timer example to actually include a call to ngx_add_timer. Anyway enjoy Easter everyone, it sounds like you all need to destress. On Sun, 12 Apr 2020 at 14:55, Bee.Lists wrote: > I think he?s saying that there?s more to this than posting code and saying > ?no worky?. > > State what you want, what you have, what you?ve tried, and what you think > might be going wrong. Simply posting your code and expecting other > troubleshooters to solve your issue, is the wrong approach. > > HE?nor anybody...doesn?t need to ask for your private code to answer your > questions. > > At the best of times, the world is challenging. Currently, it?s not so > good. You?ll attract more bees with honey. > > > > On Apr 11, 2020, at 8:58 PM, Mathew Heard wrote: > > > > J.R, > > > > You won't find it publicly as all it is a testing module with the goal > of establishing a working timer. If you want the source code for it you > need only ask. Here you go. > > > > Cheers, Bee > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Sun Apr 12 14:49:38 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 12 Apr 2020 17:49:38 +0300 Subject: Too many deleted open files in proxy_temp_path In-Reply-To: <0435cac743554482b0f48fe0d0cc2d2a.NginxMailingListEnglish@forum.nginx.org> References: <0435cac743554482b0f48fe0d0cc2d2a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200412144938.GC20357@mdounin.ru> Hello! On Fri, Apr 10, 2020 at 11:26:17AM -0400, patademahesh wrote: > We are using nginx as reverse proxy to cache static content for a moodle LMS > site. The caching part is working fine but we started facing the cache path > disk full issues. When we checked using du, it was reporting very low used > space.Then we checked lsof output we found too many deleted file entries. We > realized, that file gets deleted but it holds the FD and disk space is never > released until we restart the nginx server. Most of the files were from > proxy_temp_path location. We tried this with tmpfs and normal disk but the > end result was same. > > # nginx -v > nginx version: nginx/1.10.3 (Ubuntu) Note that when writing temporary files when proxying (without caching, but there is no caching configured in "location /"), it is quite normal that temporary files are unlinked (deleted) right after creation and cannot be seen by "du". This way temporary files are automatically removed by the system as long as a file is closed - or if nginx is killed or crashes. If you think that temporary files are not removed from disk even if corresponding client connections are closed - this might indicate a socket leak. Usually, socket leaks can be seen by other symptoms as well - such as connections in the CLOSED state as shown by "netstat -an", or "open socket ... left in connection ..." alerts during graceful shutdown of worker processes. Given that you are using an ancient nginx version, socket leaks might be the case - there are at least some fixed since 1.10.3, mostly in HTTP/2. On the other hand, if deleted files is the only symptom you are seeing, it might simply indicate that corresponding client connection is still open and the client is slowly downloading the response in question. If that's true, you may want to either use a filesystem with more space for temporary files, or tune proxy buffering, see here: http://nginx.org/r/proxy_max_temp_file_size -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Apr 12 14:55:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 12 Apr 2020 17:55:59 +0300 Subject: Global basic auth for multiple servers In-Reply-To: References: Message-ID: <20200412145559.GD20357@mdounin.ru> Hello! On Sat, Apr 11, 2020 at 11:21:19PM +0000, Stefan Christ wrote: > today I tried to install and setup nginx and it worked great so > far. I was able to add some servers (server sections) for each > of my subdomains and forward them to the specific web interface > in my network (reverse proxy). > > I wanted to add some extra security so I used basic auth in the > http section and turned it off for one single subdomain. Now I > get asked to auth for each subdomain. Is it possible to > configure nginx so that I only have to auth on one subdomain and > have access to all others subdomains without being forced to > auth again? No. Unfortunately, Basic HTTP authentication only provides implicit authentication scope, and automatic reuse of credentials is not possible across different [sub]domains. Further details can be found in RFC 7617, "2.2. 
Reusing Credentials", here: https://tools.ietf.org/html/rfc7617#section-2.2 -- Maxim Dounin http://mdounin.ru/ From proxy.trash at gmail.com Sun Apr 12 15:23:27 2020 From: proxy.trash at gmail.com (proxy.trash at gmail.com) Date: Sun, 12 Apr 2020 17:23:27 +0200 Subject: AW: Global basic auth for multiple servers In-Reply-To: <20200412145559.GD20357@mdounin.ru> References: <20200412145559.GD20357@mdounin.ru> Message-ID: <028501d610de$5110fbe0$f332f3a0$@googlemail.com> Hi Maxim, thank you for the information! -----Urspr?ngliche Nachricht----- Von: nginx Im Auftrag von Maxim Dounin Gesendet: Sonntag, 12. April 2020 16:56 An: nginx at nginx.org Betreff: Re: Global basic auth for multiple servers Hello! On Sat, Apr 11, 2020 at 11:21:19PM +0000, Stefan Christ wrote: > today I tried to install and setup nginx and it worked great so far. I > was able to add some servers (server sections) for each of my > subdomains and forward them to the specific web interface in my > network (reverse proxy). > > I wanted to add some extra security so I used basic auth in the http > section and turned it off for one single subdomain. Now I get asked to > auth for each subdomain. Is it possible to configure nginx so that I > only have to auth on one subdomain and have access to all others > subdomains without being forced to auth again? No. Unfortunately, Basic HTTP authentication only provides implicit authentication scope, and automatic reuse of credentials is not possible across different [sub]domains. Further details can be found in RFC 7617, "2.2. Reusing Credentials", here: https://tools.ietf.org/html/rfc7617#section-2.2 -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Apr 13 03:39:38 2020 From: nginx-forum at forum.nginx.org (renoirb) Date: Sun, 12 Apr 2020 23:39:38 -0400 Subject: Too many deleted open files in proxy_temp_path In-Reply-To: <0435cac743554482b0f48fe0d0cc2d2a.NginxMailingListEnglish@forum.nginx.org> References: <0435cac743554482b0f48fe0d0cc2d2a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <69b644ffdc9107f781b52c9349dd711a.NginxMailingListEnglish@forum.nginx.org> [[ Written a few days ago, but email bounced ]] [[ Had to re-register to re-post. ]] [[ sorry if this is repeating somebody else ]] Hi all, First time poster here, doing it on a lazy morning. Have you tried to halt and start the service? Linux filesystem doesn't release deleted files when the process still has process and pointer references to them (i.e. most likely the reason for this inquiry). Probably a bug. Stop/start the service and all child threads could fix your filesystem issue. If you already have more than one Linux servers, each with distinct IP addresses, and your DNS with A record to each of them (? la round-robin), with no HTTP client "stickyness" affinity to one node (e.g. cookie or way to tell that one particular Browser must only go to node0). You won't create downtime. Hope this helps. Stay safe. Renoir-- Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287605,287623#msg-287623 From mdounin at mdounin.ru Tue Apr 14 14:34:15 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Apr 2020 17:34:15 +0300 Subject: nginx-1.17.10 Message-ID: <20200414143415.GK20357@mdounin.ru> Changes with nginx 1.17.10 14 Apr 2020 *) Feature: the "auth_delay" directive. 
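A minimal example of its use (the location and file names here are only
placeholders):

    location /admin/ {
        auth_basic           "restricted";
        auth_basic_user_file conf/htpasswd;
        auth_delay           3s;
    }

The delay is applied to unauthorized requests before the 401 response is
returned.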
-- Maxim Dounin http://nginx.org/ From paul at stormy.ca Tue Apr 14 20:38:51 2020 From: paul at stormy.ca (Paul) Date: Tue, 14 Apr 2020 16:38:51 -0400 Subject: Rewrite -- failure Message-ID: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> New to this list (lurked for a couple of weeks), so hope you'll bear with me. I'm trying to get a charity's volunteers set up to work from home. Using nginx 1.14.0 (latest on Ubuntu 14.04LTS -- all up to date; #nginx -V below) as a front end for a number of servers using Apache 2.4. My problem is that I need to split serv1.example.com to two physical servers (both fully functional on LAN). The first (192.168.aaa.bbb) serving static https works fine. But I cannot "rewrite" (redirect, re-proxy?) to the second server (192.168.xxx.yyy, Perl cgi) where the request comes in as https://serv1.example.com/foo and I need to get rid of "foo" "rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent;" (tried permanent, break, last and no flags) is valid as a PCRE regex, but logs give me a 404 trying to find "foo" which has nothing to do with the cgi root: [14/Apr/2020:16:14:19 -0400] "GET /foo HTTP/1.1" 404 2471 What I am trying for is "GET / HTTP/1.1" 200 Here's my server config. Any all assistance would be greatly appreciated -- many thanks and stay well -- Paul server { listen 443 ssl; # [4 lines managed by Certbot, working perfectly] server_name serv1.example.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/mysite-error_log; proxy_buffering off; location / { # static server, html, works perfectly, proxy_pass http://192.168.aaa.bbb; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location /foo { # big db server, perfect on LAN, PERL, cgi # rewrite ^/foo(.*) /$1 break; #tried permanent, break, last and no flags # rewrite ^/foo/(.*)$ /$1 last; #tried permanent, break, last and no flags rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent; #tried permanent, break, last and no flags proxy_pass http://192.168.xxx.yyy:8084; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } server { if ($host = serv1.example.com) { return 301 https://$host$request_uri; } # managed by Certbot # automatically sets to https if someone comes in on http listen 80; listen 8084; server_name serv1.example.com; rewrite ^ https://$host$request_uri? permanent; } _________ nginx -V nginx version: nginx/1.14.0 (Ubuntu) built with OpenSSL 1.1.1 11 Sep 2018 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-GkiujU/nginx-1.14.0=. 
-fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From lists at lazygranch.com Tue Apr 14 22:03:32 2020 From: lists at lazygranch.com (lists) Date: Tue, 14 Apr 2020 15:03:32 -0700 Subject: Rewrite -- failure In-Reply-To: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> Message-ID: Wouldn't it be less work to set up subdomains and handle this with DNS? I for one will never qualify for this T shirt. https://store.xkcd.com/products/i-know-regular-expressions ? Original Message ? From: paul at stormy.ca Sent: April 14, 2020 1:39 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Rewrite -- failure New to this list (lurked for a couple of weeks), so hope you'll bear with me. I'm trying to get a charity's volunteers set up to work from home. Using nginx 1.14.0 (latest on Ubuntu 14.04LTS -- all up to date; #nginx -V below) as a front end for a number of servers using Apache 2.4. My problem is that I need to split serv1.example.com to two physical servers (both fully functional on LAN). The first (192.168.aaa.bbb) serving static https works fine. But I cannot "rewrite" (redirect, re-proxy?) to the second server (192.168.xxx.yyy, Perl cgi) where the request comes in as https://serv1.example.com/foo and I need to get rid of "foo" "rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent;" (tried permanent, break, last and no flags) is valid as a PCRE regex, but logs give me a 404 trying to find "foo" which has nothing to do with the cgi root: [14/Apr/2020:16:14:19 -0400] "GET /foo HTTP/1.1" 404 2471 What I am trying for is "GET / HTTP/1.1" 200 Here's my server config.? Any all assistance would be greatly appreciated -- many thanks and stay well -- Paul server { ???? listen 443 ssl; ???? # [4 lines managed by Certbot, working perfectly] ???? server_name serv1.example.com; ???? access_log /var/log/nginx/access.log; ???? error_log? /var/log/nginx/mysite-error_log; ???? proxy_buffering off; ???? location / {????????????? # static server, html, works perfectly, ???????? proxy_pass http://192.168.aaa.bbb; ???????? proxy_set_header Host $host; ???????? proxy_http_version 1.1; ???????? proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; ??? } ???? location /foo {?????????? 
# big db server, perfect on LAN, PERL, cgi ???????? # rewrite ^/foo(.*) /$1 break;?? #tried permanent, break, last and no flags ???????? # rewrite ^/foo/(.*)$ /$1 last;?? #tried permanent, break, last and no flags ???????? rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent; #tried permanent, break, last and no flags ???????? proxy_pass http://192.168.xxx.yyy:8084; ???????? proxy_set_header Host $host; ???????? proxy_http_version 1.1; ???????? proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; ??? } } server { ???? if ($host = serv1.example.com) { ???????? return 301 https://$host$request_uri; ???? } # managed by Certbot # automatically sets to https if someone comes in on http ???? listen 80; ???? listen 8084; ???? server_name serv1.example.com; ???? rewrite???? ^?? https://$host$request_uri? permanent; } _________ nginx -V nginx version: nginx/1.14.0 (Ubuntu) built with OpenSSL 1.1.1? 11 Sep 2018 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-GkiujU/nginx-1.14.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module ?? \\\||// ??? (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Apr 14 22:19:28 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Apr 2020 23:19:28 +0100 Subject: auth_request with vhost conf files In-Reply-To: <2c9b1cf226dd8e7b97c6786d8cdd9f52.NginxMailingListEnglish@forum.nginx.org> References: <20200408201313.GN20939@daoine.org> <2c9b1cf226dd8e7b97c6786d8cdd9f52.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200414221928.GP20939@daoine.org> On Fri, Apr 10, 2020 at 05:55:40AM -0400, lsces wrote: Hi there, > Your prods have pointed me in the right direction. My initial problem was > not being able to include a parameter in the auth_request... > ...setup is working on three sites currently. Great that you got a solution that does what you need it to. 
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Apr 14 22:39:39 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Apr 2020 23:39:39 +0100 Subject: Rewrite -- failure In-Reply-To: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> References: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> Message-ID: <20200414223939.GQ20939@daoine.org> On Tue, Apr 14, 2020 at 04:38:51PM -0400, Paul wrote: Hi there, > My problem is that I need to split serv1.example.com to two physical servers > (both fully functional on LAN). The first (192.168.aaa.bbb) serving static > https works fine. But I cannot "rewrite" (redirect, re-proxy?) to the second > server (192.168.xxx.yyy, Perl cgi) where the request comes in as > https://serv1.example.com/foo and I need to get rid of "foo" http://nginx.org/r/proxy_pass -- proxy_pass can (probably) do what you want, without rewrites. The documentation phrase to look for is "specified with a URI". > "rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent;" (tried > permanent, break, last and no flags) "rewrite" (http://nginx.org/r/rewrite) works on the "/foo" part, not the "https://" or the "serv1.example.com" parts of the request, which is why that won't match your requests. > location /foo { # big db server, perfect on LAN, PERL, cgi > # rewrite ^/foo(.*) /$1 break; #tried permanent, break, last and > no flags That one looks to me to be most likely to work; but you probably need to be very clear about what you mean when you think "it doesn't work". In general - show the request, show the response, and describe the response that you want instead. > # rewrite ^/foo/(.*)$ /$1 last; #tried permanent, break, last and > no flags > rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent; #tried > permanent, break, last and no flags > proxy_pass http://192.168.xxx.yyy:8084; > proxy_set_header Host $host; > proxy_http_version 1.1; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > } I suggest trying location /foo/ { proxy_pass http://192.168.xxx.yyy:8084/; } (note the trailing / in both places) and then seeing what else needs to be added. Note also that, in any case, if you request /foo/one.cgi which is really upstream's /one.cgi, and the response body includes a link to /two.png, then the browser will look for /two.png not /foo/two.png, which will be sought on the other server. That may or may not be what you want, depending on how you have set things up. That is: it is in general non-trivial to reverse-proxy a service at a different places in the url hierarchy from where the service believes it is located. Sometimes a different approach is simplest. > server { > > # automatically sets to https if someone comes in on http > listen 80; > listen 8084; Hmm. Is this 8084 the same as 192.168.xxx.yyy:8084 above? If so, things might get a bit confused. Good luck with it, f -- Francis Daly francis at daoine.org From mat999 at gmail.com Wed Apr 15 00:21:09 2020 From: mat999 at gmail.com (Mathew Heard) Date: Wed, 15 Apr 2020 10:21:09 +1000 Subject: Nginx Brunzip Message-ID: Hi all, I'm the maintainer of an open source module ngx_brunzip_module ( https://github.com/splitice/ngx_brunzip_module/ ). Effectively the same as the gunzip module (and based off that source) but with Brotli. I've been scratching my head for 2 days regarding some high CPU usage within the chain code. It appears that some spinning is possible. I must admit I only have a basic understanding of the filter chain in nginx (still gaining experience). 1. 
I was wondering if someone could take a look at the code and give me some pointers? 2. Also I've added some code to prevent further filling of mostly full buffers (as it appears brotli is quite expensive to start) at https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 is this valid? How does nginx determine when backpressure from full output chains is relieved? Is there any in-depth documentation of the filter chain architecture? Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence at begame.nl Wed Apr 15 10:52:59 2020 From: lawrence at begame.nl (Lawrence) Date: Wed, 15 Apr 2020 12:52:59 +0200 Subject: Nginx wp-admin access control Message-ID: <2104499898-31940@mail6.enem.nl> Greetings All, To start, I am very much a beginner to nginx and coding. I am a application support engineer, but got very little development skills. I hope that there is someone out there that can guide me through this maze. I have searched the web and have seen multiple solutions but none seem to work exactly how I want it to work. My nginx server setup, I am running and managing the config for nginx from the /etc/nginx/nginx.conf file I have 5 seperate sites under sites-enabled. Each site has it's own config file where I have tried to manage and block access to my? two wordpress sites on wp-admin/wp-login. The site www.atlantic-kids-academy.com and www.hockeysticks4clubs.com are running on wordpress. The issue I have is that literally thousands of attempts are made on the site everyday trying to access the wp-admin or wp-login My goal is to have the sites available but the access to all wp admin must be limited. below are a few of the solutions I found. Non seem to work fully. I assume it is my understanding of nginx configuration. method #1? -- test unsuccessfully. URL: https://graspingtech.com/block-access-wordpress-admin-area-nginx/ location ~ \.php$ { ? location ~ \wp-login.php$ { ??? allow 192.168.1.11; ??? deny all; ??? include fastcgi.conf; ??? fastcgi_intercept_errors on; ??? fastcgi_pass unix:/run/php/php7.0-fpm.sock; ? } ? include fastcgi.conf; ? fastcgi_intercept_errors on; ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; } method #2 -- tested unsuccessfully. URL https://websiteforstudents.com/block-access-wordpress-wp-admin-via-nginx-ubuntu-17-04-17-10/ ?location ~ ^/(wp-admin|wp-login\.php) { ??????????????? try_files $uri $uri/ /index.php?$args; ??????????????? index index.html index.htm index.php; ??????????????? allow 68.66.XX.111; ??????????????? deny all; ??????????????? error_page 403 = @wp_admin_ban; ???? } ? ??? location @wp_admin_ban { ?????????? rewrite ^(.*) https://example.com permanent; ???? } ??? location /wp-admin/admin-ajax.php { ?????? allow all; ??? } method #3 -- tested and not fully functional. The issues that I have seen with this are listed below. it blocks on a countrylevel when opening the wp-admin page, I am first met with logging into the wordpress itself, and then after am I prompted with the .htpasswd authentication. Any help / advice would be very much appreciated. URL: https://www.openprogrammer.info/2013/07/12/protecting-wp-admin-wp-login-php-nginx/ location ~ ^/(wp-login\.php){ ? auth_basic "Administrator Login"; ? auth_basic_user_file /home/nginx/domains/yourlocation/private/.htpasswd; ? include /usr/local/nginx/conf/php.conf; } location /wp-admin { ? location ~ ^/(wp-admin/admin-ajax\.php) { ??? include /usr/local/nginx/conf/php.conf; ? } ? 
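  # comment added for clarity: admin-ajax.php is matched above and stays
  # reachable without a password; any other PHP file under /wp-admin falls
  # through to the block below and gets the extra .htpasswd prompt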
location ~* /wp-admin/.*\.php$ { ??? auth_basic "Administrator Login"; ??? auth_basic_user_file /home/nginx/domains/yourlocation/private/.htpasswd; ??? include /usr/local/nginx/conf/php.conf; ? } } location ~ .*\.(php|php4|php5|pl|py)?$ { ??? location ~ ^/(wp-comments-post\.php$) ?????? allow all; ?????? include? /usr/local/nginx/conf/php.conf; ??????? break; ??? } ?? #deny all; ?? rewrite? ^(.*)$ / redirect; } Thanks Lawrence -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilio.fernandes70 at gmail.com Wed Apr 15 11:21:49 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Wed, 15 Apr 2020 14:21:49 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: El vie., 3 abr. 2020 a las 16:48, Martin Grigorov (< martin.grigorov at gmail.com>) escribi?: > Hi Konstantin, > > On Tue, Mar 10, 2020 at 3:31 PM Konstantin Pavlov > wrote: > >> Hello, >> >> 10.03.2020 15:50, Emilio Fernandes wrote: >> > Hi Konstantin, >> > Thanks for your interest in our packages! >> > >> > By CentOS, do you want/need packages built for 8? Asking because I >> > believe 7 is not officially released for Aarch64 - it's rather a >> > community build which doesnt fall into something we can support. >> > >> > >> > Yes, CentOS 8 is fine for us! >> > At http://isoredirect.centos.org/centos/7/isos/ there is "for CentOS 7 >> > AltArch AArch64" [1]. Is this the one you prefer not to support ? >> > >> > 1. https://wiki.centos.org/SpecialInterestGroup/AltArch >> >> Our policy is to provide packages for officially upstream-supported >> distributions. >> >> https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F >> states that they only support x86_64, and aarch64 is unofficial. >> > > Here is something you may find interesting. > https://github.com/varnishcache/varnish-cache/pull/3263 - a PR I've > created for Varnish Cache project. > > It is based on Docker + QEMU and builds packages for different versions of > Debian/Ubuntu/Centos/Alpine for both x64 and aarch64. > Nice work, Martin! @Konstantin: any idea when the new aarch64 packages will be available ? May we help you somehow ? Gracias! Emilio > > Regards, > Martin > > >> -- >> Konstantin Pavlov >> https://www.nginx.com/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Wed Apr 15 11:33:55 2020 From: hobson42 at gmail.com (Ian Hobson) Date: Wed, 15 Apr 2020 12:33:55 +0100 Subject: Nginx wp-admin access control In-Reply-To: <2104499898-31940@mail6.enem.nl> References: <2104499898-31940@mail6.enem.nl> Message-ID: <6ed71e28-2d61-6dc8-4a44-9becd68c5872@gmail.com> Hi Lawrence, I installed WP fail2ban and Wordfence Security (free version). It doesn't stop them trying, but I run a "3 strikes and you are out for 60 minutes" regime. It means only 3-4 attempts an hour instead of thousands. I believe there is a plug in that moves the wp-admin location somewhere else as well, but I have not bothered. Regards Ian On 15/04/2020 11:52, Lawrence wrote: > Greetings All, > > To start, I am very much a beginner to nginx and coding. 
I am a > application support engineer, but got very little development skills. > I hope that there is someone out there that can guide me through this maze. > > I have searched the web and have seen multiple solutions but none seem > to work exactly how I want it to work. > > My nginx server setup, I am running and managing the config for nginx > from the /etc/nginx/nginx.conf file > > I have 5 seperate sites under sites-enabled. > Each site has it's own config file where I have tried to manage and > block access to my? two wordpress sites on wp-admin/wp-login. > > The site www.atlantic-kids-academy.com and www.hockeysticks4clubs.com > are running on wordpress. > > The issue I have is that literally thousands of attempts are made on the > site everyday trying to access the wp-admin or wp-login > > My goal is to have the sites available but the access to all wp admin > must be limited. > below are a few of the solutions I found. Non seem to work fully. I > assume it is my understanding of nginx configuration. > > method #1? -- test unsuccessfully. > URL: > https://graspingtech.com/block-access-wordpress-admin-area-nginx/ > > > location ~ \.php$ { > ? location ~ \wp-login.php$ { > ??? allow 192.168.1.11; > ??? deny all; > ??? include fastcgi.conf; > ??? fastcgi_intercept_errors on; > ??? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? } > ? include fastcgi.conf; > ? fastcgi_intercept_errors on; > ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > } > > > method #2 -- tested unsuccessfully. > URL > https://websiteforstudents.com/block-access-wordpress-wp-admin-via-nginx-ubuntu-17-04-17-10/ > > > ?location ~ ^/(wp-admin|wp-login\.php) { > ??????????????? try_files $uri $uri/ /index.php?$args; > ??????????????? index index.html index.htm index.php; > ??????????????? allow 68.66.XX.111; > ??????????????? deny all; > ??????????????? error_page 403 = @wp_admin_ban; > ???? } > > ??? location @wp_admin_ban { > ?????????? rewrite ^(.*) https://example.com permanent; > ???? } > ??? location /wp-admin/admin-ajax.php { > ?????? allow all; > ??? } > > method #3 -- tested and not fully functional. The issues that I have > seen with this are listed below. > it blocks on a countrylevel > when opening the wp-admin page, I am first met with logging into the > wordpress itself, and then after am I prompted with the .htpasswd > authentication. > > Any help / advice would be very much appreciated. > > URL: > https://www.openprogrammer.info/2013/07/12/protecting-wp-admin-wp-login-php-nginx/ > > > location ~ ^/(wp-login\.php){ > ? auth_basic "Administrator Login"; > ? auth_basic_user_file /home/nginx/domains/yourlocation/private/.htpasswd; > ? include /usr/local/nginx/conf/php.conf; > } > > location /wp-admin { > ? location ~ ^/(wp-admin/admin-ajax\.php) { > ??? include /usr/local/nginx/conf/php.conf; > ? } > ? location ~* /wp-admin/.*\.php$ { > ??? auth_basic "Administrator Login"; > ??? auth_basic_user_file > /home/nginx/domains/yourlocation/private/.htpasswd; > ??? include /usr/local/nginx/conf/php.conf; > ? } > } > > > location ~ .*\.(php|php4|php5|pl|py)?$ { > ??? location ~ ^/(wp-comments-post\.php$) > ?????? allow all; > ?????? include? /usr/local/nginx/conf/php.conf; > ??????? break; > ??? } > ?? #deny all; > ?? rewrite? ^(.*)$ / redirect; > } > > Thanks > Lawrence > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ian Hobson Tel (+351) 910 418 473 -- This email has been checked for viruses by AVG. 
https://www.avg.com From anthony at mindmedia.com.sg Wed Apr 15 11:36:41 2020 From: anthony at mindmedia.com.sg (P.V.Anthony) Date: Wed, 15 Apr 2020 19:36:41 +0800 Subject: Nginx wp-admin access control In-Reply-To: <2104499898-31940@mail6.enem.nl> References: <2104499898-31940@mail6.enem.nl> Message-ID: <53a56cc9-53d4-e530-ffd3-80bf0b95e631@mindmedia.com.sg> On 15/4/20 6:52 pm, Lawrence wrote: > I have 5 seperate sites under sites-enabled. > Each site has it's own config file where I have tried to manage and > block access to my? two wordpress sites on wp-admin/wp-login. > > The site www.atlantic-kids-academy.com and www.hockeysticks4clubs.com > are running on wordpress. > > The issue I have is that literally thousands of attempts are made on the > site everyday trying to access the wp-admin or wp-login Please note that I am not an expert. Just something that I am using currently and it works for me. if ( $request_uri = "/something?place=2" ) { rewrite ^ https://www.example.com${uri}?${args}? last; } Please check with others also. P.V.Anthony From alessio.medici at clickode.it Wed Apr 15 14:14:52 2020 From: alessio.medici at clickode.it (Alessio Medici) Date: Wed, 15 Apr 2020 16:14:52 +0200 Subject: nginx + keycdn + odoo Message-ID: <4691381.AkC6khLYpE@martina> Good morning, we have a dockerized odoo10 behind a dockerized nginx (jwilder/nginx-proxy). We have tried to use CDN, but all requests from KeyCDN get a 404 error. These are the logs from nginx. We asked for an image: with a direct call (https://54ds-test.clickode.it/web/image/10811/ umberto-oreglini-54-dean-street.JPG) we get: nginx.1 | 54ds-test.clickode.it 94.32.238.160 - - [25/Mar/2020:15:57:34 +0000] "GET /web/ image/10811/umberto-oreglini-54-dean-street.JPG HTTP/2.0" 200 54767 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/80.0.3987.87 Chrome/80.0.3987.87 Safari/537.36" If we ask the same image through KeyCDN (https://fly54deanstreet-13b2f.kxcdn.com/web/ image/10811/umberto-oreglini-54-dean-street.JPG) we get: nginx.1 | 54ds-test.clickode.it 185.172.149.101 - - [25/Mar/2020:11:51:49 +0000] "GET /web/ image/10811/umberto-oreglini-54-dean-street.JPG HTTP/1.1" 404 233 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/80.0.3987.87 Chrome/80.0.3987.87 Safari/537.36" Using cUrl and HTTP/1.1 we got: nginx.1 | 54ds-test.clickode.it 94.32.238.160 - - [25/Mar/2020:12:00:14 +0000] "GET /web/ image/10811/umberto-oreglini-54-dean-street.JPG HTTP/1.1" 200 54767 "-" "curl/7.58.0" We cannot understand why the request is not fulfilled if it comes from KeyCDN. Had someone got the same problem? Thanks in advance. Alessio -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 15 17:12:36 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Apr 2020 20:12:36 +0300 Subject: Nginx Brunzip In-Reply-To: References: Message-ID: <20200415171236.GQ20357@mdounin.ru> Hello! On Wed, Apr 15, 2020 at 10:21:09AM +1000, Mathew Heard wrote: > Hi all, > > I'm the maintainer of an open source module ngx_brunzip_module ( > https://github.com/splitice/ngx_brunzip_module/ > ). > Effectively the same as the gunzip module (and based off that source) but > with Brotli. If it's based on the gunzip module code, you are violating copyrights on the code, including mine. Please fix. > I've been scratching my head for 2 days regarding some high CPU usage > within the chain code. It appears that some spinning is possible. 
I must > admit I only have a basic understanding of the filter chain in nginx (still > gaining experience). > > 1. I was wondering if someone could take a look at the code and give me > some pointers? Likely unrelated, but "ctx->input" and "ctx->output" are meaningless and never used. Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 is meaningless. > 2. Also I've added some code to prevent further filling of mostly full > buffers (as it appears brotli is quite expensive to start) at > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 > is > this valid? How does nginx determine when backpressure from full output > chains is relieved? Is there any in-depth documentation of the filter chain > architecture? No, it's not valid, and your code will throw away such mostly filled output buffers without linking them to the output chain as normally happens in ngx_http_brunzip_filter_inflate() at https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L485 Further, the test in question looks incorrect, as it doesn't take into account the edge case when amount of the output returned by BrotliDecoderDecompressStream() exactly matches the output buffer size, so ctx->available_out is 0, but rc is not BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT. As for the documentation, it looks like you are looking for the documentation of the code in the module. You may want to re-read it to understand (and fix the copyright as requested above to admit that you aren't the one who wrote most of the code). Some basics about buffers and chains can be found here: http://nginx.org/en/docs/dev/development_guide.html#buffer Some simplified example of a code to work with buffers reuse as used by the module can be found here: http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse Other chapters, such as "Code style", might be helpful as well. Also, don't hesitate to look into the code of the functions you are using, it often helps. -- Maxim Dounin http://mdounin.ru/ From me at mheard.com Thu Apr 16 00:49:34 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 10:49:34 +1000 Subject: Nginx Brunzip In-Reply-To: <20200415171236.GQ20357@mdounin.ru> References: <20200415171236.GQ20357@mdounin.ru> Message-ID: Maxim, Which clause of the 2 clause BSD license am I violating? It's not my intention to violate any. If need be I will remove this project from distribution and take it closed source. It would be a shame but if it needs to be done... >> Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Agreed, distribution with modification. >> Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. https://github.com/splitice/ngx_brunzip_module/blob/master/LICENSE.md >> Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. No binary distribution is performed. >> [etcetera] There is no express intent to apply any warranty to you or Nginx inc. 
I'll add some copyright lines at the top of that file as those should probably be there, I'm not sure they have any legal implication (copyright is inheint) but better to give credit for the functions used as a reference. As for the assistance I am digesting that now. I see what you mean though regarding available_out == 0, that's indeed a problem. I'll go through the path for my < 64 patch too, that was not the intent. I'm aware of that documentation, I guess I was hoping that there was more. Regards, Mathew On Thu, 16 Apr 2020 at 03:12, Maxim Dounin wrote: > Hello! > > On Wed, Apr 15, 2020 at 10:21:09AM +1000, Mathew Heard wrote: > > > Hi all, > > > > I'm the maintainer of an open source module ngx_brunzip_module ( > > https://github.com/splitice/ngx_brunzip_module/ > > < > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c > >). > > Effectively the same as the gunzip module (and based off that source) but > > with Brotli. > > If it's based on the gunzip module code, you are violating > copyrights on the code, including mine. Please fix. > > > I've been scratching my head for 2 days regarding some high CPU usage > > within the chain code. It appears that some spinning is possible. I must > > admit I only have a basic understanding of the filter chain in nginx > (still > > gaining experience). > > > > 1. I was wondering if someone could take a look at the code and give me > > some pointers? > > Likely unrelated, but "ctx->input" and "ctx->output" are > meaningless and never used. > > Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 > is meaningless. > > > 2. Also I've added some code to prevent further filling of mostly full > > buffers (as it appears brotli is quite expensive to start) at > > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 > > is > > this valid? How does nginx determine when backpressure from full output > > chains is relieved? Is there any in-depth documentation of the filter > chain > > architecture? > > No, it's not valid, and your code will throw away such mostly > filled output buffers without linking them to the output chain as > normally happens in ngx_http_brunzip_filter_inflate() at > > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L485 > > Further, the test in question looks incorrect, as it doesn't > take into account the edge case when amount of the output returned > by BrotliDecoderDecompressStream() exactly matches the output > buffer size, so ctx->available_out is 0, but rc is not > BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT. > > As for the documentation, it looks like you are looking for the > documentation of the code in the module. You may want to re-read > it to understand (and fix the copyright as requested above to > admit that you aren't the one who wrote most of the code). Some > basics about buffers and chains can be found here: > > http://nginx.org/en/docs/dev/development_guide.html#buffer > > Some simplified example of a code to work with buffers reuse as > used by the module can be found here: > > http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse > > Other chapters, such as "Code style", might be helpful as well. > Also, don't hesitate to look into the code of the functions you > are using, it often helps. 
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at mheard.com Thu Apr 16 03:11:56 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 13:11:56 +1000 Subject: Nginx Brunzip In-Reply-To: References: <20200415171236.GQ20357@mdounin.ru> Message-ID: Maxim, I'm doing line by line documentation in my module. I hope by doing this I will get a good understanding of what is going on. If allowed I intend to commit this as a resource for others also intending to learn the body filter system (I will however check with this mailing list to see if anyone has an issue with my descriptions). While doing this I noticed that ctx->flush does not appear to ever be true in your gunzip module. Am I missing something here? Regards, Mathew On Thu, 16 Apr 2020 at 10:49, Mathew Heard wrote: > Maxim, > > Which clause of the 2 clause BSD license am I violating? It's not my > intention to violate any. If need be I will remove this project from > distribution and take it closed source. It would be a shame but if it needs > to be done... > > >> Redistribution and use in source and binary forms, with or without > modification, are permitted provided that the following conditions are met: > > Agreed, distribution with modification. > > >> Redistributions of source code must retain the above copyright notice, > this list of conditions and the following disclaimer. > > https://github.com/splitice/ngx_brunzip_module/blob/master/LICENSE.md > > >> Redistributions in binary form must reproduce the above copyright notice, > this list of conditions and the following disclaimer in the documentation > and/or other materials provided with the distribution. > > No binary distribution is performed. > > >> [etcetera] > > There is no express intent to apply any warranty to you or Nginx inc. > > I'll add some copyright lines at the top of that file as those should > probably be there, I'm not sure they have any legal implication (copyright > is inheint) but better to give credit for the functions used as a reference. > > As for the assistance I am digesting that now. I see what you mean though > regarding available_out == 0, that's indeed a problem. I'll go through the > path for my < 64 patch too, that was not the intent. > > I'm aware of that documentation, I guess I was hoping that there was more. > > Regards, > Mathew > > On Thu, 16 Apr 2020 at 03:12, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Apr 15, 2020 at 10:21:09AM +1000, Mathew Heard wrote: >> >> > Hi all, >> > >> > I'm the maintainer of an open source module ngx_brunzip_module ( >> > https://github.com/splitice/ngx_brunzip_module/ >> > < >> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c >> >). >> > Effectively the same as the gunzip module (and based off that source) >> but >> > with Brotli. >> >> If it's based on the gunzip module code, you are violating >> copyrights on the code, including mine. Please fix. >> >> > I've been scratching my head for 2 days regarding some high CPU usage >> > within the chain code. It appears that some spinning is possible. I must >> > admit I only have a basic understanding of the filter chain in nginx >> (still >> > gaining experience). >> > >> > 1. I was wondering if someone could take a look at the code and give me >> > some pointers? 
>> >> Likely unrelated, but "ctx->input" and "ctx->output" are >> meaningless and never used. >> >> Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at >> >> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 >> is meaningless. >> >> > 2. Also I've added some code to prevent further filling of mostly full >> > buffers (as it appears brotli is quite expensive to start) at >> > >> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 >> > is >> > this valid? How does nginx determine when backpressure from full output >> > chains is relieved? Is there any in-depth documentation of the filter >> chain >> > architecture? >> >> No, it's not valid, and your code will throw away such mostly >> filled output buffers without linking them to the output chain as >> normally happens in ngx_http_brunzip_filter_inflate() at >> >> >> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L485 >> >> Further, the test in question looks incorrect, as it doesn't >> take into account the edge case when amount of the output returned >> by BrotliDecoderDecompressStream() exactly matches the output >> buffer size, so ctx->available_out is 0, but rc is not >> BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT. >> >> As for the documentation, it looks like you are looking for the >> documentation of the code in the module. You may want to re-read >> it to understand (and fix the copyright as requested above to >> admit that you aren't the one who wrote most of the code). Some >> basics about buffers and chains can be found here: >> >> http://nginx.org/en/docs/dev/development_guide.html#buffer >> >> Some simplified example of a code to work with buffers reuse as >> used by the module can be found here: >> >> >> http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse >> >> Other chapters, such as "Code style", might be helpful as well. >> Also, don't hesitate to look into the code of the functions you >> are using, it often helps. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at mheard.com Thu Apr 16 03:14:08 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 13:14:08 +1000 Subject: Nginx Brunzip In-Reply-To: References: <20200415171236.GQ20357@mdounin.ru> Message-ID: Disregard the previous email, there was a typo there. Maxim, I'm doing line by line documentation in my module. I hope by doing this I will get a good understanding of what is going on. If allowed I intend to commit this as a resource for others also intending to learn the body filter system (I will however check with this mailing list to see if anyone has an issue with my descriptions). While doing this I noticed that ctx->busy does not appear to ever be true in your gunzip module. Am I missing something here? Regards, Mathew On Thu, 16 Apr 2020 at 13:11, Mathew Heard wrote: > Maxim, > > I'm doing line by line documentation in my module. I hope by doing this I > will get a good understanding of what is going on. If allowed I intend to > commit this as a resource for others also intending to learn the body > filter system (I will however check with this mailing list to see if anyone > has an issue with my descriptions). 
> > While doing this I noticed that ctx->flush does not appear to ever be true > in your gunzip module. Am I missing something here? > > Regards, > Mathew > > On Thu, 16 Apr 2020 at 10:49, Mathew Heard wrote: > >> Maxim, >> >> Which clause of the 2 clause BSD license am I violating? It's not my >> intention to violate any. If need be I will remove this project from >> distribution and take it closed source. It would be a shame but if it needs >> to be done... >> >> >> Redistribution and use in source and binary forms, with or without >> modification, are permitted provided that the following conditions are met: >> >> Agreed, distribution with modification. >> >> >> Redistributions of source code must retain the above copyright notice, >> this list of conditions and the following disclaimer. >> >> https://github.com/splitice/ngx_brunzip_module/blob/master/LICENSE.md >> >> >> Redistributions in binary form must reproduce the above copyright notice, >> this list of conditions and the following disclaimer in the documentation >> and/or other materials provided with the distribution. >> >> No binary distribution is performed. >> >> >> [etcetera] >> >> There is no express intent to apply any warranty to you or Nginx inc. >> >> I'll add some copyright lines at the top of that file as those should >> probably be there, I'm not sure they have any legal implication (copyright >> is inheint) but better to give credit for the functions used as a reference. >> >> As for the assistance I am digesting that now. I see what you mean though >> regarding available_out == 0, that's indeed a problem. I'll go through the >> path for my < 64 patch too, that was not the intent. >> >> I'm aware of that documentation, I guess I was hoping that there was more. >> >> Regards, >> Mathew >> >> On Thu, 16 Apr 2020 at 03:12, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Wed, Apr 15, 2020 at 10:21:09AM +1000, Mathew Heard wrote: >>> >>> > Hi all, >>> > >>> > I'm the maintainer of an open source module ngx_brunzip_module ( >>> > https://github.com/splitice/ngx_brunzip_module/ >>> > < >>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c >>> >). >>> > Effectively the same as the gunzip module (and based off that source) >>> but >>> > with Brotli. >>> >>> If it's based on the gunzip module code, you are violating >>> copyrights on the code, including mine. Please fix. >>> >>> > I've been scratching my head for 2 days regarding some high CPU usage >>> > within the chain code. It appears that some spinning is possible. I >>> must >>> > admit I only have a basic understanding of the filter chain in nginx >>> (still >>> > gaining experience). >>> > >>> > 1. I was wondering if someone could take a look at the code and give me >>> > some pointers? >>> >>> Likely unrelated, but "ctx->input" and "ctx->output" are >>> meaningless and never used. >>> >>> Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at >>> >>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 >>> is meaningless. >>> >>> > 2. Also I've added some code to prevent further filling of mostly full >>> > buffers (as it appears brotli is quite expensive to start) at >>> > >>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 >>> > is >>> > this valid? How does nginx determine when backpressure from full output >>> > chains is relieved? Is there any in-depth documentation of the filter >>> chain >>> > architecture? 
>>> >>> No, it's not valid, and your code will throw away such mostly >>> filled output buffers without linking them to the output chain as >>> normally happens in ngx_http_brunzip_filter_inflate() at >>> >>> >>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L485 >>> >>> Further, the test in question looks incorrect, as it doesn't >>> take into account the edge case when amount of the output returned >>> by BrotliDecoderDecompressStream() exactly matches the output >>> buffer size, so ctx->available_out is 0, but rc is not >>> BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT. >>> >>> As for the documentation, it looks like you are looking for the >>> documentation of the code in the module. You may want to re-read >>> it to understand (and fix the copyright as requested above to >>> admit that you aren't the one who wrote most of the code). Some >>> basics about buffers and chains can be found here: >>> >>> http://nginx.org/en/docs/dev/development_guide.html#buffer >>> >>> Some simplified example of a code to work with buffers reuse as >>> used by the module can be found here: >>> >>> >>> http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse >>> >>> Other chapters, such as "Code style", might be helpful as well. >>> Also, don't hesitate to look into the code of the functions you >>> are using, it often helps. >>> >>> -- >>> Maxim Dounin >>> http://mdounin.ru/ >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at mheard.com Thu Apr 16 03:37:00 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 13:37:00 +1000 Subject: Nginx Brunzip In-Reply-To: References: <20200415171236.GQ20357@mdounin.ru> Message-ID: Disregard * 2 I understand now. On Thu, 16 Apr 2020 at 13:14, Mathew Heard wrote: > Disregard the previous email, there was a typo there. > > Maxim, > > I'm doing line by line documentation in my module. I hope by doing this I > will get a good understanding of what is going on. If allowed I intend to > commit this as a resource for others also intending to learn the body > filter system (I will however check with this mailing list to see if anyone > has an issue with my descriptions). > > While doing this I noticed that ctx->busy does not appear to ever be true > in your gunzip module. Am I missing something here? > > Regards, > Mathew > > > On Thu, 16 Apr 2020 at 13:11, Mathew Heard wrote: > >> Maxim, >> >> I'm doing line by line documentation in my module. I hope by doing this I >> will get a good understanding of what is going on. If allowed I intend to >> commit this as a resource for others also intending to learn the body >> filter system (I will however check with this mailing list to see if anyone >> has an issue with my descriptions). >> >> While doing this I noticed that ctx->flush does not appear to ever be >> true in your gunzip module. Am I missing something here? >> >> Regards, >> Mathew >> >> On Thu, 16 Apr 2020 at 10:49, Mathew Heard wrote: >> >>> Maxim, >>> >>> Which clause of the 2 clause BSD license am I violating? It's not my >>> intention to violate any. If need be I will remove this project from >>> distribution and take it closed source. It would be a shame but if it needs >>> to be done... 
>>> >>> >> Redistribution and use in source and binary forms, with or without >>> modification, are permitted provided that the following conditions are met: >>> >>> Agreed, distribution with modification. >>> >>> >> Redistributions of source code must retain the above copyright >>> notice, this list of conditions and the following disclaimer. >>> >>> https://github.com/splitice/ngx_brunzip_module/blob/master/LICENSE.md >>> >>> >> Redistributions in binary form must reproduce the above copyright notice, >>> this list of conditions and the following disclaimer in the documentation >>> and/or other materials provided with the distribution. >>> >>> No binary distribution is performed. >>> >>> >> [etcetera] >>> >>> There is no express intent to apply any warranty to you or Nginx inc. >>> >>> I'll add some copyright lines at the top of that file as those should >>> probably be there, I'm not sure they have any legal implication (copyright >>> is inheint) but better to give credit for the functions used as a reference. >>> >>> As for the assistance I am digesting that now. I see what you mean >>> though regarding available_out == 0, that's indeed a problem. I'll go >>> through the path for my < 64 patch too, that was not the intent. >>> >>> I'm aware of that documentation, I guess I was hoping that there was >>> more. >>> >>> Regards, >>> Mathew >>> >>> On Thu, 16 Apr 2020 at 03:12, Maxim Dounin wrote: >>> >>>> Hello! >>>> >>>> On Wed, Apr 15, 2020 at 10:21:09AM +1000, Mathew Heard wrote: >>>> >>>> > Hi all, >>>> > >>>> > I'm the maintainer of an open source module ngx_brunzip_module ( >>>> > https://github.com/splitice/ngx_brunzip_module/ >>>> > < >>>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c >>>> >). >>>> > Effectively the same as the gunzip module (and based off that source) >>>> but >>>> > with Brotli. >>>> >>>> If it's based on the gunzip module code, you are violating >>>> copyrights on the code, including mine. Please fix. >>>> >>>> > I've been scratching my head for 2 days regarding some high CPU usage >>>> > within the chain code. It appears that some spinning is possible. I >>>> must >>>> > admit I only have a basic understanding of the filter chain in nginx >>>> (still >>>> > gaining experience). >>>> > >>>> > 1. I was wondering if someone could take a look at the code and give >>>> me >>>> > some pointers? >>>> >>>> Likely unrelated, but "ctx->input" and "ctx->output" are >>>> meaningless and never used. >>>> >>>> Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at >>>> >>>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 >>>> is meaningless. >>>> >>>> > 2. Also I've added some code to prevent further filling of mostly full >>>> > buffers (as it appears brotli is quite expensive to start) at >>>> > >>>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 >>>> > is >>>> > this valid? How does nginx determine when backpressure from full >>>> output >>>> > chains is relieved? Is there any in-depth documentation of the filter >>>> chain >>>> > architecture? 
>>>> >>>> No, it's not valid, and your code will throw away such mostly >>>> filled output buffers without linking them to the output chain as >>>> normally happens in ngx_http_brunzip_filter_inflate() at >>>> >>>> >>>> https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L485 >>>> >>>> Further, the test in question looks incorrect, as it doesn't >>>> take into account the edge case when amount of the output returned >>>> by BrotliDecoderDecompressStream() exactly matches the output >>>> buffer size, so ctx->available_out is 0, but rc is not >>>> BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT. >>>> >>>> As for the documentation, it looks like you are looking for the >>>> documentation of the code in the module. You may want to re-read >>>> it to understand (and fix the copyright as requested above to >>>> admit that you aren't the one who wrote most of the code). Some >>>> basics about buffers and chains can be found here: >>>> >>>> http://nginx.org/en/docs/dev/development_guide.html#buffer >>>> >>>> Some simplified example of a code to work with buffers reuse as >>>> used by the module can be found here: >>>> >>>> >>>> http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse >>>> >>>> Other chapters, such as "Code style", might be helpful as well. >>>> Also, don't hesitate to look into the code of the functions you >>>> are using, it often helps. >>>> >>>> -- >>>> Maxim Dounin >>>> http://mdounin.ru/ >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at mheard.com Thu Apr 16 05:48:27 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 15:48:27 +1000 Subject: Nginx Brunzip In-Reply-To: <20200415171236.GQ20357@mdounin.ru> References: <20200415171236.GQ20357@mdounin.ru> Message-ID: Maxim, > Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 > is meaningless. Is beause of https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L513 correct? Because FLUSH_FLUSH always resets state to FLUSH_NOFLUSH. I've been working on a heavily commented module. I welcome any feedback on the comments (or the areas marked with "???" that I am still trying to figure out). I'll continue working to figure out the filter system. Regards, Mathew On Thu, 16 Apr 2020 at 03:12, Maxim Dounin wrote: > Hello! > > On Wed, Apr 15, 2020 at 10:21:09AM +1000, Mathew Heard wrote: > > > Hi all, > > > > I'm the maintainer of an open source module ngx_brunzip_module ( > > https://github.com/splitice/ngx_brunzip_module/ > > < > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c > >). > > Effectively the same as the gunzip module (and based off that source) but > > with Brotli. > > If it's based on the gunzip module code, you are violating > copyrights on the code, including mine. Please fix. > > > I've been scratching my head for 2 days regarding some high CPU usage > > within the chain code. It appears that some spinning is possible. I must > > admit I only have a basic understanding of the filter chain in nginx > (still > > gaining experience). > > > > 1. I was wondering if someone could take a look at the code and give me > > some pointers? 
> > Likely unrelated, but "ctx->input" and "ctx->output" are > meaningless and never used. > > Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 > is meaningless. > > > 2. Also I've added some code to prevent further filling of mostly full > > buffers (as it appears brotli is quite expensive to start) at > > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L408 > > is > > this valid? How does nginx determine when backpressure from full output > > chains is relieved? Is there any in-depth documentation of the filter > chain > > architecture? > > No, it's not valid, and your code will throw away such mostly > filled output buffers without linking them to the output chain as > normally happens in ngx_http_brunzip_filter_inflate() at > > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L485 > > Further, the test in question looks incorrect, as it doesn't > take into account the edge case when amount of the output returned > by BrotliDecoderDecompressStream() exactly matches the output > buffer size, so ctx->available_out is 0, but rc is not > BROTLI_DECODER_RESULT_NEEDS_MORE_OUTPUT. > > As for the documentation, it looks like you are looking for the > documentation of the code in the module. You may want to re-read > it to understand (and fix the copyright as requested above to > admit that you aren't the one who wrote most of the code). Some > basics about buffers and chains can be found here: > > http://nginx.org/en/docs/dev/development_guide.html#buffer > > Some simplified example of a code to work with buffers reuse as > used by the module can be found here: > > http://nginx.org/en/docs/dev/development_guide.html#http_body_buffers_reuse > > Other chapters, such as "Code style", might be helpful as well. > Also, don't hesitate to look into the code of the functions you > are using, it often helps. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 16 07:31:20 2020 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Thu, 16 Apr 2020 03:31:20 -0400 Subject: Print current running connections in nginx Message-ID: <6a444fd6b10e1c1083ef7cd97d20f9ab.NginxMailingListEnglish@forum.nginx.org> Hi, Status module prints the count of active connections. Is there a way to fetch more details about currently running connections in nginx like request uri, started time similar to Apache's extended status. Thanks Sachin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287673,287673#msg-287673 From me at mheard.com Thu Apr 16 07:32:43 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 17:32:43 +1000 Subject: Print current running connections in nginx In-Reply-To: <6a444fd6b10e1c1083ef7cd97d20f9ab.NginxMailingListEnglish@forum.nginx.org> References: <6a444fd6b10e1c1083ef7cd97d20f9ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sachin, AFAIK Not easily. Each worker only knows about their own connections. You would need to build a module using a shared memory zone to track connections. Regards, Mathew On 16/04/2020, sachin.shetty at gmail.com wrote: > Hi, > > Status module prints the count of active connections. 
> > Is there a way to fetch more details about currently running connections in > > nginx like request uri, started time similar to Apache's extended status. > > Thanks > Sachin > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287673,287673#msg-287673 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From me at mheard.com Thu Apr 16 07:32:43 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 17:32:43 +1000 Subject: Print current running connections in nginx In-Reply-To: <6a444fd6b10e1c1083ef7cd97d20f9ab.NginxMailingListEnglish@forum.nginx.org> References: <6a444fd6b10e1c1083ef7cd97d20f9ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sachin, AFAIK Not easily. Each worker only knows about their own connections. You would need to build a module using a shared memory zone to track connections. Regards, Mathew On 16/04/2020, sachin.shetty at gmail.com wrote: > Hi, > > Status module prints the count of active connections. > > Is there a way to fetch more details about currently running connections in > > nginx like request uri, started time similar to Apache's extended status. > > Thanks > Sachin > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287673,287673#msg-287673 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at forum.nginx.org Thu Apr 16 08:49:37 2020 From: nginx-forum at forum.nginx.org (sachin.shetty@gmail.com) Date: Thu, 16 Apr 2020 04:49:37 -0400 Subject: Print current running connections in nginx In-Reply-To: References: Message-ID: Thanks Mathew. I thought about it and even prototyped it with openresty, but I am concerned about ngx.shared.DICT.get_keys locking the whole dictionary and blocking connections that are trying to add new incoming connections. Is there some worker datastructure available that can be read and reported from? The worker obviously knows all the connections it is handling and the various states the connections are in. So it would be easy to iterate the internal data structure with an ngx.timer.every timer. Thanks Sachin Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287673,287676#msg-287676 From me at mheard.com Thu Apr 16 09:02:14 2020 From: me at mheard.com (Mathew Heard) Date: Thu, 16 Apr 2020 19:02:14 +1000 Subject: Print current running connections in nginx In-Reply-To: References: Message-ID: This is a snippet from something I was experimenting with. I can't recall it actually worked, but it might help for a start. Good Luck. ngx_cycle_t *cycle for (i = 0; i < cycle->connection_n; i++) { c = &conns[i]; if (c->fd == (ngx_socket_t) -1 || c->idle || c->listening) continue; hlc = (ngx_http_log_ctx_t*)c->log->data; if(!hlc) continue; r = hlc->current_request; ngx_log_error(NGX_LOG_ERR, c->log, 0, "has hlc"); if(!r) continue; ngx_log_error(NGX_LOG_ERR, c->log, 0, "has req"); On Thu, 16 Apr 2020 at 18:49, sachin.shetty at gmail.com < nginx-forum at forum.nginx.org> wrote: > Thanks Mathew. > > I thought about it and even prototyped it with openresty, but I am > concerned > about ngx.shared.DICT.get_keys locking the whole dictionary and blocking > connections that are trying to add new incoming connections. > > Is there some worker datastructure available that can be read and reported > from? 
The worker obviously knows all the connections it is handling and the > various states the connections are in. So it would be easy to iterate the > internal data structure with an ngx.timer.every timer. > > Thanks > Sachin > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287673,287676#msg-287676 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Apr 16 09:09:07 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 16 Apr 2020 05:09:07 -0400 Subject: Print current running connections in nginx In-Reply-To: References: Message-ID: Best sample code: https://github.com/vozlt/nginx-module-vts Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287673,287680#msg-287680 From francis at daoine.org Thu Apr 16 14:46:02 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 16 Apr 2020 15:46:02 +0100 Subject: Nginx wp-admin access control In-Reply-To: <2104499898-31940@mail6.enem.nl> References: <2104499898-31940@mail6.enem.nl> Message-ID: <20200416144602.GR20939@daoine.org> On Wed, Apr 15, 2020 at 12:52:59PM +0200, Lawrence wrote: Hi there, > To start, I am very much a beginner to nginx and coding. I am a application support engineer, but got very little development skills. I don't know WordPress; but on the nginx side, what matters is the request that is made (the url, handled in a "location") and the way that you want nginx to handle that request. In nginx (in general), one request is handled in one location; only the configuration in, or inherited into, that location matters. Location-matching does not include the request query string. Inheritance is per directive, and is either by replacement or not at all. The "*_pass" directives are not inherited; the others are. There are exceptions to this description, but it is probably a good enough starting point to understanding the configuration that is needed. The documentation for any directive X can be found from http://nginx.org/r/X > My goal is to have the sites available but the access to all wp admin must be limited. > below are a few of the solutions I found. Non seem to work fully. I assume it is my understanding of nginx configuration. > > method #1? -- test unsuccessfully. In this case, does "unsuccessful" mean: the php file is not handled when it should be; or the php file is handled when it should not be; or something else? In general, it is good to be specific -- what request was made, what response was returned, and what response was wanted instead. So, with me not knowing WordPress, your mail and some brief web searching suggests that you want your nginx to do the following: * allow any access to any request that ends in ".php", except * restrict access to the request /wp-login.php and * restrict access to any php request that starts with /wp-admin/, except * allow any access to /wp-admin/admin-ajax.php where "restrict" is to be based on an infrequently-changing list of IP addresses or address ranges. And this is in addition to the normal "try_files" config to just get wordpress working. Is that an accurate description of the desired request / response handling mapping? 
If so, something like (untested): === include fastcgi.conf; # has fastcgi_param, etc, but not fastcgi_pass # Can directly paste the relevant lines here instead location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { location ~ ^/wp-admin/ { allow 192.168.1.0/24; deny all; fastcgi_pass unix:/run/php/php7.0-fpm.sock; } fastcgi_pass unix:/run/php/php7.0-fpm.sock; } location = /wp-login.php { allow 192.168.1.0/24; deny all; fastcgi_pass unix:/run/php/php7.0-fpm.sock; } location = /wp-admin/admin-ajax.php { fastcgi_pass unix:/run/php/php7.0-fpm.sock; } === looks like it should work. There are other ways to arrange things, and there is repetition here of the "allow" list; it may be simpler to maintain that list twice than to use another "include" file. If you are happy to test and report what fails, then it should be possible to end up with a suitable config. Good luck with it, f -- Francis Daly francis at daoine.org From mailinglist at unix-solution.de Thu Apr 16 14:54:17 2020 From: mailinglist at unix-solution.de (basti) Date: Thu, 16 Apr 2020 16:54:17 +0200 Subject: Nginx wp-admin access control In-Reply-To: <20200416144602.GR20939@daoine.org> References: <2104499898-31940@mail6.enem.nl> <20200416144602.GR20939@daoine.org> Message-ID: <9c2f0453-a45a-07df-c77a-77a62a45ee4e@unix-solution.de> I have not follow the entire discussion. What is the goal to do with wp-admin? There are several ways to limit access: - http basic auth - use a x509 cert to authenticate instead of user/pass - write a hook plugin to wp_login() to use you own / external login - just use fail2ban to keep bad guys out - ... On 16.04.20 16:46, Francis Daly wrote: > On Wed, Apr 15, 2020 at 12:52:59PM +0200, Lawrence wrote: > > Hi there, > >> To start, I am very much a beginner to nginx and coding. I am a application support engineer, but got very little development skills. > > I don't know WordPress; but on the nginx side, what matters is the > request that is made (the url, handled in a "location") and the way that > you want nginx to handle that request. > > In nginx (in general), one request is handled in one location; > only the configuration in, or inherited into, that location > matters. Location-matching does not include the request query > string. Inheritance is per directive, and is either by replacement or > not at all. The "*_pass" directives are not inherited; the others are. > > There are exceptions to this description, but it is probably a good > enough starting point to understanding the configuration that is needed. > > The documentation for any directive X can be found from > http://nginx.org/r/X > >> My goal is to have the sites available but the access to all wp admin must be limited. >> below are a few of the solutions I found. Non seem to work fully. I assume it is my understanding of nginx configuration. >> >> method #1? -- test unsuccessfully. > > In this case, does "unsuccessful" mean: the php file is not handled > when it should be; or the php file is handled when it should not be; or > something else? In general, it is good to be specific -- what request was > made, what response was returned, and what response was wanted instead. 
> > > So, with me not knowing WordPress, your mail and some brief web searching > suggests that you want your nginx to do the following: > > * allow any access to any request that ends in ".php", except > * restrict access to the request /wp-login.php and > * restrict access to any php request that starts with /wp-admin/, except > * allow any access to /wp-admin/admin-ajax.php > > where "restrict" is to be based on an infrequently-changing list of IP > addresses or address ranges. > > And this is in addition to the normal "try_files" config to just get > wordpress working. > > Is that an accurate description of the desired request / response > handling mapping? > > If so, something like (untested): > > === > include fastcgi.conf; # has fastcgi_param, etc, but not fastcgi_pass > # Can directly paste the relevant lines here instead > > location / { > try_files $uri $uri/ /index.php?$args; > } > location ~ \.php$ { > location ~ ^/wp-admin/ { > allow 192.168.1.0/24; > deny all; > fastcgi_pass unix:/run/php/php7.0-fpm.sock; > } > fastcgi_pass unix:/run/php/php7.0-fpm.sock; > } > location = /wp-login.php { > allow 192.168.1.0/24; > deny all; > fastcgi_pass unix:/run/php/php7.0-fpm.sock; > } > location = /wp-admin/admin-ajax.php { > fastcgi_pass unix:/run/php/php7.0-fpm.sock; > } > === > > looks like it should work. There are other ways to arrange things, > and there is repetition here of the "allow" list; it may be simpler to > maintain that list twice than to use another "include" file. > > If you are happy to test and report what fails, then it should be possible > to end up with a suitable config. > > Good luck with it, > > f > From lawrence at begame.nl Thu Apr 16 15:13:50 2020 From: lawrence at begame.nl (Lawrence) Date: Thu, 16 Apr 2020 17:13:50 +0200 Subject: Nginx wp-admin access control In-Reply-To: <9c2f0453-a45a-07df-c77a-77a62a45ee4e@unix-solution.de> Message-ID: <2207098659-12415@mail6.enem.nl> Greetings All, WOW, thanks for all the suggestions guys. Not many of them are understood, I will try the fail2ban and see how far I get. Thanks gaian. Lawrence From: basti To: Sent: 16/04/2020 4:54 PM Subject: Re: Nginx wp-admin access control I have not follow the entire discussion. What is the goal to do with wp-admin? There are several ways to limit access: - http basic auth - use a x509 cert to authenticate instead of user/pass - write a hook plugin to wp_login() to use you own / external login - just use fail2ban to keep bad guys out - ... On 16.04.20 16:46, Francis Daly wrote: > On Wed, Apr 15, 2020 at 12:52:59PM +0200, Lawrence wrote: > > Hi there, > >> To start, I am very much a beginner to nginx and ?coding. I am a application support engineer, but got very little ?development skills. > > I don't know WordPress; but on the nginx side, what matters is the > request that is made (the url, handled in a "location") and the way that > you want nginx to handle that request. > > In nginx (in general), one request is handled in one location; > only the configuration in, or inherited into, that location > matters. Location-matching does not include the request query > string. Inheritance is per directive, and is either by replacement or > not at all. The "*_pass" directives are not inherited; the others are. > > There are exceptions to this description, but it is probably a good > enough starting point to understanding the configuration that is needed. 
> > The documentation for any directive X can be found from > http://nginx.org/r/X > >> My goal is to have the sites available but the access to all wp admin must be limited. >> below are a few of the solutions I found. Non seem to work fully. I assume it is my understanding of nginx configuration. >> >> method #1? -- test unsuccessfully. > > In this case, does "unsuccessful" mean: the php file is not handled > when it should be; or the php file is handled when it should not be; or > something else? In general, it is good to be specific -- what request was > made, what response was returned, and what response was wanted instead. > > > So, with me not knowing WordPress, your mail and some brief web searching > suggests that you want your nginx to do the following: > > * allow any access to any request that ends in ".php", except > * restrict access to the request /wp-login.php and > * restrict access to any php request that starts with /wp-admin/, except > * allow any access to /wp-admin/admin-ajax.php > > where "restrict" is to be based on an infrequently-changing list of IP > addresses or address ranges. > > And this is in addition to the normal "try_files" config to just get > wordpress working. > > Is that an accurate description of the desired request / response > handling mapping? > > If so, something like (untested): > > === > ? include fastcgi.conf; # has fastcgi_param, etc, but not fastcgi_pass > ? # Can directly paste the relevant lines here instead > > ? location / { > ? ? try_files $uri $uri/ /index.php?$args; > ? } > ? location ~ \.php$ { > ? ? location ~ ^/wp-admin/ { > ? ? ? allow 192.168.1.0/24; > ? ? ? deny all; > ? ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? ? } > ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? } > ? location = /wp-login.php { > ? ? allow 192.168.1.0/24; > ? ? deny all; > ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? } > ? location = /wp-admin/admin-ajax.php { > ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? } > === > > looks like it should work. There are other ways to arrange things, > and there is repetition here of the "allow" list; it may be simpler to > maintain that list twice than to use another "include" file. > > If you are happy to test and report what fails, then it should be possible > to end up with a suitable config. > > Good luck with it, > > ? ? ?f > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Thu Apr 16 16:04:48 2020 From: mailinglist at unix-solution.de (basti) Date: Thu, 16 Apr 2020 18:04:48 +0200 Subject: Nginx wp-admin access control In-Reply-To: <2207098659-12415@mail6.enem.nl> References: <2207098659-12415@mail6.enem.nl> Message-ID: <6ad022c5-c758-85f2-7dcc-a9e2b0835abe@unix-solution.de> when you use fail2ban have a look on ipset it performe better on large lists. Am 16.04.20 um 17:13 schrieb Lawrence: > Greetings All, > > WOW, thanks for all the suggestions guys. Not many of them are > understood, I will try the fail2ban and see how far I get. > > Thanks gaian. > Lawrence > > > *From: * basti > *To: * > *Sent: * 16/04/2020 4:54 PM > *Subject: * Re: Nginx wp-admin access control > > I have not follow the entire discussion. > > What is the goal to do with wp-admin? 
> > There are several ways to limit access: > - http basic auth > - use a x509 cert to authenticate instead of user/pass > - write a hook plugin to wp_login() to use you own / external login > > - just use fail2ban to keep bad guys out > - ... > > On 16.04.20 16:46, Francis Daly wrote: > > On Wed, Apr 15, 2020 at 12:52:59PM +0200, Lawrence wrote: > > > > Hi there, > > > >> To start, I am very much a beginner to nginx and ?coding. I am a > application support engineer, but got very little ?development skills. > > > > I don't know WordPress; but on the nginx side, what matters is the > > request that is made (the url, handled in a "location") and the > way that > > you want nginx to handle that request. > > > > In nginx (in general), one request is handled in one location; > > only the configuration in, or inherited into, that location > > matters. Location-matching does not include the request query > > string. Inheritance is per directive, and is either by replacement or > > not at all. The "*_pass" directives are not inherited; the others > are. > > > > There are exceptions to this description, but it is probably a good > > enough starting point to understanding the configuration that is > needed. > > > > The documentation for any directive X can be found from > > http://nginx.org/r/X > > > >> My goal is to have the sites available but the access to all wp > admin must be limited. > >> below are a few of the solutions I found. Non seem to work > fully. I assume it is my understanding of nginx configuration. > >> > >> method #1? -- test unsuccessfully. > > > > In this case, does "unsuccessful" mean: the php file is not handled > > when it should be; or the php file is handled when it should not > be; or > > something else? In general, it is good to be specific -- what > request was > > made, what response was returned, and what response was wanted > instead. > > > > > > So, with me not knowing WordPress, your mail and some brief web > searching > > suggests that you want your nginx to do the following: > > > > * allow any access to any request that ends in ".php", except > > * restrict access to the request /wp-login.php and > > * restrict access to any php request that starts with /wp-admin/, > except > > * allow any access to /wp-admin/admin-ajax.php > > > > where "restrict" is to be based on an infrequently-changing list > of IP > > addresses or address ranges. > > > > And this is in addition to the normal "try_files" config to just get > > wordpress working. > > > > Is that an accurate description of the desired request / response > > handling mapping? > > > > If so, something like (untested): > > > > === > > ? include fastcgi.conf; # has fastcgi_param, etc, but not > fastcgi_pass > > ? # Can directly paste the relevant lines here instead > > > > ? location / { > > ? ? try_files $uri $uri/ /index.php?$args; > > ? } > > ? location ~ \.php$ { > > ? ? location ~ ^/wp-admin/ { > > ? ? ? allow 192.168.1.0/24; > > ? ? ? deny all; > > ? ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > > ? ? } > > ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > > ? } > > ? location = /wp-login.php { > > ? ? allow 192.168.1.0/24; > > ? ? deny all; > > ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > > ? } > > ? location = /wp-admin/admin-ajax.php { > > ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > > ? } > > === > > > > looks like it should work. 
There are other ways to arrange things, > > and there is repetition here of the "allow" list; it may be > simpler to > > maintain that list twice than to use another "include" file. > > > > If you are happy to test and report what fails, then it should be > possible > > to end up with a suitable config. > > > > Good luck with it, > > > > ? ? ?f > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From vbart at nginx.com Thu Apr 16 18:14:04 2020 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 16 Apr 2020 21:14:04 +0300 Subject: Unit 1.17.0 release Message-ID: <3467379.MHq7AAxBmi@vbart-laptop> Hi, I'm glad to announce a new release of NGINX Unit. In addition to improved stability, this release introduces two handy features. The first one is configured using the "return" and "location" options of the action object. It can be used to immediately generate a simple HTTP response with an arbitrary status - for example, to deny access to some resources: { "match": { "uri": "*/.git/*" }, "action": { "return": 403 } } Or, you can redirect a client to another resource: { "match": { "host": "example.org", }, "action": { "return": 301, "location": "http://www.example.org" } } See the documentation for a detailed description of routing: - https://unit.nginx.org/configuration/#routes The second new feature of the release is mostly syntax sugar rather than new functionality. Now, you can specify servers' weights in an upstream group using fractional numbers. Say, you have a bunch of servers and want one of them to receive half as many requests as the others for some reason. Previously, the only way to achieve that was to double the weights of all the other servers: { "192.168.0.101:8080": { "weight": 2 }, "192.168.0.102:8080": { "weight": 2 }, "192.168.0.103:8080": { }, "192.168.0.104:8080": { "weight": 2 } } Using fractional weights, you can perform the update much easier by altering the weight of the server in question: { "192.168.0.101:8080": { }, "192.168.0.102:8080": { }, "192.168.0.103:8080": { "weight": 0.5 }, "192.168.0.104:8080": { } } For details of server groups, see here: - https://unit.nginx.org/configuration/#upstreams Changes with Unit 1.17.0 16 Apr 2020 *) Feature: a "return" action with optional "location" for immediate responses and external redirection. *) Feature: fractional weights support for upstream servers. *) Bugfix: accidental 502 "Bad Gateway" errors might have occurred in applications under high load. *) Bugfix: memory leak in the router; the bug had appeared in 1.13.0. *) Bugfix: segmentation fault might have occurred in the router process when reaching open connections limit. *) Bugfix: "close() failed (9: Bad file descriptor)" alerts might have appeared in the log while processing large request bodies; the bug had appeared in 1.16.0. *) Bugfix: existing application processes didn't reopen the log file. *) Bugfix: incompatibility with some Node.js applications. *) Bugfix: broken build on DragonFly BSD; the bug had appeared in 1.16.0. Please also see a blog post about the new features of our two previous releases: - https://www.nginx.com/blog/nginx-unit-1-16-0-now-available/ To keep the finger on the pulse, refer to our further plans in the roadmap here: - https://github.com/orgs/nginx/projects/1 Stay healthy, stay home! 
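As a quick sketch of how these pieces fit together (not an excerpt from the announcement itself; the application name "app" in the last step is just a placeholder), the two route objects shown above could be combined into a single "routes" array, where the first matching step wins:

    [
        {
            "match": { "uri": "*/.git/*" },
            "action": { "return": 403 }
        },
        {
            "match": { "host": "example.org" },
            "action": { "return": 301, "location": "http://www.example.org" }
        },
        {
            "action": { "pass": "applications/app" }
        }
    ]

A request is checked against each step in order, so the deny and redirect rules have to appear before the catch-all "pass" step.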
wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Sat Apr 18 15:05:36 2020 From: nginx-forum at forum.nginx.org (mustafa.chapal) Date: Sat, 18 Apr 2020 11:05:36 -0400 Subject: Nginx Lowercase URL and Redirection Message-ID: <1e11f9b98da725b4a6e4bbdbbdbd6ae3.NginxMailingListEnglish@forum.nginx.org> Hi, I am facing two issues. First, URLs like www.example.com/new-arrivalS, which include a character like a hyphen, are not triggering the following location. Second, URLs like www.example.com/dealS?p=2 get redirected to www.example.com/deals instead of www.example.com/deals?p=2 location ~ [A-Z] { return 307 $scheme://$host$my_uri_to_lowercase; } Kindly help me resolve both issues. Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287707,287707#msg-287707 From nginx-forum at forum.nginx.org Sat Apr 18 21:39:38 2020 From: nginx-forum at forum.nginx.org (YAGA) Date: Sat, 18 Apr 2020 17:39:38 -0400 Subject: Websocket (wss) connection issue (status 200 instead 101) between two nginx systems Message-ID: <26a25fd1bb86f77ba9f58a06b2713174.NginxMailingListEnglish@forum.nginx.org> Hello, I have a websocket (wss) connection issue (status 200 instead of 101) between a "server" (running nginx/1.14.2 reverse proxy) and a "black box" (running nginx/1.8.1 web server with websocket). The "server" has access to the Internet and to the local network where the "black box" is connected. I called it "black box" because I can't change anything except the nginx config file. From the local network, the nginx web site of the "black box" works properly, including the websocket connection. From the Internet, through the "server", the nginx reverse proxy gives me access to the nginx web site of the "black box"; everything works except the websocket: from my web browser I receive a status 200 but I should get 101 Switching Protocols. I've tried different setups without success. Please let me know what you think, Thanks a lot, Regards, YAGA The "black box" wss websocket uses port 80 (which is not usual) and its https website uses 443. "black box" web server nginx config (extract) server { listen 443 ssl; ssl_certificate /opt/xxx/cert.crt; ssl_certificate_key /opt/xxx/cert.key; server_name localhost; proxy_buffering off; location / { root /opt/xxx/web; try_files $uri $uri/ /index.html; } location /websocket { proxy_pass https://127.0.0.1:80; } location /api/ { proxy_pass https://127.0.0.1:80; } location /static/ { root /opt/xxx/website; expires 10d; } "server"
reverse proxy nginx config (extract) server { listen 443 ssl; listen 80 ssl; server_name my_server.xyz; client_max_body_size 100M; proxy_buffering off; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; location / { proxy_pass https://192.168.1.20/; auth_basic "Private"; auth_basic_user_file /etc/nginx/.htpasswd; } location /api/ { proxy_pass https://192.168.1.20:80/api/; auth_basic off; } location /websocket { proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection upgrade; proxy_http_version 1.1; proxy_set_header Origin ""; proxy_pass https://192.168.1.20:80/; auth_basic off; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287709,287709#msg-287709 From nginx-forum at forum.nginx.org Sun Apr 19 14:54:03 2020 From: nginx-forum at forum.nginx.org (bubunia2000ster) Date: Sun, 19 Apr 2020 10:54:03 -0400 Subject: F5 WAF UDP configuration with nginx LB Message-ID: <61a0829892112d6159da0306205a3073.NginxMailingListEnglish@forum.nginx.org> Hi all, I am looking for nginx configuration examples with F5 WAF UDP. My scenario is as follows: Internet(user)-> F5 WAF BIGIP 14.04(UDP) --->nginx(LB which can handle UDP)-> Backend instances I got the F5 WAF UDP configurations from the F5 KB articles. I am looking for nginx (LB with UDP) configurations to handle the incoming connections from F5 WAF UDP. Can someone help me with some configuration examples? Regards Pradeep Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287711,287711#msg-287711 From jbiskofski at gmail.com Sun Apr 19 16:04:45 2020 From: jbiskofski at gmail.com (jbiskofski) Date: Sun, 19 Apr 2020 09:04:45 -0700 Subject: HTTP2 SETTINGS FRAME Denial of Service Message-ID: Hello everyone. I need to pass a security audit for a PCI compliance process. A scan was performed on my servers and found a vulnerability in nginx: "HTTP2 SETTINGS FRAME Denial of Service". I upgraded nginx to the latest stable 1.16.1, which supposedly fixes that issue. See: https://mailman.nginx.org/pipermail/nginx-announce/2019/000249.html But the security scan is still reporting the same problem. The scan report ends with: "technical details: sent HTTP2 request with 20 SETTINGS and received a valid response". I do have http2 enabled, and need it to stay enabled. Can someone please point me in the right direction about how to fix this? I have a few questions. Can I disable that "20 SETTINGS" request somehow? Will that mess up my http2 connections? Is there some other solution? Should I try to update to mainline? Here is the output of my nginx -V nginx version: nginx/1.16.1 built by clang 6.0.0 (tags/RELEASE_600/final 326565) (based on LLVM 6.0.0) built with OpenSSL 1.0.2o-freebsd 27 Mar 2018 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --with-http_ssl_module --with-http_v2_module thanks! - Jose -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Sun Apr 19 18:47:53 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Apr 2020 19:47:53 +0100 Subject: Nginx Lowercase URL and Redirection In-Reply-To: <1e11f9b98da725b4a6e4bbdbbdbd6ae3.NginxMailingListEnglish@forum.nginx.org> References: <1e11f9b98da725b4a6e4bbdbbdbd6ae3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200419184753.GU20939@daoine.org> On Sat, Apr 18, 2020 at 11:05:36AM -0400, mustafa.chapal wrote: Hi there, > First, URLs like www.example.com/new-arrivalS which include a character like > hyphen are not triggering on the following location. Works for me. What other location do you have that does handle this request? == location ~ [A-Z] { return 200 "Does match A-Z\n"; } location / { return 200 "Does not match A-Z\n"; } == $ curl http://127.0.0.1/new-arrivalS Does match A-Z $ curl http://127.0.0.1/new-arrivals Does not match A-Z > Second, URLs like www.example.com/dealS?p=2 get redirected to > www.example.com/deals instead of www.example.com/deals?p=2 > > location ~ [A-Z] { > return 307 $scheme://$host$my_uri_to_lowercase; > } > > Kindly help me resolve both the issues. Either change your $my_uri_to_lowercase to include the query string; or change the return line to be return 307 $scheme://$host$my_uri_to_lowercase$is_args$args; http://nginx.org/r/$is_args and http://nginx.org/r/$args for details. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Apr 19 18:55:40 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Apr 2020 19:55:40 +0100 Subject: Websocket (wss) connection issue (status 200 instead 101) between two nginx systems In-Reply-To: <26a25fd1bb86f77ba9f58a06b2713174.NginxMailingListEnglish@forum.nginx.org> References: <26a25fd1bb86f77ba9f58a06b2713174.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200419185540.GV20939@daoine.org> On Sat, Apr 18, 2020 at 05:39:38PM -0400, YAGA wrote: Hi there, > From local network, the nginx web site of the ?black box? is working > properly including websocket connection. > From Internet, the ?server?, the nginx reverse proxy gives me an access to > the nginx web site of the ?black box? everything works except the websocket, > from my web browser I receive a status 200 but I should get 101 switching > protocol. Your config for the "black box" does not show the normal proxy_* directives that are used for websockets. Your config for the "server" does. http://nginx.org/en/docs/http/websocket.html Does anything change if you add those directives to the "black box" system? > ?black box? web server nginx config (extract) > location /websocket { > proxy_pass https://127.0.0.1:80; > } > ?server? reverse proxy nginx config (extract) > location /websocket { > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection upgrade; > proxy_http_version 1.1; > proxy_set_header Origin ""; > proxy_pass https://192.168.1.20:80/; > auth_basic off; > } I don't actually know if a websocket connection will pass cleanly through two reverse proxies. I guess this is as good a time as any to learn if it can! Good luck with it, f -- Francis Daly francis at daoine.org From ellertalexandre at gmail.com Sun Apr 19 21:31:27 2020 From: ellertalexandre at gmail.com (Alexandre Ellert) Date: Sun, 19 Apr 2020 23:31:27 +0200 Subject: no events CREATE/UPDATE after ingress is created Message-ID: Hi, I have a strange behaviour on a kubernetes cluster where nginx ingress controller is deployed and managed by gitlab. 
When I create an Ingress object in a namespace, I notice that nginx.conf in the controller pod is not updated. Also I can't see any events when I do a 'kubectl describe ingress ...' or 'kubectl get events'. I tried with a fresh gitlab install on a new cluster and everything works like a charm. Can you tell me where I should investigate to solve this ? I spent a lot of time trying to debug by myself before without any success. Thank you. Regards. PS : This is my first post to this list -------------- next part -------------- An HTML attachment was scrubbed... URL: From robin at reportlab.com Mon Apr 20 07:08:34 2020 From: robin at reportlab.com (Robin Becker) Date: Mon, 20 Apr 2020 08:08:34 +0100 Subject: uwsgi like caching Message-ID: A python django app running under uwsgi like caching can directly control the uwsgi cache (via decorators etc). This requires two nginx sections see eg https://uwsgi-docs.readthedocs.io/en/latest/WebCaching.html. Is there a way to allow back ends to control the nginx cache directly? I see lots of schemes involving headers and or cookies, but I'm not sure they are as simple to understand as the uwsgi cache approach. -- Robin Becker From nginx-forum at forum.nginx.org Mon Apr 20 09:32:48 2020 From: nginx-forum at forum.nginx.org (Basanta) Date: Mon, 20 Apr 2020 05:32:48 -0400 Subject: Configuration of SSL Termination through NGINX Faillling Message-ID: <1098ccdbe73954bce8a30929142e1b8c.NginxMailingListEnglish@forum.nginx.org> Hi ALL, I am trying to configure SSL termination at NGINX LB so that the backend application can be accessed through the HTTP Port .this fails with 400 Bad Request The plain HTTP request was sent to HTTPS port Can some one please point what is wrong here ..Here NGINX is running on a K8S Instance . Here is my tls.yaml file .. ============== apiVersion: extensions/v1beta1 kind: Ingress metadata: name: soangssl-ingress namespace: soans spec: tls: - hosts: - k8sdev1-1.subnet2ad3phx.paasinfratoophx.oraclevcn.com secretName: domain1-tls-cert-nginx rules: - host: k8sdev1-1.subnet2ad3phx.paasinfratoophx.oraclevcn.com http: paths: - path: /con backend: serviceName: soainfra-adminserver servicePort: 7001 ===================== Regards, Basanta Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287720,287720#msg-287720 From mdounin at mdounin.ru Mon Apr 20 18:52:24 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Apr 2020 21:52:24 +0300 Subject: Nginx Brunzip In-Reply-To: References: <20200415171236.GQ20357@mdounin.ru> Message-ID: <20200420185224.GR20357@mdounin.ru> Hello! On Thu, Apr 16, 2020 at 03:48:27PM +1000, Mathew Heard wrote: > Maxim, > > > Likely unrealted, but "ctx->flush = FLUSH_NOFLUSH" at > > > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L393 > > is meaningless. > > Is beause of > https://github.com/splitice/ngx_brunzip_module/blob/master/ngx_http_brunzip_filter_module.c#L513 > correct? > Because FLUSH_FLUSH always resets state to FLUSH_NOFLUSH. No. Because of the "ctx->flush != FLUSH_NOFLUSH" condition at the very start of the same function. In the particular place the ctx->field is guaranteed to be set to FLUSH_NOFLUSH, and there is no need to set it again. As you can see in the original code, there is no assignment. Instead, it simply states that "ctx->flush == Z_NO_FLUSH" in a comment. 
(http://hg.nginx.org/nginx/file/3a860f22c879/src/http/modules/ngx_http_gunzip_filter_module.c#l365) -- Maxim Dounin http://mdounin.ru/ From sca at andreasschulze.de Mon Apr 20 19:47:53 2020 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 20 Apr 2020 21:47:53 +0200 Subject: nginx-1.17.10 In-Reply-To: <20200414143415.GK20357@mdounin.ru> References: <20200414143415.GK20357@mdounin.ru> Message-ID: <9e3ab8c3-7bcd-167e-8d65-1e4432b605c9@andreasschulze.de> Am 14.04.20 um 16:34 schrieb Maxim Dounin: > Changes with nginx 1.17.10 14 Apr 2020 > > *) Feature: the "auth_delay" directive. Hello nginx developers, I'm searching for more information about this specific change and other changes in general. The source diff from 1.17.9 to 1.17.10 contains mostly code changes but nearly no other information. https://nginx.org/en/docs/http/ngx_http_core_module.html#auth_delay is a formal precise option description. But I found nothing more. Somebody had a problem, that's solved now? What's the reason for this enhancement? I simply like to know if there are other places such stuff is discussed prior releases. Thanks Andreas From nginx-forum at forum.nginx.org Mon Apr 20 21:06:14 2020 From: nginx-forum at forum.nginx.org (YAGA) Date: Mon, 20 Apr 2020 17:06:14 -0400 Subject: Websocket (wss) connection issue (status 200 instead 101) between two nginx systems In-Reply-To: <20200419185540.GV20939@daoine.org> References: <20200419185540.GV20939@daoine.org> Message-ID: Hi Francis, Many thanks for your message and for your help, it's very kind of you. As you suggested, I've tried to add these lines on the "black box" side: proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection upgrade; proxy_http_version 1.1; But unfortunately, without success: these lines didn't improve the websocket connection. So, I've decided to roll back to the previous version without the connection upgrade on the "black box". I've tried different changes to the setup and I finally found my mistake: on the server side I had added an extra slash at the end: proxy_pass https://192.168.1.20:80/; I changed this line to: proxy_pass https://192.168.1.20:80; Now, it works smoothly. The evil always comes from details... Thanks again Francis for your time and your assistance, Regards, YAGA Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287709,287727#msg-287727 From mdounin at mdounin.ru Tue Apr 21 13:40:13 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Apr 2020 16:40:13 +0300 Subject: nginx-1.17.10 In-Reply-To: <9e3ab8c3-7bcd-167e-8d65-1e4432b605c9@andreasschulze.de> References: <20200414143415.GK20357@mdounin.ru> <9e3ab8c3-7bcd-167e-8d65-1e4432b605c9@andreasschulze.de> Message-ID: <20200421134013.GT20357@mdounin.ru> Hello! On Mon, Apr 20, 2020 at 09:47:53PM +0200, A. Schulze wrote: > Am 14.04.20 um 16:34 schrieb Maxim Dounin: > > Changes with nginx 1.17.10 14 Apr 2020 > > > > *) Feature: the "auth_delay" directive. > > Hello nginx developers, > > I'm searching for more information about this specific change and other changes in general. > The source diff from 1.17.9 to 1.17.10 contains mostly code changes but nearly no other information. > > https://nginx.org/en/docs/http/ngx_http_core_module.html#auth_delay is a formal precise option description. > But I found nothing more. Somebody had a problem, that's solved now? > What's the reason for this enhancement? > > I simply like to know if there are other places such stuff is discussed prior releases.
The commit log of the change in question is pretty descriptive, much like other commit logs (well, at least we hope they are): http://hg.nginx.org/nginx/rev/681b78a98a52 -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Apr 21 14:44:50 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Apr 2020 17:44:50 +0300 Subject: nginx-1.18.0 Message-ID: <20200421144450.GU20357@mdounin.ru> Changes with nginx 1.18.0 21 Apr 2020 *) 1.18.x stable branch. -- Maxim Dounin http://nginx.org/ From thresh at nginx.com Tue Apr 21 17:22:22 2020 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 21 Apr 2020 20:22:22 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hi Emilio, 15.04.2020 14:21, Emilio Fernandes wrote: > Our policy is to provide packages for officially upstream-supported > distributions. > > https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F > states that they only support x86_64, and aarch64 is unofficial. > > > Here is something you may find interesting. > https://github.com/varnishcache/varnish-cache/pull/3263 - a?PR I've > created for Varnish Cache > project. > It is based on Docker?+ QEMU?and builds packages for different > versions of Debian/Ubuntu/Centos/Alpine for both x64 and aarch64. > > > Nice work, Martin! > > @Konstantin: any idea when the new aarch64 packages will be available ? > May we help you somehow ? I've just published RHEL8/CentOS8 aarch64 packages for nginx stable on http://nginx.org/packages/rhel/8/aarch64/. The mainline will follow the suit soon, as well as proper documentation on http://nginx.org/en/linux_packages.html. With Alpine, it is proving to be more difficult than we thought, as there are problems runing those on AWS EC2 which we use on our build farm: https://github.com/mcrute/alpine-ec2-ami/issues/28 . -- Konstantin Pavlov https://www.nginx.com/ From paul at stormy.ca Tue Apr 21 23:09:41 2020 From: paul at stormy.ca (Paul) Date: Tue, 21 Apr 2020 19:09:41 -0400 Subject: SSL and port number [was: Rewrite -- failure] In-Reply-To: <20200414223939.GQ20939@daoine.org> References: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> <20200414223939.GQ20939@daoine.org> Message-ID: <8ed9a632-b660-2747-53d3-d674bec13d1b@stormy.ca> Thanks for your input. I have spent quite some time on this, and have failed on "rewrite". It all works using a different port number but *without* SSL -- the moment I add the Certbot back in (see config below) I get "Error code: SSL_ERROR_RX_RECORD_TOO_LONG". Also, same server, on default port 80, works perfectly as https, but if I add :80 to the requested URL, I get the same "Error code: SSL_ERROR_RX_RECORD_TOO_LONG"... All suggestions warmly welcomed, thanks. ...and stay well - Paul. 
server { listen 8084; # listen 443 ssl; # ssl_certificate /etc/letsencrypt/live/serv1.example.com/fullchain.pem; # managed by Certbot # ssl_certificate_key /etc/letsencrypt/live/serv1.example.com/privkey.pem; # managed by Certbot # include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot server_name my_app; access_log /var/log/nginx/access.log; error_log /var/log/nginx/ships-error_log; proxy_buffering off; location / { proxy_pass http://192.168.xxx.yyy:8084; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } #server { # if ($host = serv1.example.com) { # return 301 https://$host$request_uri; # } # managed by Certbot # automatically sets to https if someone comes in on http # listen 8084; # listen 443 ssl; # server_name serv1.example.com; # rewrite ^ https://$host$request_uri? permanent; #} On 2020-04-14 6:39 p.m., Francis Daly wrote: > On Tue, Apr 14, 2020 at 04:38:51PM -0400, Paul wrote: > > Hi there, > >> My problem is that I need to split serv1.example.com to two physical servers >> (both fully functional on LAN). The first (192.168.aaa.bbb) serving static >> https works fine. But I cannot "rewrite" (redirect, re-proxy?) to the second >> server (192.168.xxx.yyy, Perl cgi) where the request comes in as >> https://serv1.example.com/foo and I need to get rid of "foo" > > http://nginx.org/r/proxy_pass -- proxy_pass can (probably) do what > you want, without rewrites. The documentation phrase to look for is > "specified with a URI". > >> "rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent;" (tried >> permanent, break, last and no flags) > > "rewrite" (http://nginx.org/r/rewrite) works on the "/foo" part, not the > "https://" or the "serv1.example.com" parts of the request, which is why > that won't match your requests. > >> location /foo { # big db server, perfect on LAN, PERL, cgi >> # rewrite ^/foo(.*) /$1 break; #tried permanent, break, last and >> no flags > > That one looks to me to be most likely to work; but you probably need > to be very clear about what you mean when you think "it doesn't work". > > In general - show the request, show the response, and describe the response > that you want instead. > >> # rewrite ^/foo/(.*)$ /$1 last; #tried permanent, break, last and >> no flags >> rewrite ^(.*serv1\.example\.com\/)foo\/(.*) $1$2 permanent; #tried >> permanent, break, last and no flags >> proxy_pass http://192.168.xxx.yyy:8084; >> proxy_set_header Host $host; >> proxy_http_version 1.1; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> } > > I suggest trying > > location /foo/ { > proxy_pass http://192.168.xxx.yyy:8084/; > } > > (note the trailing / in both places) and then seeing what else needs to > be added. > > Note also that, in any case, if you request /foo/one.cgi which is really > upstream's /one.cgi, and the response body includes a link to /two.png, > then the browser will look for /two.png not /foo/two.png, which will > be sought on the other server. That may or may not be what you want, > depending on how you have set things up. > > That is: it is in general non-trivial to reverse-proxy a service at a > different places in the url hierarchy from where the service believes > it is located. Sometimes a different approach is simplest. > >> server { >> >> # automatically sets to https if someone comes in on http >> listen 80; >> listen 8084; > > Hmm. Is this 8084 the same as 192.168.xxx.yyy:8084 above? 
If so, things > might get a bit confused. > > Good luck with it, > > f > \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From nginx-forum at forum.nginx.org Wed Apr 22 01:22:35 2020 From: nginx-forum at forum.nginx.org (deprito) Date: Tue, 21 Apr 2020 21:22:35 -0400 Subject: UDP Load balancing - [Solved] In-Reply-To: <1b04611e510e3e3cd69fa13c756204ac.NginxMailingListEnglish@forum.nginx.org> References: <20200128140227.7gavlqio4wanoy73@Romans-MacBook-Pro.local> <1b04611e510e3e3cd69fa13c756204ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4141e9f1f26d86f0cba80ec9ebba1904.NginxMailingListEnglish@forum.nginx.org> Hello @arigatox, do you mind share with me, how to LB UDP protocol like wireguard? My nginx.conf user www-data; worker_processes auto; pid /run/nginx.pid; include /etc/nginx/modules-enabled/*.conf; stream { upstream syslog_udp { server x.x.x.x:51820; server x.x.x.x:51820; } server { listen 51820 udp; proxy_pass syslog_udp; proxy_responses 0; } } worker_rlimit_nofile 1000000; events { worker_connections 20000; } my nginx : nginx version: nginx/1.16.1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286837,287751#msg-287751 From francis at daoine.org Wed Apr 22 07:07:07 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Apr 2020 08:07:07 +0100 Subject: Websocket (wss) connection issue (status 200 instead 101) between two nginx systems In-Reply-To: References: <20200419185540.GV20939@daoine.org> Message-ID: <20200422070707.GW20939@daoine.org> On Mon, Apr 20, 2020 at 05:06:14PM -0400, YAGA wrote: Hi there, Great that you've found the fix! > proxy_pass https://192.168.1.20:80/; > I change this line with: > proxy_pass https://192.168.1.20:80; > > Now, it works smoothly. And thanks for sharing the resolution with the list; that will hopefully help the next person with the same problem find it more quickly :-) Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Apr 22 07:14:41 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Apr 2020 08:14:41 +0100 Subject: SSL and port number [was: Rewrite -- failure] In-Reply-To: <8ed9a632-b660-2747-53d3-d674bec13d1b@stormy.ca> References: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> <20200414223939.GQ20939@daoine.org> <8ed9a632-b660-2747-53d3-d674bec13d1b@stormy.ca> Message-ID: <20200422071441.GX20939@daoine.org> On Tue, Apr 21, 2020 at 07:09:41PM -0400, Paul wrote: Hi there, I confess I'm not quite certain what you are reporting here -- if you can say "with *this* config, I make *this* request and I get *this* response, but I want *that* response instead", it may be clearer. However, there is one thing that might be a misunderstanding here: "listen 8000;" means that nginx will listen for http, so you must make requests to port 8000 using http not https. "listen 8001 ssl;" means that nginx will listen for https, so you must make requests to port 8001 using https not http. You can have both "listen" directives in the same server{}, but you still must use the correct protocol on each port, or there will be errors. 
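For example, a minimal sketch (the port numbers, server_name, certificate paths and upstream address here are placeholders, not your actual config):

server {
    listen 8000;         # plain http - request it as http://example.com:8000/
    listen 8001 ssl;     # tls - request it as https://example.com:8001/

    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://192.0.2.10:8084;
    }
}

Mixing the two usually shows up as SSL_ERROR_RX_RECORD_TOO_LONG in the browser (an https request sent to the plain port) or as "The plain HTTP request was sent to HTTPS port" from nginx (a plain http request sent to the ssl port).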
Cheers, f -- Francis Daly francis at daoine.org From lawrence at begame.nl Wed Apr 22 08:43:26 2020 From: lawrence at begame.nl (Lawrence) Date: Wed, 22 Apr 2020 10:43:26 +0200 Subject: Nginx wp-admin access control In-Reply-To: <6ad022c5-c758-85f2-7dcc-a9e2b0835abe@unix-solution.de> Message-ID: <2702050048-17492@mail6.enem.nl> Thanks everyone for the great support. After many replies I found that nginx did not like the cascading config that was suggested by some. Once I removed that, things seemed to stabilize? and all seems good. Thanks Lawrence From: basti To: Sent: 16/04/2020 6:04 PM Subject: Re: Nginx wp-admin access control when you use fail2ban have a look on ipset it performe better on large lists. Am 16.04.20 um 17:13 schrieb Lawrence: > Greetings All, > > WOW, thanks for all the suggestions guys. Not many of them are > understood, I will try the fail2ban and see how far I get. > > Thanks gaian. > Lawrence > > > *From: * basti > *To: * > *Sent: * 16/04/2020 4:54 PM > *Subject: * Re: Nginx wp-admin access control > > ? ? I have not follow the entire discussion. > > ? ? What is the goal to do with wp-admin? > > ? ? There are several ways to limit access: > ? ? - http basic auth > ? ? - use a x509 cert to authenticate instead of user/pass > ? ? - write a hook plugin to wp_login() to use you own / external login > > ? ? - just use fail2ban to keep bad guys out > ? ? - ... > > ? ? On 16.04.20 16:46, Francis Daly wrote: > ? ? ?> On Wed, Apr 15, 2020 at 12:52:59PM +0200, Lawrence wrote: > ? ? ?> > ? ? ?> Hi there, > ? ? ?> > ? ? ?>> To start, I am very much a beginner to nginx and ?coding. I am a > ? ? application support engineer, but got very little ?development skills. > ? ? ?> > ? ? ?> I don't know WordPress; but on the nginx side, what matters is the > ? ? ?> request that is made (the url, handled in a "location") and the > ? ? way that > ? ? ?> you want nginx to handle that request. > ? ? ?> > ? ? ?> In nginx (in general), one request is handled in one location; > ? ? ?> only the configuration in, or inherited into, that location > ? ? ?> matters. Location-matching does not include the request query > ? ? ?> string. Inheritance is per directive, and is either by replacement or > ? ? ?> not at all. The "*_pass" directives are not inherited; the others > ? ? are. > ? ? ?> > ? ? ?> There are exceptions to this description, but it is probably a good > ? ? ?> enough starting point to understanding the configuration that is > ? ? needed. > ? ? ?> > ? ? ?> The documentation for any directive X can be found from > ? ? ?> http://nginx.org/r/X > ? ? ?> > ? ? ?>> My goal is to have the sites available but the access to all wp > ? ? admin must be limited. > ? ? ?>> below are a few of the solutions I found. Non seem to work > ? ? fully. I assume it is my understanding of nginx configuration. > ? ? ?>> > ? ? ?>> method #1? -- test unsuccessfully. > ? ? ?> > ? ? ?> In this case, does "unsuccessful" mean: the php file is not handled > ? ? ?> when it should be; or the php file is handled when it should not > ? ? be; or > ? ? ?> something else? In general, it is good to be specific -- what > ? ? request was > ? ? ?> made, what response was returned, and what response was wanted > ? ? instead. > ? ? ?> > ? ? ?> > ? ? ?> So, with me not knowing WordPress, your mail and some brief web > ? ? searching > ? ? ?> suggests that you want your nginx to do the following: > ? ? ?> > ? ? ?> * allow any access to any request that ends in ".php", except > ? ? 
?> * restrict access to the request /wp-login.php and > ? ? ?> * restrict access to any php request that starts with /wp-admin/, > ? ? except > ? ? ?> * allow any access to /wp-admin/admin-ajax.php > ? ? ?> > ? ? ?> where "restrict" is to be based on an infrequently-changing list > ? ? of IP > ? ? ?> addresses or address ranges. > ? ? ?> > ? ? ?> And this is in addition to the normal "try_files" config to just get > ? ? ?> wordpress working. > ? ? ?> > ? ? ?> Is that an accurate description of the desired request / response > ? ? ?> handling mapping? > ? ? ?> > ? ? ?> If so, something like (untested): > ? ? ?> > ? ? ?> === > ? ? ?> ? include fastcgi.conf; # has fastcgi_param, etc, but not > ? ? fastcgi_pass > ? ? ?> ? # Can directly paste the relevant lines here instead > ? ? ?> > ? ? ?> ? location / { > ? ? ?> ? ? try_files $uri $uri/ /index.php?$args; > ? ? ?> ? } > ? ? ?> ? location ~ \.php$ { > ? ? ?> ? ? location ~ ^/wp-admin/ { > ? ? ?> ? ? ? allow 192.168.1.0/24; > ? ? ?> ? ? ? deny all; > ? ? ?> ? ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? ? ?> ? ? } > ? ? ?> ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? ? ?> ? } > ? ? ?> ? location = /wp-login.php { > ? ? ?> ? ? allow 192.168.1.0/24; > ? ? ?> ? ? deny all; > ? ? ?> ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? ? ?> ? } > ? ? ?> ? location = /wp-admin/admin-ajax.php { > ? ? ?> ? ? fastcgi_pass unix:/run/php/php7.0-fpm.sock; > ? ? ?> ? } > ? ? ?> === > ? ? ?> > ? ? ?> looks like it should work. There are other ways to arrange things, > ? ? ?> and there is repetition here of the "allow" list; it may be > ? ? simpler to > ? ? ?> maintain that list twice than to use another "include" file. > ? ? ?> > ? ? ?> If you are happy to test and report what fails, then it should be > ? ? possible > ? ? ?> to end up with a suitable config. > ? ? ?> > ? ? ?> Good luck with it, > ? ? ?> > ? ? ?> ? ? ?f > ? ? ?> > ? ? _______________________________________________ > ? ? nginx mailing list > ? ? nginx at nginx.org > ? ? http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From edigarov at qarea.com Wed Apr 22 08:46:25 2020 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 22 Apr 2020 11:46:25 +0300 Subject: need to preserve / in location Message-ID: Hello, Everybody this is directory structure: /front/admin/index.html /front/superadmin/index.html that's what I have in config ??? rewrite ^/(admin)$ /$1/ last; ??? location /admin/ { ??????? index index.html; ??????? root /front; ??????? try_files $uri? admin/index.html;???????????????? #direct all request to index.html ?? 
} and the errors: 2020/04/22 08:35:13 [error] 73#73: *1 open() "/frontindex.html" failed (2: No such file or directory), client: 192.168.224.1, server: , request: "GET /admin HTTP/1.1", host: "127.0.0.1" 192.168.224.1 - - [22/Apr/2020:08:35:13 +0000] "GET /admin HTTP/1.1" 404 146 "-" "curl/7.58.0" "-" 2020/04/22 08:35:24 [error] 73#73: *2 open() "/frontindex.html" failed (2: No such file or directory), client: 192.168.224.1, server: , request: "GET /admin/ HTTP/1.1", host: "127.0.0.1" 192.168.224.1 - - [22/Apr/2020:08:35:24 +0000] "GET /admin/ HTTP/1.1" 404 146 "-" "curl/7.58.0" "-" what's the right config in situation given? -- With best regards, ??????? Gregory Edigarov From edigarov at qarea.com Wed Apr 22 09:18:38 2020 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 22 Apr 2020 12:18:38 +0300 Subject: need to preserve / in location In-Reply-To: References: Message-ID: <282fe9d5-974d-4782-28db-d4b6038ee51d@qarea.com> On 2020-04-22 11:46, Gregory Edigarov wrote: > Hello, Everybody > > this is directory structure: > > /front/admin/index.html > > /front/superadmin/index.html > > that's what I have in config > > ??? rewrite ^/(admin)$ /$1/ last; > ??? location /admin/ { > ??????? index index.html; > ??????? root /front; > ??????? try_files $uri? admin/index.html;???????????????? #direct all > request to index.html > ?? } > > and the errors: > > 2020/04/22 08:35:13 [error] 73#73: *1 open() "/frontindex.html" failed > (2: No such file or directory), client: 192.168.224.1, server: , > request: "GET /admin HTTP/1.1", host: "127.0.0.1" > 192.168.224.1 - - [22/Apr/2020:08:35:13 +0000] "GET /admin HTTP/1.1" > 404 146 "-" "curl/7.58.0" "-" > > 2020/04/22 08:35:24 [error] 73#73: *2 open() "/frontindex.html" failed > (2: No such file or directory), client: 192.168.224.1, server: , > request: "GET /admin/ HTTP/1.1", host: "127.0.0.1" > 192.168.224.1 - - [22/Apr/2020:08:35:24 +0000] "GET /admin/ HTTP/1.1" > 404 146 "-" "curl/7.58.0" "-" > > what's the right config in situation given? > forgot to say: I also have "root /front;" directive in upper level > -- > > With best regards, > > ??????? Gregory Edigarov > From anthony at mindmedia.com.sg Wed Apr 22 11:03:01 2020 From: anthony at mindmedia.com.sg (P.V.Anthony) Date: Wed, 22 Apr 2020 19:03:01 +0800 Subject: Nginx wp-admin access control In-Reply-To: <2702050048-17492@mail6.enem.nl> References: <2702050048-17492@mail6.enem.nl> Message-ID: On 22/4/20 4:43 pm, Lawrence wrote: > Thanks everyone for the great support. > > After many replies I found that nginx did not like the cascading config > that was suggested by some. Once I removed that, things seemed to > stabilize? and all seems good. Please share the final working config. If you do not mind. P.V.Anthony From mdounin at mdounin.ru Wed Apr 22 15:17:36 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Apr 2020 18:17:36 +0300 Subject: need to preserve / in location In-Reply-To: References: Message-ID: <20200422151736.GZ20357@mdounin.ru> Hello! On Wed, Apr 22, 2020 at 11:46:25AM +0300, Gregory Edigarov wrote: > Hello, Everybody > > this is directory structure: > > /front/admin/index.html > > /front/superadmin/index.html > > that's what I have in config > > ??? rewrite ^/(admin)$ /$1/ last; > ??? location /admin/ { > ??????? index index.html; > ??????? root /front; > ??????? try_files $uri? admin/index.html;???????????????? #direct all request to index.html > ?? 
} > > and the errors: > > 2020/04/22 08:35:13 [error] 73#73: *1 open() "/frontindex.html" failed (2: > No such file or directory), client: 192.168.224.1, server: , request: "GET > /admin HTTP/1.1", host: "127.0.0.1" > 192.168.224.1 - - [22/Apr/2020:08:35:13 +0000] "GET /admin HTTP/1.1" 404 146 > "-" "curl/7.58.0" "-" That's stange, because with the configuration given the "/admin" request is expected to be mapped into "admin/index.html" as per try_files, and will end up opening "/frontadmin/index.html". 2020/04/22 17:42:59 [error] 38574#100110: *1 open() "/frontadmin/index.html" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /admin HTTP/1.1", host: "127.0.0.1:8080" > 2020/04/22 08:35:24 [error] 73#73: *2 open() "/frontindex.html" failed (2: > No such file or directory), client: 192.168.224.1, server: , request: "GET > /admin/ HTTP/1.1", host: "127.0.0.1" > 192.168.224.1 - - [22/Apr/2020:08:35:24 +0000] "GET /admin/ HTTP/1.1" 404 > 146 "-" "curl/7.58.0" "-" Same here. 2020/04/22 17:46:20 [error] 38574#100110: *2 open() "/frontadmin/index.html" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /admin/ HTTP/1.1", host: "127.0.0.1:8080" > what's the right config in situation given? The right config depends on what you are trying to get. In most cases, just as simple configuration with appropriate "root" will do what's expected ("index index.html;" is the default and can be safely omitted): location / { root /front; } With such a configuration any request to "/admin" will end up with a redirect to "/admin/" (as long as "/front/admin" is a directory). Any request to "/admin/" will return "/front/admin/index.html" (if exists). And any request to a non-existent file will return 404. If you really want to return a positive response regardless of whether a file exists or not, adding a leading "/" before "admin/index.html" in your configuration might work for you: ??? rewrite ^/(admin)$ /$1/ last; ??? location /admin/ { ??????? root /front; ??????? try_files $uri?/admin/index.html; ?? } (Note that "index index.html;" is meaningless - it is never used, as "try_files" without explicitly specified trailing "/" prevents access to directories.) Alternatively, you may want to simplify configuration into something like: root /front; location = /admin { rewrite ^ /admin/ last; } location /admin/ { error_page 404 = /admin/index.html; log_not_found off; } This configuration works much like the one with only "location /" above, but explicitly handles requests to "/admin" similarly to how it's handled in your configuration, and also handles 404 errors to return appropriate index file. -- Maxim Dounin http://mdounin.ru/ From thomas at glanzmann.de Wed Apr 22 17:07:15 2020 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 22 Apr 2020 19:07:15 +0200 Subject: Load balancing 50000 Citrix ICA sessions through nginx - hardware requirements Message-ID: <20200422170715.GC18205@glanzmann.de> Hello, I would like to use nginx to load balance Citrix ICA sessions (socks over https) to four netscalers. Nginx would just distribute the 50000 sessions to 4 netscalers. Just tcp with ip hash, no ssl offloading necessary. The traffic is approx. 5 Gbit/s. The connections are long running approx. 10 minutes upto 10 hours. I would like to know what hardware is required to pull that off. Obviously 10 Gbit/s interfaces but what aboure CPU and RAM requirements? Can someone guide me? Otherwise I will try to benchmark it. 
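For reference, the sort of configuration I have in mind is roughly the following (only a sketch - the netscaler addresses are placeholders, and as far as I can tell the stream module spells "ip hash" as the hash directive):

stream {
    upstream netscalers {
        hash $remote_addr consistent;   # stick a client to the same netscaler
        server 192.0.2.1:443;
        server 192.0.2.2:443;
        server 192.0.2.3:443;
        server 192.0.2.4:443;
    }

    server {
        listen 443;                     # plain tcp pass-through, no ssl offloading
        proxy_pass netscalers;
        proxy_timeout 12h;              # raised from the 10m default so idle periods
                                        # inside long sessions are not dropped
    }
}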
Cheers, Thomas From emilio.fernandes70 at gmail.com Thu Apr 23 06:37:07 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Thu, 23 Apr 2020 09:37:07 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hi Konstantin, El mar., 21 abr. 2020 a las 20:23, Konstantin Pavlov () escribi?: > Hi Emilio, > > 15.04.2020 14:21, Emilio Fernandes wrote: > > Our policy is to provide packages for officially > upstream-supported > > distributions. > > > > > https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F > > states that they only support x86_64, and aarch64 is unofficial. > > > > > > Here is something you may find interesting. > > https://github.com/varnishcache/varnish-cache/pull/3263 - a PR I've > > created for Varnish Cache > > project. > > It is based on Docker + QEMU and builds packages for different > > versions of Debian/Ubuntu/Centos/Alpine for both x64 and aarch64. > > > > > > Nice work, Martin! > > > > @Konstantin: any idea when the new aarch64 packages will be available ? > > May we help you somehow ? > > I've just published RHEL8/CentOS8 aarch64 packages for nginx stable on > http://nginx.org/packages/rhel/8/aarch64/. The mainline will follow the > suit soon, as well as proper documentation on > http://nginx.org/en/linux_packages.html. > That's great! Thank you! > > With Alpine, it is proving to be more difficult than we thought, as > there are problems runing those on AWS EC2 which we use on our build > farm: https://github.com/mcrute/alpine-ec2-ami/issues/28 . > Looking forward this one to be resolved! Gracias! Emilio > > -- > Konstantin Pavlov > https://www.nginx.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.grigorov at gmail.com Thu Apr 23 09:39:42 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Thu, 23 Apr 2020 12:39:42 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: On Tue, Apr 21, 2020 at 8:23 PM Konstantin Pavlov wrote: > Hi Emilio, > > 15.04.2020 14:21, Emilio Fernandes wrote: > > Our policy is to provide packages for officially > upstream-supported > > distributions. > > > > > https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F > > states that they only support x86_64, and aarch64 is unofficial. > > > > > > Here is something you may find interesting. > > https://github.com/varnishcache/varnish-cache/pull/3263 - a PR I've > > created for Varnish Cache > > project. > > It is based on Docker + QEMU and builds packages for different > > versions of Debian/Ubuntu/Centos/Alpine for both x64 and aarch64. > > > > > > Nice work, Martin! > > > > @Konstantin: any idea when the new aarch64 packages will be available ? > > May we help you somehow ? > > I've just published RHEL8/CentOS8 aarch64 packages for nginx stable on > http://nginx.org/packages/rhel/8/aarch64/. The mainline will follow the > suit soon, as well as proper documentation on > http://nginx.org/en/linux_packages.html. > > Awesome! > With Alpine, it is proving to be more difficult than we thought, as > there are problems runing those on AWS EC2 which we use on our build > farm: https://github.com/mcrute/alpine-ec2-ami/issues/28 . > Thanks for the update! I've just "voted" on this issue! 
Regards, Martin > > -- > Konstantin Pavlov > https://www.nginx.com/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Thu Apr 23 16:10:38 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 23 Apr 2020 19:10:38 +0300 Subject: njs-0.4.0 Message-ID: <0D094862-CF66-4162-8005-F88F7DC320DA@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release focuses on extending http and stream modules. Notable new features: - js_import directive. : nginx.conf: : js_import foo.js; : js_import lib from path/file.js; : : location / { : js_content foo.bar; : } : : foo.js: : function bar(r) { : r.return(200); : } : : export default {bar}; - multi-value headers support in r.headersOut: : foo.js: : function content(r) { : r.headersOut['Set-Cookie'] = [ : 'foo=111; Max-Age=3600; path=/', : 'bar=qqq; Max-Age=86400; path=/' : ]; : : r.return(200); : } You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.4.0 23 Apr 2020 nginx modules: *) Feature: added js_import directive. *) Feature: added support for multi-value headers in r.headersOut. *) Improvement: iteration over r.headersOut with special headers. *) Improvement: iteration over r.headersOut with duplicates. *) Change: r.responseBody property handler now returns "undefined" instead of throwing an exception if response body is not available. Core: *) Feature: added script arguments support in CLI. *) Feature: converting externals values to native js objects. *) Bugfix: fixed NULL-pointer dereference in "__proto__" property handler. *) Bugfix: fixed handling of no-newline at the end of the script. *) Bugfix: fixed RegExp() constructor with empty pattern and non-empty flags. *) Bugfix: fixed String.prototype.replace() when function returns non-string. *) Bugfix: fixed reading of pseudofiles in "fs". From lagged at gmail.com Fri Apr 24 00:05:24 2020 From: lagged at gmail.com (Andrei) Date: Fri, 24 Apr 2020 03:05:24 +0300 Subject: UDP Load balancing - [Solved] In-Reply-To: <4141e9f1f26d86f0cba80ec9ebba1904.NginxMailingListEnglish@forum.nginx.org> References: <20200128140227.7gavlqio4wanoy73@Romans-MacBook-Pro.local> <1b04611e510e3e3cd69fa13c756204ac.NginxMailingListEnglish@forum.nginx.org> <4141e9f1f26d86f0cba80ec9ebba1904.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Apr 22, 2020 at 4:22 AM deprito wrote: > Hello @arigatox, > > do you mind share with me, how to LB UDP protocol like wireguard? > > My nginx.conf > user www-data; > worker_processes auto; > pid /run/nginx.pid; > include /etc/nginx/modules-enabled/*.conf; > > stream { > > upstream syslog_udp { > server x.x.x.x:51820; > server x.x.x.x:51820; > } > > server { > listen 51820 udp; > proxy_pass syslog_udp; > proxy_responses 0; > } > > > } > > This is hilarious: > worker_rlimit_nofile 1000000; > What makes you think your box can handle that many open files?
:D > > events { > > worker_connections 20000; > > } > > > my nginx : > nginx version: nginx/1.16.1 > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,286837,287751#msg-287751 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carsten.delellis at DELELLIS.NET Fri Apr 24 09:01:36 2020 From: carsten.delellis at DELELLIS.NET (Carsten Laun-De Lellis) Date: Fri, 24 Apr 2020 09:01:36 +0000 Subject: Using NGINX as reverse proxy to webmin on a remote server Message-ID: Hi all I am new to Nginx and I don't get a setup running with one central web server and several webin servers. The webin servers are setup according the following scheme: WebminX runs on https://hostX.local.domain:10000. My goal is to setup a central Nginx server and reverse proxy to the different webmin servers. Therefor I created the following conf: server { listen 443 ssl; server_name nginxhost.local.domain; ssl_certificate /certs/ nginxhost.delellis.net.cert.pem; ssl_certificate_key /certs/ nginxhost.delellis.net.privkey.pem; # NGINX usually only allows 1M per request. Increase this to JIRA's maximum attachment size (10M by default) client_max_body_size 10M; location /host1 { proxy_pass https://host1.local.domain:10000/; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_buffering off; sub_filter_once off; } location /host2 { proxy_pass https://host2.local.domain:10000/; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_buffering off; sub_filter_once off; } } Unfortunately this is not working. I have checked the internet but found only how-tos where Nginx and webmin server running on the same host. Also these how-tos don't work for me. I would appreciate any help on this. Mit freundlichem Gru? / Best regards Carsten Laun-De Lellis Hauptstrasse 13 D - 67705 Trippstadt Phone: +49 6306 5269850 Mobile: +49 151 275 30865 Fax: +49 6306 992142 email: carsten.delellis at delellis.net http://www.linkedin.com/in/carstenlaundelellis USt.-ID.: DE257421372 --------------------------------------------------- Diese E-Mail k?nnte vertrauliche und/oder rechtlich gesch?tzte Informationen enthalten. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden. -------------- next part -------------- An HTML attachment was scrubbed... URL: From edigarov at qarea.com Fri Apr 24 13:54:47 2020 From: edigarov at qarea.com (Gregory Edigarov) Date: Fri, 24 Apr 2020 16:54:47 +0300 Subject: need to preserve / in location In-Reply-To: <20200422151736.GZ20357@mdounin.ru> References: <20200422151736.GZ20357@mdounin.ru> Message-ID: Maxim, Thanks for a great explanation. 
On 2020-04-22 18:17, Maxim Dounin wrote: > Hello! > > On Wed, Apr 22, 2020 at 11:46:25AM +0300, Gregory Edigarov wrote: > >> Hello, Everybody >> >> this is directory structure: >> >> /front/admin/index.html >> >> /front/superadmin/index.html >> >> that's what I have in config >> >> ??? rewrite ^/(admin)$ /$1/ last; >> ??? location /admin/ { >> ??????? index index.html; >> ??????? root /front; >> ??????? try_files $uri? admin/index.html;???????????????? #direct all request to index.html >> ?? } >> >> and the errors: >> >> 2020/04/22 08:35:13 [error] 73#73: *1 open() "/frontindex.html" failed (2: >> No such file or directory), client: 192.168.224.1, server: , request: "GET >> /admin HTTP/1.1", host: "127.0.0.1" >> 192.168.224.1 - - [22/Apr/2020:08:35:13 +0000] "GET /admin HTTP/1.1" 404 146 >> "-" "curl/7.58.0" "-" > That's stange, because with the configuration given the "/admin" > request is expected to be mapped into "admin/index.html" as per > try_files, and will end up opening "/frontadmin/index.html". > > 2020/04/22 17:42:59 [error] 38574#100110: *1 open() "/frontadmin/index.html" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /admin HTTP/1.1", host: "127.0.0.1:8080" > >> 2020/04/22 08:35:24 [error] 73#73: *2 open() "/frontindex.html" failed (2: >> No such file or directory), client: 192.168.224.1, server: , request: "GET >> /admin/ HTTP/1.1", host: "127.0.0.1" >> 192.168.224.1 - - [22/Apr/2020:08:35:24 +0000] "GET /admin/ HTTP/1.1" 404 >> 146 "-" "curl/7.58.0" "-" > Same here. > > 2020/04/22 17:46:20 [error] 38574#100110: *2 open() "/frontadmin/index.html" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /admin/ HTTP/1.1", host: "127.0.0.1:8080" > >> what's the right config in situation given? > The right config depends on what you are trying to get. In most > cases, just as simple configuration with appropriate "root" will do > what's expected ("index index.html;" is the default and can be > safely omitted): > > location / { > root /front; > } > > With such a configuration any request to "/admin" will end up with > a redirect to "/admin/" (as long as "/front/admin" is a > directory). Any request to "/admin/" will return > "/front/admin/index.html" (if exists). And any request to a > non-existent file will return 404. > > If you really want to return a positive response regardless of > whether a file exists or not, adding a leading "/" before > "admin/index.html" in your configuration might work for you: > > ??? rewrite ^/(admin)$ /$1/ last; > ??? location /admin/ { > ??????? root /front; > ??????? try_files $uri?/admin/index.html; > ?? } > > (Note that "index index.html;" is meaningless - it is never used, > as "try_files" without explicitly specified trailing "/" prevents > access to directories.) > > Alternatively, you may want to simplify configuration into > something like: > > root /front; > > location = /admin { > rewrite ^ /admin/ last; > } > > location /admin/ { > error_page 404 = /admin/index.html; > log_not_found off; > } > > This configuration works much like the one with only "location /" > above, but explicitly handles requests to "/admin" similarly to > how it's handled in your configuration, and also handles 404 > errors to return appropriate index file. 
From francis at daoine.org Sat Apr 25 17:45:12 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 25 Apr 2020 18:45:12 +0100 Subject: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: References: Message-ID: <20200425174512.GA20939@daoine.org> On Fri, Apr 24, 2020 at 09:01:36AM +0000, Carsten Laun-De Lellis wrote: Hi there, > The webin servers are setup according the following scheme: > > WebminX runs on https://hostX.local.domain:10000. > > My goal is to setup a central Nginx server and reverse proxy to the different webmin servers. > location /host1 { > proxy_pass https://host1.local.domain:10000/; For this, you probably want "location /host1/" (with the trailing /). > Unfortunately this is not working. I have checked the internet but found only how-tos where Nginx and webmin server running on the same host. Also these how-tos don't work for me. > What does "not working" mean here? You make one specific request; you want to get one specific response; but you get a different response instead? I suspect that things will be simpler if you are happy to reconfigure all of the "webmin" instances so that they believe they are installed at the same place in the url hierarchy as the external users see. That is: * add the line webprefix=/host1 to /etc/webmin/config on host1 * add the line webprefix=/host2 to /etc/webmin/config on host2 and then change your config to only use one trailing slash like so: location /host1/ { proxy_pass https://host1.local.domain:10000; And if there is still a problem, if you can show the request/response that does not do what you expect, it may be simpler for others to understand and help. Cheers, f -- Francis Daly francis at daoine.org From gray at nxg.name Sun Apr 26 12:49:19 2020 From: gray at nxg.name (Norman Gray) Date: Sun, 26 Apr 2020 13:49:19 +0100 Subject: rewrite and map ??interfering regexps Message-ID: Greetings. I'm trying to do some fairly intricate URI rewriting, and the behaviour of the 'rewrite' statement does not correspond to anything I can explain from the docs. The goal is that /A/foo/modified is rewritten to /_newroot/foo and /B/foo/modified to /_defaultroot/foo. I hope to achieve this with map $uri $modroot { default _defaultroot; ~^/A _newroot; } and location ~ /modified$ { rewrite ^(.+)/modified$ /$modroot/-$1-; } (in the real config, /_newroot is reverse-proxied to a web service, so that the URIs it handles end up grafted on to selected trees; the 'map' is intended to select/limit which URIs are passed on to this service). The complete nginx.conf is at the bottom. Looking in the error log, when I retrieve /hello/modified I find 2020/04/26 12:23:28 [notice] 63328#0: *5 "^(.+)/modified$" matches "/hello/modified", client: 127.0.0.1, server: localhost, request: "GET /hello/modified HTTP/1.1", host: "localhost" 2020/04/26 12:23:28 [notice] 63328#0: *5 rewritten data: "/_defaultroot/-/hello-", args: "", client: 127.0.0.1, server: localhost, request: "GET /hello/modified HTTP/1.1", host: "localhost" ...which is fine: the map defines $modroot as /_defaultroot, and the rewrite captures the /hello. 
But retrieving /A/hello/modified, 2020/04/26 13:00:51 [notice] 63828#0: *6 "^(.+)/modified$" matches "/A/hello/modified", client: 127.0.0.1, server: localhost, request: "GET /A/hello/modified HTTP/1.1", host: "localhost" 2020/04/26 13:00:51 [notice] 63828#0: *6 rewritten data: "/_newroot/--", args: "", client: 127.0.0.1, server: localhost, request: "GET /A/hello/modified HTTP/1.1", host: "localhost" I would expect this to be rewritten to /_newroot/-/A/hello- Here, the map has defined $modroot as /_newroot (which is correct). The 'rewrite' _has_ matched, but the $1 in that line appears to be empty. Note the '+' in the regexp: there is supposed be be a string of non-zero length in there (ie, this is ruling out that I'm inadvertently matching, and replacing, an empty string, as a result of being somehow confused about where in the string '^' is matching). It's as if the regexp match in the 'map' is somehow interfering with the group-capturing in the 'rewrite'. As a workaround, I can get this to work with ^(?/A) in the 'map', and using $newprefix in the 'rewrite', but that's fiddly/ugly and more confusing than localising the rewriting to the 'rewrite' statement. Am I misunderstanding how 'rewrite' matches things, or is there an issue here? Best wishes, Norman % ../sbin/nginx -V nginx version: nginx/1.18.0 built by clang 11.0.3 (clang-1103.0.32.29) configure arguments: --prefix=/Data/tools/nginx-1.18 --with-pcre=../pcre-8.44 Both nginx and pcre built, as shown, from source. This is on macOS 10.15.3, but I get the same results with (packaged) nginx/1.16.1 on FreeBSD 12.1 Complete nginx.conf: worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; error_log logs/error.log debug; rewrite_log on; sendfile on; keepalive_timeout 65; # selected URIs are dynamically 'rehomed' map $uri $modroot { default _defaultroot; ~^/A _newroot; } server { listen 80; server_name localhost; location / { root html; index index.html index.htm; } location ~ /modified$ { rewrite ^(.+)/modified$ /$modroot/-$1-; } location /_defaultroot { # not a 'rehomed' one internal; error_page 404 /404-private.html; } location /_newroot { internal; root html/x; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } -- Norman Gray : https://nxg.me.uk From themadbeaker at gmail.com Sun Apr 26 14:59:03 2020 From: themadbeaker at gmail.com (J.R.) Date: Sun, 26 Apr 2020 09:59:03 -0500 Subject: limit_req at server level gives 404 error for files rewritten at location level? Message-ID: I skimmed over the ngx_http_limit_req_module.c and didn't see anything obvious in relation to file checking, but here's my scenario... I have a location block that will re-write the requested 'versioned' file name to the actual common file name, so I can set some things immutable without having to deal with changing tons of physical files (only the HTML changes). example (shortened for brevity): # file.v1.gif = file.gif location ~* (.+)\.v\d+\.(gif|jpg|png)$ { try_files $1.$2 $uri @proxy; } When I had the following limit_req in my @proxy 'location' block, limiting worked fine. location @proxy { limit_req zone=reqlimit burst=24 delay=12; proxy_pass http://backend; } Obviously the above limit_req never interacted with the rewriting 'location' mentioned above (the @proxy at the end of try_files was just a fallback in case of errors). 
However, when I moved the limit_req up to the 'server' level, I started getting 404 errors for all the image requests that were handled by the above mentioned rewriting location block. All other files were handled just fine. Kind of scratching my head on this one. Any thoughts Maxim? Thanks! From mdounin at mdounin.ru Sun Apr 26 16:28:38 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 26 Apr 2020 19:28:38 +0300 Subject: rewrite and map ??interfering regexps In-Reply-To: References: Message-ID: <20200426162838.GM20357@mdounin.ru> Hello! On Sun, Apr 26, 2020 at 01:49:19PM +0100, Norman Gray wrote: > Greetings. > > I'm trying to do some fairly intricate URI rewriting, and the behaviour of > the 'rewrite' statement does not correspond to anything I can explain from > the docs. > > The goal is that /A/foo/modified is rewritten to /_newroot/foo and > /B/foo/modified to /_defaultroot/foo. I hope to achieve this with > > map $uri $modroot { > default _defaultroot; > ~^/A _newroot; > } > > and > > location ~ /modified$ { > rewrite ^(.+)/modified$ /$modroot/-$1-; > } > > (in the real config, /_newroot is reverse-proxied to a web service, so that > the URIs it handles end up grafted on to selected trees; the 'map' is > intended to select/limit which URIs are passed on to this service). > > The complete nginx.conf is at the bottom. > > Looking in the error log, when I retrieve /hello/modified I find > > 2020/04/26 12:23:28 [notice] 63328#0: *5 "^(.+)/modified$" matches > "/hello/modified", client: 127.0.0.1, server: localhost, request: "GET > /hello/modified HTTP/1.1", host: "localhost" > 2020/04/26 12:23:28 [notice] 63328#0: *5 rewritten data: > "/_defaultroot/-/hello-", args: "", client: 127.0.0.1, server: localhost, > request: "GET /hello/modified HTTP/1.1", host: "localhost" > > ...which is fine: the map defines $modroot as /_defaultroot, and the rewrite > captures the /hello. > > But retrieving /A/hello/modified, > > 2020/04/26 13:00:51 [notice] 63828#0: *6 "^(.+)/modified$" matches > "/A/hello/modified", client: 127.0.0.1, server: localhost, request: "GET > /A/hello/modified HTTP/1.1", host: "localhost" > 2020/04/26 13:00:51 [notice] 63828#0: *6 rewritten data: "/_newroot/--", > args: "", client: 127.0.0.1, server: localhost, request: "GET > /A/hello/modified HTTP/1.1", host: "localhost" > > I would expect this to be rewritten to /_newroot/-/A/hello- > > Here, the map has defined $modroot as /_newroot (which is correct). The > 'rewrite' _has_ matched, but the $1 in that line appears to be empty. Note > the '+' in the regexp: there is supposed be be a string of non-zero length > in there (ie, this is ruling out that I'm inadvertently matching, and > replacing, an empty string, as a result of being somehow confused about > where in the string '^' is matching). > > It's as if the regexp match in the 'map' is somehow interfering with the > group-capturing in the 'rewrite'. > > As a workaround, I can get this to work with ^(?/A) in the 'map', > and using $newprefix in the 'rewrite', but that's fiddly/ugly and more > confusing than localising the rewriting to the 'rewrite' statement. > > Am I misunderstanding how 'rewrite' matches things, or is there an issue > here? The issue is that $1..$N variables as used by the second argument of the rewrite directive are from the last regular expression matched. And the last regular expression is not the one from the first argument of the rewrite directive when using a variable from map with regular expressions. 
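For example, one possible workaround (an untested sketch based on the map/rewrite pair quoted above) is to use a named capture in the rewrite pattern; named captures are stored as separate variables, so the match performed while evaluating $modroot does not affect them:

map $uri $modroot {
    default  _defaultroot;
    ~^/A     _newroot;
}

location ~ /modified$ {
    # $head survives the map's regular expression match
    rewrite ^(?<head>.+)/modified$ /$modroot/-$head-;
}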
Relevant ticket is here: https://trac.nginx.org/nginx/ticket/564 Unfortunately, there is no obvious solution. On the other hand, this is something relatively easy to work around. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Apr 26 16:43:39 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 26 Apr 2020 19:43:39 +0300 Subject: limit_req at server level gives 404 error for files rewritten at location level? In-Reply-To: References: Message-ID: <20200426164339.GN20357@mdounin.ru> Hello! On Sun, Apr 26, 2020 at 09:59:03AM -0500, J.R. wrote: > I skimmed over the ngx_http_limit_req_module.c and didn't see anything > obvious in relation to file checking, but here's my scenario... > > I have a location block that will re-write the requested 'versioned' > file name to the actual common file name, so I can set some things > immutable without having to deal with changing tons of physical files > (only the HTML changes). > > example (shortened for brevity): > > # file.v1.gif = file.gif > location ~* (.+)\.v\d+\.(gif|jpg|png)$ { > try_files $1.$2 $uri @proxy; > } > > When I had the following limit_req in my @proxy 'location' block, > limiting worked fine. > > location @proxy { > limit_req zone=reqlimit burst=24 delay=12; > proxy_pass http://backend; > } > > Obviously the above limit_req never interacted with the rewriting > 'location' mentioned above (the @proxy at the end of try_files was > just a fallback in case of errors). > > However, when I moved the limit_req up to the 'server' level, I > started getting 404 errors for all the image requests that were > handled by the above mentioned rewriting location block. All other > files were handled just fine. > > Kind of scratching my head on this one. Any thoughts Maxim? Unfortunately, configuration shown is not enough to say anything for sure, but my best guess is as follows: You are using "try_files $1.$2 ...", and your configuration expects that $1 and $2 variables are from the regular expression in the location. This might not be true as long as there are any other regular expressions in the configuration. In particular, if limit_req uses a map with regular expressions, this might result in $1.$2 to be set to something completely different from what was expected from the location matching. The general rule is: avoid using positional captures from regular expressions in location and server_name matching, these can be used only in very simple configurations. -- Maxim Dounin http://mdounin.ru/ From themadbeaker at gmail.com Sun Apr 26 18:38:59 2020 From: themadbeaker at gmail.com (J.R.) Date: Sun, 26 Apr 2020 13:38:59 -0500 Subject: limit_req at server level gives 404 error for files rewritten at location level? Message-ID: > In particular, if limit_req uses a map with regular expressions, > this might result in $1.$2 to be set to something completely > different from what was expected from the location matching. > > The general rule is: avoid using positional captures from regular > expressions in location and server_name matching, these can be > used only in very simple configurations. You were right! I forgot I had a map matching /24 of the $binary_subnet_addr that was being used by the limit_req_zone directive. I went through and gave names to my capture groups just to ensure this type of conflict doesn't happen again. Thanks again for your help! 
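For anyone who lands on this thread later, a rough sketch of what the renamed captures look like in the versioned-file location from above (the capture names are arbitrary, everything else is as originally posted, and limit_req stays at the server level):

    # file.v1.gif = file.gif, with named captures so a regex used
    # elsewhere (e.g. in a map feeding limit_req_zone) cannot clobber them
    location ~* (?<base>.+)\.v\d+\.(?<ext>gif|jpg|png)$ {
        try_files $base.$ext $uri @proxy;
    }
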
From gray at nxg.name Sun Apr 26 19:03:18 2020 From: gray at nxg.name (Norman Gray) Date: Sun, 26 Apr 2020 20:03:18 +0100 Subject: rewrite and map ??interfering regexps In-Reply-To: <20200426162838.GM20357@mdounin.ru> References: <20200426162838.GM20357@mdounin.ru> Message-ID: Maxim, hello. On 26 Apr 2020, at 17:28, Maxim Dounin wrote: > Relevant ticket is here: > > https://trac.nginx.org/nginx/ticket/564 > > Unfortunately, there is no obvious solution. On the other hand, > this is something relatively easy to work around. Aha, so it _is_ the map regexp and the rewrite regexp mutually interfering! Thanks for the speedy insight. Looking through the comments in the ticket, I agree with you that 'the current behaviour is bad, and should be fixed'. If only on a principle of least surprise. Until it is fixed, however, it would be extremely useful if, in the description of the 'map' stanza (ie, in ) it mentioned that the regexp in 'map' can interfere with the regexp in a 'rewrite' directive, in such a way that positional groups in the latter don't work. It could note that this is a (temporary?) defect, but that until it is fixed, using named groups in the 'rewrite' regexp is a good workaround, and give an example. It would be better here than in the documentation of 'rewrite', as that would keep the 'rewrite' documentation relatively simple. It only needs to be seen by people using 'rewrite' and 'map' together, who might be assumed to be marginally more sophisticated users. Best wishes, Norman -- Norman Gray : https://nxg.me.uk From themadbeaker at gmail.com Sun Apr 26 20:28:02 2020 From: themadbeaker at gmail.com (J.R.) Date: Sun, 26 Apr 2020 15:28:02 -0500 Subject: rewrite and map ??interfering regexps Message-ID: > Until it is fixed, however, it would be extremely useful if, in the > description of the 'map' stanza it mentioned > that the regexp in 'map' can interfere with the regexp in a 'rewrite' > directive, in such a way that positional groups in the latter don't > work. Yeah, I just realized I posted a question a couple hours after yours, and the answer was the same with the positional capture in a map causing issues with other directives after it... I would agree that adding a note in the map directive documentation would probably go a long way to help eliminate a lot of these redundant troubleshooting issues. From carsten.delellis at DELELLIS.NET Mon Apr 27 06:19:11 2020 From: carsten.delellis at DELELLIS.NET (Carsten Laun-De Lellis) Date: Mon, 27 Apr 2020 06:19:11 +0000 Subject: AW: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: <20200425174512.GA20939@daoine.org> References: <20200425174512.GA20939@daoine.org> Message-ID: Hi Francis First of all. Thank you very much for your quick reply. As I said I am new to Nginx and not 100% sure, what information you need to help me here. I tried your config, but it doesn't work. This means I am forwarded to the webmin login page, but can see the basic html only (login form, headline). I cannot see any graphical elements, like colors, gifs .... I have attached screenshots from the login page, and after login. I have also attached a simple network drawing how the servers are connected. If you need some more information please let me know what to look for in the logs. Mit freundlichem Gru? / Best regards Carsten Laun-De Lellis ? Hauptstrasse 13 D - 67705 Trippstadt ? Phone: +49 6306 5269850 Mobile: +49 151 275 30865 Fax:???? +49 6306 992142 email:?carsten.delellis at delellis.net ? 
http://www.linkedin.com/in/carstenlaundelellis ? USt.-ID.: DE257421372 ? --------------------------------------------------- Diese E-Mail k?nnte vertrauliche und/oder rechtlich gesch?tzte Informationen enthalten. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden. -----Urspr?ngliche Nachricht----- Von: Francis Daly Gesendet: Saturday, April 25, 2020 7:45 PM An: nginx at nginx.org Betreff: Re: Using NGINX as reverse proxy to webmin on a remote server On Fri, Apr 24, 2020 at 09:01:36AM +0000, Carsten Laun-De Lellis wrote: Hi there, > The webin servers are setup according the following scheme: > > WebminX runs on https://hostX.local.domain:10000. > > My goal is to setup a central Nginx server and reverse proxy to the different webmin servers. > location /host1 { > proxy_pass https://host1.local.domain:10000/; For this, you probably want "location /host1/" (with the trailing /). > Unfortunately this is not working. I have checked the internet but found only how-tos where Nginx and webmin server running on the same host. Also these how-tos don't work for me. > What does "not working" mean here? You make one specific request; you want to get one specific response; but you get a different response instead? I suspect that things will be simpler if you are happy to reconfigure all of the "webmin" instances so that they believe they are installed at the same place in the url hierarchy as the external users see. That is: * add the line webprefix=/host1 to /etc/webmin/config on host1 * add the line webprefix=/host2 to /etc/webmin/config on host2 and then change your config to only use one trailing slash like so: location /host1/ { proxy_pass https://host1.local.domain:10000; And if there is still a problem, if you can show the request/response that does not do what you expect, it may be simpler for others to understand and help. Cheers, f -- Francis Daly francis at daoine.org -------------- next part -------------- A non-text attachment was scrubbed... Name: webmin.png Type: image/png Size: 20356 bytes Desc: webmin.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: webmin_after_login.png Type: image/png Size: 29593 bytes Desc: webmin_after_login.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Nginx_Webmin.png Type: image/png Size: 19206 bytes Desc: Nginx_Webmin.png URL: From paul at iwascoding.com Mon Apr 27 07:57:57 2020 From: paul at iwascoding.com (Paul Hecker) Date: Mon, 27 Apr 2020 09:57:57 +0200 Subject: Implementation of http2/RST_STREAM in NGINX 1.18.0 Message-ID: Hi, it seems that macOS still has an issue with the proper handling of RST_STREAM. Since NGINX 1.18.0 the proper handling of RST_STREAM is re-enabled in this commit: https://hg.nginx.org/nginx/rev/2e61e4b6bcd9 I used git bisect to track this down. Our server mainly handles basic-auth protected image uploads through a CGI. All the clients are using NSURLSession to connect to the CGI. 
After the 401 reply NGINX is sending the RST_STREAM (as the images may be quite large and the upload continues) but the NSURLSession and its subcomponents are not re-trying with an authorized requst. Instead they are failing with an error. As I can patch the sources and build my own version as a work-around, I would like to send you an heads up. Maybe this is an issue with all browsers on macOS/iOS that are using the NSURLSession subsystem. Also you may consider adding a configuration option for the RST_STREAM handling, so that the user/administrator can decide whether most of its web-clients properly support the RST_STREAM. Thanks, Paul -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4106 bytes Desc: not available URL: From anthony at mindmedia.com.sg Mon Apr 27 09:51:23 2020 From: anthony at mindmedia.com.sg (P.V.Anthony) Date: Mon, 27 Apr 2020 17:51:23 +0800 Subject: AW: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: References: <20200425174512.GA20939@daoine.org> Message-ID: <0bc63edc-7443-2c3f-d37c-204e993d017e@mindmedia.com.sg> On 27/4/20 2:19 pm, Carsten Laun-De Lellis wrote: > As I said I am new to Nginx and not 100% sure, what information you need to help me here. > > I tried your config, but it doesn't work. This means I am forwarded to the webmin login page, but can see the basic html only (login form, headline). I cannot see any graphical elements, like colors, gifs .... Checkout the following link. https://serverfault.com/questions/443482/proxying-webmin-with-nginx P.V.Anthony From carsten.delellis at DELELLIS.NET Mon Apr 27 12:49:16 2020 From: carsten.delellis at DELELLIS.NET (Carsten Laun-De Lellis) Date: Mon, 27 Apr 2020 12:49:16 +0000 Subject: AW: AW: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: <0bc63edc-7443-2c3f-d37c-204e993d017e@mindmedia.com.sg> References: <20200425174512.GA20939@daoine.org> <0bc63edc-7443-2c3f-d37c-204e993d017e@mindmedia.com.sg> Message-ID: <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> Hi Anthony Thank you for your quick reply. I've tried to configure my servers according to the link you sent, but it didn't work out. The config on the Nginx server looks like: server { server_name vml000036.delellis.net; listen 192.168.178.36:80; location /vml000032 { proxy_pass http://192.168.1.32:10000; proxy_set_header Host $host; } } The webmin config on the upstream server looks like: path=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin passwd_cindex=2 ld_env=LD_LIBRARY_PATH tempdelete_days=7 by_view=0 find_pid_command=ps auwwwx | grep NAME | grep -v grep | awk '{ print $2 }' passwd_pindex=1 passwd_file=/etc/shadow passwd_mindex=4 passwd_uindex=0 os_type=debian-linux os_version=9.0 real_os_type=Ubuntu Linux real_os_version=18.04.4 lang=en.UTF-8 log=1 referers_none=1 md5pass=1 theme=authentic-theme product=webmin webprefix=/vml000032 webprefixnoredir=1 referer=vml000036.delellis.net I have tried also as referer the IP Address of the Nginx server, but didn't work either. When I open the page in my webbrowser I get the logon screen to the webmin sever on my Nginx hostsystem. Not on vml000032. But even when I try to login the page refreshes and nothing else happens. Mit freundlichem Gru? / Best regards Carsten Laun-De Lellis ? Hauptstrasse 13 D - 67705 Trippstadt ? Phone: +49 6306 5269850 Mobile: +49 151 275 30865 Fax:???? +49 6306 992142 email:?carsten.delellis at delellis.net ? http://www.linkedin.com/in/carstenlaundelellis ? 
USt.-ID.: DE257421372 ? --------------------------------------------------- Diese E-Mail k?nnte vertrauliche und/oder rechtlich gesch?tzte Informationen enthalten. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden. -----Urspr?ngliche Nachricht----- Von: P.V.Anthony Gesendet: Monday, April 27, 2020 11:51 AM An: nginx at nginx.org Betreff: Re: AW: Using NGINX as reverse proxy to webmin on a remote server On 27/4/20 2:19 pm, Carsten Laun-De Lellis wrote: > As I said I am new to Nginx and not 100% sure, what information you need to help me here. > > I tried your config, but it doesn't work. This means I am forwarded to the webmin login page, but can see the basic html only (login form, headline). I cannot see any graphical elements, like colors, gifs .... Checkout the following link. https://serverfault.com/questions/443482/proxying-webmin-with-nginx P.V.Anthony -------------- next part -------------- A non-text attachment was scrubbed... Name: webminlogin36.png Type: image/png Size: 26901 bytes Desc: webminlogin36.png URL: From mahmood.nt at gmail.com Mon Apr 27 13:53:59 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Mon, 27 Apr 2020 18:23:59 +0430 Subject: Want to use --emit-relocs in the linker step Message-ID: Hi, I want to add '--emit-relocs' at the linker stage while building nginx, I have edited the objs/Makefile to be like this: $(LINK) -o objs/nginx \ objs/src/core/nginx.o \ .... objs/ngx_modules.o \ -ldl -lpthread -lcrypt -lpcre -lz --emit-relocs \ -Wl,-E However, I get this error cc: error: unrecognized command line option '--emit-relocs' Any idea to fix that? Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From defan at nginx.com Mon Apr 27 13:58:21 2020 From: defan at nginx.com (Andrei Belov) Date: Mon, 27 Apr 2020 16:58:21 +0300 Subject: Want to use --emit-relocs in the linker step In-Reply-To: References: Message-ID: <9558C731-8F7E-45C1-9312-A8AADF614E0F@nginx.com> > On 27 Apr 2020, at 16:53, Mahmood Naderan wrote: > > Hi, > I want to add '--emit-relocs' at the linker stage while building nginx, I have edited the objs/Makefile to be like this: > > $(LINK) -o objs/nginx \ > objs/src/core/nginx.o \ > .... > objs/ngx_modules.o \ > -ldl -lpthread -lcrypt -lpcre -lz --emit-relocs \ > -Wl,-E > > > However, I get this error > > cc: error: unrecognized command line option '--emit-relocs' > > Any idea to fix that? As it is linker option, you should use -Wl,--emit-relocs instead. Also, the better way is to specify linker options via "--with-ld-opt" nginx configure option, e.g.: ./configure --with-ld-opt="-Wl,--emit-relocs" HTH, -- Andrei From mahmood.nt at gmail.com Mon Apr 27 14:15:17 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Mon, 27 Apr 2020 18:45:17 +0430 Subject: Want to use --emit-relocs in the linker step In-Reply-To: <9558C731-8F7E-45C1-9312-A8AADF614E0F@nginx.com> References: <9558C731-8F7E-45C1-9312-A8AADF614E0F@nginx.com> Message-ID: Thank you. That is right. 
Regards, Mahmood On Mon, Apr 27, 2020 at 6:28 PM Andrei Belov wrote: > > > On 27 Apr 2020, at 16:53, Mahmood Naderan wrote: > > > > Hi, > > I want to add '--emit-relocs' at the linker stage while building nginx, > I have edited the objs/Makefile to be like this: > > > > $(LINK) -o objs/nginx \ > > objs/src/core/nginx.o \ > > .... > > objs/ngx_modules.o \ > > -ldl -lpthread -lcrypt -lpcre -lz --emit-relocs \ > > -Wl,-E > > > > > > However, I get this error > > > > cc: error: unrecognized command line option '--emit-relocs' > > > > Any idea to fix that? > > As it is linker option, you should use -Wl,--emit-relocs instead. > > Also, the better way is to specify linker options via "--with-ld-opt" > nginx configure option, e.g.: > > ./configure --with-ld-opt="-Wl,--emit-relocs" > > > HTH, > > -- Andrei > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 27 14:54:03 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Apr 2020 17:54:03 +0300 Subject: Want to use --emit-relocs in the linker step In-Reply-To: References: Message-ID: <20200427145403.GQ20357@mdounin.ru> Hello! On Mon, Apr 27, 2020 at 06:23:59PM +0430, Mahmood Naderan wrote: > Hi, > I want to add '--emit-relocs' at the linker stage while building nginx, I > have edited the objs/Makefile to be like this: > > $(LINK) -o objs/nginx \ > objs/src/core/nginx.o \ > .... > objs/ngx_modules.o \ > -ldl -lpthread -lcrypt -lpcre -lz --emit-relocs \ > -Wl,-E > > > However, I get this error > > cc: error: unrecognized command line option '--emit-relocs' > > Any idea to fix that? Likely you have to use -Wl,--emit-relocs instead, since by nginx does not use linker directly, but rather calls it via compiler instead. Note well that there is no need to edit objs/Makefile manually, there is the "--with-ld-opt" configure option, see http://nginx.org/en/docs/configure.html and/or output of the "./configure --help" command. -- Maxim Dounin http://mdounin.ru/ From anthony at mindmedia.com.sg Mon Apr 27 18:45:22 2020 From: anthony at mindmedia.com.sg (P.V.Anthony) Date: Tue, 28 Apr 2020 02:45:22 +0800 Subject: AW: AW: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> References: <20200425174512.GA20939@daoine.org> <0bc63edc-7443-2c3f-d37c-204e993d017e@mindmedia.com.sg> <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> Message-ID: <959f3f5a-da1e-087f-6f3c-abd809cd0bd9@mindmedia.com.sg> On 27/4/20 8:49 pm, Carsten Laun-De Lellis wrote: > I've tried to configure my servers according to the link you sent, but it didn't work out. I tried on my server and got it to work. Not exactly the way you may want. This is a start for further research. It seems that there are webmin settings that need to be done and not nginx. Here is my config for nginx. server { listen *:80; server_name webmin.example.com ; root /var/www/webmin.example.com/web/; location / { proxy_pass http://127.0.0.1:10000/; } Added the line to the bottom of /etc/webmin/config referer=1 The above is not a good idea but a start for more research. Next disable ssl in webmin. For me I had to change the file /etc/webmin/miniserv.conf with ssl=0 The above works for me. Having said all that, I do not like the idea of webmin facing the internet. To scary for me. 
I would just use ssh -D 8080 user at server then with firefox set to proxy on 8080 and use webmin. I feel that is much more safer. It is like a vpn connection. P.V.Anthony From jmedina at mardom.com Mon Apr 27 19:32:07 2020 From: jmedina at mardom.com (Johan Gabriel Medina Capois) Date: Mon, 27 Apr 2020 19:32:07 +0000 Subject: nginx reverse proxy rewrite rule Message-ID: Good afternoon How to make rewrite rule for nginx as reverse proxy for IIS backend Example We have a site configured https://dell.com but we want need when someone request https://dell.com the reverse proxy return dell.com/support/logon, how can we do it? Thank for your support Regards Johan Medina Administrador de Sistemas e Infraestructura [Logo] Departamento: TECNOLOGIA Central Tel: 809-539-600 Ext: 8139 Flota: (809) 974-4954 Directo: 809 974-4954 Email: jmedina at mardom.com Grupos de correo: otros at mardom.com Web:www.mardom.com [Facebook icon] [Instagram icon] [Linkedin icon] [Youtube icon] [Banner] -------------- next part -------------- An HTML attachment was scrubbed... URL: From praveenssit at gmail.com Tue Apr 28 04:40:37 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Tue, 28 Apr 2020 10:10:37 +0530 Subject: How to hide kernel information Message-ID: Hello, I have hosted Nginx 1.16.1 on Ubuntu 16.04. Have configured SSL from LetsEncrypt. Everything is running fine. Only port 80 and 443 are allowed. During security testing, I see that kernel information is exposed on domain. More details at https://www.tenable.com/plugins/nessus/11936 Is there any way to hide kernel information using Nginx ? Cheers, PK -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthony at mindmedia.com.sg Tue Apr 28 05:35:32 2020 From: anthony at mindmedia.com.sg (P.V.Anthony) Date: Tue, 28 Apr 2020 13:35:32 +0800 Subject: AW: AW: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> References: <20200425174512.GA20939@daoine.org> <0bc63edc-7443-2c3f-d37c-204e993d017e@mindmedia.com.sg> <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> Message-ID: <08656ba6-ef71-d60f-4398-8abdf06944cf@mindmedia.com.sg> On 27/4/20 8:49 pm, Carsten Laun-De Lellis wrote: > I've tried to configure my servers according to the link you sent, but it didn't work out. Here is more information found in the internet. https://serverfault.com/questions/740818/webmin-and-reverse-proxy https://github.com/webmin/webmin/issues/420 Here is some documentation for using ssh like a vpn. https://www.howtogeek.com/168145/how-to-use-ssh-tunneling/ read this section, "Dynamic Port Forwarding: Use Your SSH Server as a Proxy" from the above link. P.V.Anthony From lists at lazygranch.com Tue Apr 28 05:40:43 2020 From: lists at lazygranch.com (lists) Date: Mon, 27 Apr 2020 22:40:43 -0700 Subject: How to hide kernel information In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From praveenssit at gmail.com Tue Apr 28 05:53:53 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Tue, 28 Apr 2020 11:23:53 +0530 Subject: How to hide kernel information In-Reply-To: References: Message-ID: SINFP method is used to get the kernel information. On Tue, Apr 28, 2020 at 11:10 AM lists wrote: > Well I know nmap can detect the OS. I don't recall it could detect the rev > of the kernel. 
> > https://nmap.org/book/man-os-detection.html > > https://nmap.org/book/defenses.html > > *From:* praveenssit at gmail.com > *Sent:* April 27, 2020 9:41 PM > *To:* nginx at nginx.org > *Reply-to:* nginx at nginx.org > *Subject:* How to hide kernel information > > Hello, > > I have hosted Nginx 1.16.1 on Ubuntu 16.04. Have configured SSL from > LetsEncrypt. Everything is running fine. Only port 80 and 443 are allowed. > > During security testing, I see that kernel information is exposed on > domain. More details at https://www.tenable.com/plugins/nessus/11936 > > Is there any way to hide kernel information using Nginx ? > > Cheers, > PK > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Regards,* *K S Praveen KumarM: +91-9986855625 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Tue Apr 28 06:19:03 2020 From: lists at lazygranch.com (lists) Date: Mon, 27 Apr 2020 23:19:03 -0700 Subject: How to hide kernel information In-Reply-To: Message-ID: <03fpe0b9101kqa5rtge00f9c.1588054743118@lazygranch.com> An HTML attachment was scrubbed... URL: From praveenssit at gmail.com Tue Apr 28 12:42:40 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Tue, 28 Apr 2020 18:12:40 +0530 Subject: Compile Nginx Message-ID: Hello, Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? Or do I need to compile every time ? Please advise. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Apr 28 12:46:08 2020 From: nginx-forum at forum.nginx.org (Aran) Date: Tue, 28 Apr 2020 08:46:08 -0400 Subject: SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) Message-ID: Hi, [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/domain.key") failed (SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) We bought ssl certificates from godaddy and tried to install their guidance... and i get this error. Is it a private key error. In that case private key error. Can we ask for a new key or is there a way i can get the key with in their zip folder of ssl certificates? Thanks in advance! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287867,287867#msg-287867 From praveenssit at gmail.com Tue Apr 28 13:15:51 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Tue, 28 Apr 2020 18:45:51 +0530 Subject: How to hide kernel information In-Reply-To: <03fpe0b9101kqa5rtge00f9c.1588054743118@lazygranch.com> References: <03fpe0b9101kqa5rtge00f9c.1588054743118@lazygranch.com> Message-ID: Okay. I exactly don't know how the Security Testing Team is able to get the kernel information. They use Qualys and Nessus for performing tests. All I can say is only port 443 allowed to the server and I thought asking you guys if it is from Nginx or is there any way to handle it. Server is behind firewall. On Tue, Apr 28, 2020 at 11:49 AM lists wrote: > Have you tried it? > https://securiteam.com/tools/5qp0920ikm/ > > I ran the nmap OS detection on my own server once and it triggered > SSHGuard, locking me out. So a tip is you may want to run SINFP from a > disposable IP address if you are running fail2ban, etc. 
> *From:* praveenssit at gmail.com > *Sent:* April 27, 2020 10:54 PM > *To:* nginx at nginx.org > *Reply-to:* nginx at nginx.org > *Subject:* Re: How to hide kernel information > > SINFP method is used to get the kernel information. > > On Tue, Apr 28, 2020 at 11:10 AM lists wrote: > >> Well I know nmap can detect the OS. I don't recall it could detect the >> rev of the kernel. >> >> https://nmap.org/book/man-os-detection.html >> >> https://nmap.org/book/defenses.html >> >> *From:* praveenssit at gmail.com >> *Sent:* April 27, 2020 9:41 PM >> *To:* nginx at nginx.org >> *Reply-to:* nginx at nginx.org >> *Subject:* How to hide kernel information >> >> Hello, >> >> I have hosted Nginx 1.16.1 on Ubuntu 16.04. Have configured SSL from >> LetsEncrypt. Everything is running fine. Only port 80 and 443 are allowed. >> >> During security testing, I see that kernel information is exposed on >> domain. More details at https://www.tenable.com/plugins/nessus/11936 >> >> Is there any way to hide kernel information using Nginx ? >> >> Cheers, >> PK >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > > *Regards,* > > > *K S Praveen KumarM: +91-9986855625 <+919986855625>* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Regards,* *K S Praveen KumarM: +91-9986855625 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Tue Apr 28 13:17:04 2020 From: mailinglist at unix-solution.de (basti) Date: Tue, 28 Apr 2020 15:17:04 +0200 Subject: Compile Nginx In-Reply-To: References: Message-ID: <514112a5-76cb-9d4f-eaaa-55ac9cb91846@unix-solution.de> It depends on how you compile. First of all have a look at the repository of you distribution or nginx itself it's easier to update for bugfix or security impacts. The 2'nd way can be to upgrade you server and get a newer nginx. If that all is not an option I would prefer a build a debian package. it can be easily transferred and installed. Be aware that a computer/ server can have different architectures. On 28.04.20 14:42, Praveen Kumar K S wrote: > Hello, > > Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? > Or do I need to compile every time ? Please advise. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From r at roze.lv Tue Apr 28 13:20:23 2020 From: r at roze.lv (Reinis Rozitis) Date: Tue, 28 Apr 2020 16:20:23 +0300 Subject: Compile Nginx In-Reply-To: References: Message-ID: <000501d61d5f$c9638710$5c2a9530$@roze.lv> > Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? Or do I need to compile every time ? Please advise. As far as the hosts have all the shared libraries like openssl/pcre etc (you can check with 'ldd /path/to/nginx') there is no need to compile every time and you can just copy the nginx binary. rr From josef.vybihal at gmail.com Tue Apr 28 13:30:41 2020 From: josef.vybihal at gmail.com (=?UTF-8?Q?Josef_Vyb=C3=ADhal?=) Date: Tue, 28 Apr 2020 15:30:41 +0200 Subject: How to hide kernel information In-Reply-To: References: <03fpe0b9101kqa5rtge00f9c.1588054743118@lazygranch.com> Message-ID: The test is GUESSing, it's written there in the link you posted. What are your HTTP headers - what do you expose there? 
Do you expose your nginx version to clients? Like in headers? Error pages? From those, it's possible determine used OS and then guess kernel information. Is your app leaking this info, is simle HTML page "leaking" it too? In normal conditions, nginx does not expose such information - why would it?. Post your config, or something to work with maybe. Once you say, 80 and 443, then only 443, also you say "I see that kernel information is exposed on domain" - where do you see that? Show us, and help us better understand... My guess, is: its guessing from some header or error page, where there is info like: Server: nginx/1.4.6 (Ubuntu) X-Powered-By: PHP/5.5.9-1ubuntu4.25 in headers, for example. P. On Tue, Apr 28, 2020 at 3:16 PM Praveen Kumar K S wrote: > Okay. I exactly don't know how the Security Testing Team is able to get > the kernel information. They use Qualys and Nessus for performing tests. > All I can say is only port 443 allowed to the server and I thought asking > you guys if it is from Nginx or is there any way to handle it. Server is > behind firewall. > > On Tue, Apr 28, 2020 at 11:49 AM lists wrote: > >> Have you tried it? >> https://securiteam.com/tools/5qp0920ikm/ >> >> I ran the nmap OS detection on my own server once and it triggered >> SSHGuard, locking me out. So a tip is you may want to run SINFP from a >> disposable IP address if you are running fail2ban, etc. >> *From:* praveenssit at gmail.com >> *Sent:* April 27, 2020 10:54 PM >> *To:* nginx at nginx.org >> *Reply-to:* nginx at nginx.org >> *Subject:* Re: How to hide kernel information >> >> SINFP method is used to get the kernel information. >> >> On Tue, Apr 28, 2020 at 11:10 AM lists wrote: >> >>> Well I know nmap can detect the OS. I don't recall it could detect the >>> rev of the kernel. >>> >>> https://nmap.org/book/man-os-detection.html >>> >>> https://nmap.org/book/defenses.html >>> >>> *From:* praveenssit at gmail.com >>> *Sent:* April 27, 2020 9:41 PM >>> *To:* nginx at nginx.org >>> *Reply-to:* nginx at nginx.org >>> *Subject:* How to hide kernel information >>> >>> Hello, >>> >>> I have hosted Nginx 1.16.1 on Ubuntu 16.04. Have configured SSL from >>> LetsEncrypt. Everything is running fine. Only port 80 and 443 are allowed. >>> >>> During security testing, I see that kernel information is exposed on >>> domain. More details at https://www.tenable.com/plugins/nessus/11936 >>> >>> Is there any way to hide kernel information using Nginx ? >>> >>> Cheers, >>> PK >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> -- >> >> >> *Regards,* >> >> >> *K S Praveen KumarM: +91-9986855625 <+919986855625>* >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > > > *Regards,* > > > *K S Praveen KumarM: +91-9986855625 * > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dan at pingsweep.co.uk Tue Apr 28 13:40:31 2020 From: dan at pingsweep.co.uk (Daniel Hadfield) Date: Tue, 28 Apr 2020 14:40:31 +0100 Subject: SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: ANY PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib) In-Reply-To: References: Message-ID: <327d3bcc-0bcb-9246-c4eb-7ecaa6660273@pingsweep.co.uk> The key is the key you used when you generated the CSR. The key remains on your machine at all times not sent to godaddy. On 28/04/2020 13:46, Aran wrote: > Hi, > > [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/ssl/domain.key") failed > (SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: ANY > PRIVATE KEY error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM > lib) > > We bought ssl certificates from godaddy and tried to install their > guidance... and i get this error. Is it a private key error. > > In that case private key error. Can we ask for a new key or is there a way i > can get the key with in their zip folder of ssl certificates? > > Thanks in advance! > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287867,287867#msg-287867 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From praveenssit at gmail.com Tue Apr 28 13:42:58 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Tue, 28 Apr 2020 19:12:58 +0530 Subject: Compile Nginx In-Reply-To: <000501d61d5f$c9638710$5c2a9530$@roze.lv> References: <000501d61d5f$c9638710$5c2a9530$@roze.lv> Message-ID: I usually install from the official nginx apt repo. But since I want to use modules like more_set_headers which requires building nginx from source, I'm looking for best practices. On Tue, Apr 28, 2020 at 6:50 PM Reinis Rozitis wrote: > > Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? > Or do I need to compile every time ? Please advise. > > As far as the hosts have all the shared libraries like openssl/pcre etc > (you can check with 'ldd /path/to/nginx') there is no need to compile every > time and you can just copy the nginx binary. > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Regards,* *K S Praveen KumarM: +91-9986855625 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Tue Apr 28 13:51:57 2020 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 28 Apr 2020 19:21:57 +0530 Subject: Compile Nginx In-Reply-To: References: <000501d61d5f$c9638710$5c2a9530$@roze.lv> Message-ID: The Nginx binary compiled on one system can be run on a similar architecture system as it is portable code. The ones you download from the repo are compiled on a machine to binary by the repo maintainer you can ship the binary in a tool like rpm or deb On Tue, Apr 28, 2020 at 7:13 PM Praveen Kumar K S wrote: > I usually install from the official nginx apt repo. But since I want to > use modules like more_set_headers which requires building nginx from > source, I'm looking for best practices. > > On Tue, Apr 28, 2020 at 6:50 PM Reinis Rozitis wrote: > >> > Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? >> Or do I need to compile every time ? Please advise. 
>> >> As far as the hosts have all the shared libraries like openssl/pcre etc >> (you can check with 'ldd /path/to/nginx') there is no need to compile every >> time and you can just copy the nginx binary. >> >> rr >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > -- > > > *Regards,* > > > *K S Praveen KumarM: +91-9986855625 * > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Tue Apr 28 15:17:13 2020 From: themadbeaker at gmail.com (J.R.) Date: Tue, 28 Apr 2020 10:17:13 -0500 Subject: How to hide kernel information Message-ID: > Okay. I exactly don't know how the Security Testing Team is able to get the > kernel information. They use Qualys and Nessus for performing tests. All I > can say is only port 443 allowed to the server and I thought asking you > guys if it is from Nginx or is there any way to handle it. Server is behind > firewall. As someone else commented, check your HTTP headers to make sure they aren't publishing something extremely obvious for the casual scanner. As for determining kernel version, the web server has zero control over that. The scanner program you are referring to fingerprints based on kernel TCP settings / support... i.e. TCP Flags, Window, Options, MSS, etc... Totally unrelated to nginx, and the same information could be gathered on any open service / port. From praveenssit at gmail.com Tue Apr 28 15:33:55 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Tue, 28 Apr 2020 21:03:55 +0530 Subject: How to hide kernel information In-Reply-To: References: Message-ID: Thank you for your support. I will take all your inputs into consideration to fix this issue. On Tue, Apr 28, 2020 at 8:47 PM J.R. wrote: > > Okay. I exactly don't know how the Security Testing Team is able to get > the > > kernel information. They use Qualys and Nessus for performing tests. All > I > > can say is only port 443 allowed to the server and I thought asking you > > guys if it is from Nginx or is there any way to handle it. Server is > behind > > firewall. > > As someone else commented, check your HTTP headers to make sure they > aren't publishing something extremely obvious for the casual scanner. > > As for determining kernel version, the web server has zero control > over that. The scanner program you are referring to fingerprints based > on kernel TCP settings / support... i.e. TCP Flags, Window, Options, > MSS, etc... Totally unrelated to nginx, and the same information > could be gathered on any open service / port. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Regards,* *K S Praveen KumarM: +91-9986855625 * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From paul at stormy.ca Tue Apr 28 18:56:09 2020 From: paul at stormy.ca (Paul) Date: Tue, 28 Apr 2020 14:56:09 -0400 Subject: SSL and port number [was: Rewrite -- failure] In-Reply-To: <20200422071441.GX20939@daoine.org> References: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> <20200414223939.GQ20939@daoine.org> <8ed9a632-b660-2747-53d3-d674bec13d1b@stormy.ca> <20200422071441.GX20939@daoine.org> Message-ID: <18ed9d18-d459-a7ec-89bd-a8e7a826a220@stormy.ca> On 2020-04-22 3:14 a.m., Francis Daly wrote: > On Tue, Apr 21, 2020 at 07:09:41PM -0400, Paul wrote: > > Hi there, > > I confess I'm not quite certain what you are reporting here -- if you > can say "with *this* config, I make *this* request and I get *this* > response, but I want *that* response instead", it may be clearer. > > However, there is one thing that might be a misunderstanding here: > > "listen 8000;" means that nginx will listen for http, so you must make > requests to port 8000 using http not https. > > "listen 8001 ssl;" means that nginx will listen for https, so you must > make requests to port 8001 using https not http. > > You can have both "listen" directives in the same server{}, but you > still must use the correct protocol on each port, or there will be errors. Hi Francis, Thanks. I have the two sites "mostly" working now (full config below), but could you please expand on your comment ""listen 8001 ssl;" means that nginx will listen for https, so you must make requests to port 8001 using https not http." My problem is that app/server A (static html) is working perfectly, but app/server B works only if the user's browser requests specifically "https://... ", but returns a "400 Bad Request // The plain HTTP request was sent to HTTPS port // nginx" if the browser requests http (which I believe is the default for most browsers if you paste or type just the URL into them.) In other words, the last few lines of the config. work for port 80 (sends seamlessly the 301, then the content), but not for port 8084 (sends only the 400.) 
Many thanks -- Paul # Combined file, two servers for myapps.example.com # myappa "A" for static site /var/wwww/myappa on 192.168.aaa.bbb # myappb "B" for cgi site /usr/share/myappb on 192.168.xxx.yyy # Server A server { listen 443 ssl; ssl_certificate /etc/letsencrypt/live/myapps.example.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/myapps.example.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot server_name myapps.example.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/myapp-error_log; proxy_buffering off; location / { proxy_pass http://myappa; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } # Server B server { listen 8084 ssl; ssl_certificate /etc/letsencrypt/live/myapps.example.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/myapps.example.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot server_name myapps.example.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/myapp-error_log; proxy_buffering off; location / { proxy_pass http://myappb:8084; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } server { if ($host = myapps.example.com) { return 301 https://$host$request_uri; } # managed by Certbot # automatically sets to https if someone comes in on http listen 80; listen 8084; server_name myapps.example.com; rewrite ^ https://$host$request_uri? permanent; } \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| From lists at lazygranch.com Tue Apr 28 20:44:27 2020 From: lists at lazygranch.com (lists) Date: Tue, 28 Apr 2020 13:44:27 -0700 Subject: How to hide kernel information In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Wed Apr 29 11:56:11 2020 From: themadbeaker at gmail.com (J.R.) Date: Wed, 29 Apr 2020 06:56:11 -0500 Subject: SSL and port number [was: Rewrite -- failure] Message-ID: To redirect a browser from http to https, you don't need to do an 'if' or 'rewrite'... The following would be the most efficient (and simplest)... server { listen 80; server_name myapps.example.com; access_log off; return 301 https://$host$request_uri; } From francis at daoine.org Wed Apr 29 16:47:32 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 29 Apr 2020 17:47:32 +0100 Subject: SSL and port number [was: Rewrite -- failure] In-Reply-To: <18ed9d18-d459-a7ec-89bd-a8e7a826a220@stormy.ca> References: <131f8eb9-986d-73ba-e606-200154fc1624@stormy.ca> <20200414223939.GQ20939@daoine.org> <8ed9a632-b660-2747-53d3-d674bec13d1b@stormy.ca> <20200422071441.GX20939@daoine.org> <18ed9d18-d459-a7ec-89bd-a8e7a826a220@stormy.ca> Message-ID: <20200429164732.GB20939@daoine.org> On Tue, Apr 28, 2020 at 02:56:09PM -0400, Paul wrote: > On 2020-04-22 3:14 a.m., Francis Daly wrote: Hi there, > Thanks. 
I have the two sites "mostly" working now (full config below), but > could you please expand on your comment ""listen 8001 ssl;" means that nginx > will listen for https, so you must make requests to port 8001 using https > not http." nginx listens on an ip:port, and it expects exactly one protocol to be spoken on that port. I believe I see what may be the problem here... > My problem is that app/server A (static html) is working perfectly, but > app/server B works only if the user's browser requests specifically > "https://... ", but returns a "400 Bad Request // The plain HTTP request was > sent to HTTPS port // nginx" if the browser requests http (which I believe > is the default for most browsers if you paste or type just the URL into > them.) ...your server B has two server blocks. One says "listen 8084 ssl"; the other says "listen 8084". You want one to be https and the other to be http. Current nginx does not support doing that. If you need it to be done, you must use something other than current nginx. Your access url is "https://myapps.example.com:8084/" If someone tries to use "ftp://myapps.example.com:8084/", they will get an error indication. If they try "http://myapps.example.com:8084/", they will get an error indication. If they try "gopher://myapps.example.com:8084/", they will get an error indication. The error indication that current-nginx gives is "this is not a valid https protocol request"; it does not try to guess what sort of protocol request it actually is. If you just remove the "listen 8084" from the second server, and invite people to use the correct url (either "http://myapps.example.com/", or "https://myapps.example.com:8084/"), then it should all Just Work. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Apr 29 17:15:18 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 29 Apr 2020 18:15:18 +0100 Subject: AW: Using NGINX as reverse proxy to webmin on a remote server In-Reply-To: <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> References: <20200425174512.GA20939@daoine.org> <0bc63edc-7443-2c3f-d37c-204e993d017e@mindmedia.com.sg> <467c432047f54f5395aa7f38c21e51bc@DELELLIS.NET> Message-ID: <20200429171518.GC20939@daoine.org> On Mon, Apr 27, 2020 at 12:49:16PM +0000, Carsten Laun-De Lellis wrote: Hi there, Thanks for the pictures in your previous reply; they do give a bit of a hint as to what is going on. The "after-login" picture shows that /session_login.cgi does not exist on nginx -- that is to be expected, because you want that link to go to /vml000032/session_login.cgi instead. (Otherwise, you would not be able to have two separate webmin instances in different places.) > I've tried to configure my servers according to the link you sent, but it didn't work out. It appears to be the case that webmin is not especially straightforward to reverse-proxy at a non-root url. >From the various web pages listed, it looks like there may be different versions of webmin that do different things. So if you are happy to keep testing and trying, there are perhaps a few more things that you can try. 
> The config on the Nginx server looks like: > server { > server_name vml000036.delellis.net; > listen 192.168.178.36:80; > > location /vml000032 { > proxy_pass http://192.168.1.32:10000; > proxy_set_header Host $host; The linked web page seems to suggest that you want location /vml000032/ { # with the trailing / proxy_pass http://192.168.1.32:10000/; # with the trailing / proxy_set_header Host $host; proxy_redirect http://$host:10000/ /vml000032/; } I suspect that you either want both of the last two lines, or neither of them. You may be better of with neither; only testing will show. > The webmin config on the upstream server looks like: > webprefix=/vml000032 > webprefixnoredir=1 > referer=vml000036.delellis.net That looks like it has a chance of working, so long as webmin is not running with ssl. Maybe webmin config also can use relative_redir=1 And it may be useful to edit miniserv.conf so that it includes cookiepath=/vml000032 > When I open the page in my webbrowser I get the logon screen to the webmin sever on my Nginx hostsystem. Not on vml000032. > I suspect that that is because you used the line proxy_set_header Host $host; When the rest is working, you can perhaps try to log in using credentials that are different on the two servers, and see which lets you in. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Apr 29 22:41:47 2020 From: nginx-forum at forum.nginx.org (thok) Date: Wed, 29 Apr 2020 18:41:47 -0400 Subject: secure_link module & expiry/caching headers Message-ID: <0d0f236c4a8431054e0e61364746489b.NginxMailingListEnglish@forum.nginx.org> Hi, I am currently experimenting with the secure_links module and following the guide at https://nginx.org/en/docs/http/ngx_http_secure_link_module.html#secure_link_md5 Since the expiry time is passed via the $arg_expiry, I was wondering if there is a way to transform this to a valid expires header (https://nginx.org/en/docs/http/ngx_http_headers_module.html#expires) or Cache-Control header? Thanks, best regards, Thomas Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287897,287897#msg-287897 From praveenssit at gmail.com Thu Apr 30 09:45:14 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Thu, 30 Apr 2020 15:15:14 +0530 Subject: Compatibility Matrix Message-ID: Hello, Can anyone help where can I get the compatibility matrix on the versions of pcre, zlib, openssl is tested/supported by specific version of Nginx ? TIA -- *Regards,* *K S Praveen KumarM: +91-9986855625 * -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Thu Apr 30 13:38:50 2020 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Thu, 30 Apr 2020 16:38:50 +0300 Subject: Compatibility Matrix In-Reply-To: References: Message-ID: <20200430133850.GA31955@FreeBSD.org.ru> Hi Praveen, hope you're doing well these days. I don't think I've ever heard about a document like that, however it's definitely possible to create one. Usually every new nginx release supports latest versions of all the components it depends. Since pcre and zlib releases are in most cases have backward compatibility with their previous releases, it's possible to build nginx with ealier or recent versions of those two. OpenSSL is a bit different in this case cause every major OpenSSL release brings new features and nginx utilizes OpenSSL functionalities for its needs. 
For two recent releases from mainline and stable branches I'd recommend to use the following versions (1.18.0 and 1.17.10 accordingly) of the components you've mentioned: - pcre 8.44 - zlib 1.2.11 - openssl 1.1.1g Please let me know if you have any questions. Thank you. -- Sergey Osokin On Thu, Apr 30, 2020 at 03:15:14PM +0530, Praveen Kumar K S wrote: > Hello, > > Can anyone help where can I get the compatibility matrix on the versions of > pcre, zlib, openssl is tested/supported by specific version of Nginx ? > > TIA > > -- > *Regards,* > *K S Praveen KumarM: +91-9986855625 * From praveenssit at gmail.com Thu Apr 30 13:52:36 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Thu, 30 Apr 2020 19:22:36 +0530 Subject: Compatibility Matrix In-Reply-To: <20200430133850.GA31955@FreeBSD.org.ru> References: <20200430133850.GA31955@FreeBSD.org.ru> Message-ID: Hi Sergey, Thanks for the detailed response. That helps a lot. On Thu, Apr 30, 2020 at 7:09 PM Sergey A. Osokin wrote: > Hi Praveen, > > hope you're doing well these days. > > I don't think I've ever heard about a document like that, however > it's definitely possible to create one. > > Usually every new nginx release supports latest versions of all the > components > it depends. Since pcre and zlib releases are in most cases have backward > compatibility with their previous releases, it's possible to build nginx > with > ealier or recent versions of those two. > > OpenSSL is a bit different in this case cause every major OpenSSL release > brings new features and nginx utilizes OpenSSL functionalities for its > needs. > > For two recent releases from mainline and stable branches I'd recommend > to use the following versions (1.18.0 and 1.17.10 accordingly) of the > components you've mentioned: > > - pcre 8.44 > - zlib 1.2.11 > - openssl 1.1.1g > > Please let me know if you have any questions. > > Thank you. > > -- > Sergey Osokin > > > On Thu, Apr 30, 2020 at 03:15:14PM +0530, Praveen Kumar K S wrote: > > Hello, > > > > Can anyone help where can I get the compatibility matrix on the versions > of > > pcre, zlib, openssl is tested/supported by specific version of Nginx ? > > > > TIA > > > > -- > > *Regards,* > > *K S Praveen KumarM: +91-9986855625 * > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Regards,* *K S Praveen KumarM: +91-9986855625 * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu Apr 30 18:09:05 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Thu, 30 Apr 2020 14:09:05 -0400 Subject: POST result: 404 Message-ID: This is the nginx configuration in Ubuntu 18.04 : server { listen 443 ssl http2 default_server; server_name ggc.world; ssl_certificate /etc/letsencrypt/live/ggc.world/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/ggc.world/privkey.pem; # managed by Certbot ssl_trusted_certificate /etc/letsencrypt/live/ggc.world/chain.pem; ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot ssl_session_timeout 5m; #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20- draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; #ssl_stapling on; #ssl_stapling_verify on; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; #proxy_pass http://127.0.0.1:2000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } server { listen 80 default_server; listen [::]:80 default_server; error_page 497 https://$host:$server_port$request_uri; server_name ggc.world; return 301 https://$server_name$request_uri; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; #proxy_pass http://127.0.0.1:2000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } upstream websocket { ip_hash; server localhost:3000; } server { listen 81; server_name ggc.world; #location / { location ~ ^/(websocket|websocket\/socket-io) { proxy_pass http://127.0.0.1:4201; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Forwared-For $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; } } upstream golang-webserver { ip_hash; server 127.0.0.1:2000; } server { listen 3000; server_name ggc.world; location / { proxy_pass http://golang-webserver; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } This is my vue.config.js file for the vue.js frontend: // vue.config.js module.exports = { // options... publicPath: '', devServer: { host: '0.0.0.0', port: 8080, //port: 2000, public: 'ggc.world' }, } And this is port configuration for go-webserver : server-gorillamux.go : const ( CONN_HOST = "192.168.1.7" CONN_PORT = "2000" ) Compiling the frontend: DONE Compiled successfully in 1224ms 7:55:19 PM App running at: - Local: http://localhost:8080 - Network: http://ggc.world/ Note that the development build is not optimized. To create a production build, run npm run build. 
And running the go-webserver:

goServer$ go run server-gorillamux.go

I get this error: POST https://ggc.world/puser/add 404

These are the last lines of the nano /var/log/nginx/ggcworld-access.log file :

36.119.16 - - [30/Apr/2020:19:56:57 +0200] "GET / HTTP/2.0" 200 694 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"
2.36.119.16 - - [30/Apr/2020:19:56:57 +0200] "GET /js/app.js HTTP/2.0" 200 147353 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"
2.36.119.16 - - [30/Apr/2020:19:56:58 +0200] "GET /js/chunk-vendors.js HTTP/2.0" 200 4241853 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safa$
2.36.119.16 - - [30/Apr/2020:19:56:58 +0200] "GET /sockjs-node/info?t=1588269418560 HTTP/2.0" 200 79 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.$
2.36.119.16 - - [30/Apr/2020:19:57:21 +0200] "POST /puser/add HTTP/2.0" 404 137 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"

How can I solve the problem?
Looking forward to your kind help.
Marco

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287914,287914#msg-287914

From nginx-forum at forum.nginx.org  Thu Apr 30 18:19:38 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Thu, 30 Apr 2020 14:19:38 -0400
Subject: net::ERR_CONNECTION_REFUSED . How to correctly configure Nginx with Socket.io?
In-Reply-To: <20200212081302.GD26683@daoine.org>
References: <20200212081302.GD26683@daoine.org>
Message-ID: <88bfb6dcc0f8ea177544c3a4535b3a69.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,
I remember you have deep expertise with nginx configuration.
I posted in the mailing list a question about how to configure nginx in order to also use a golang webserver.
Would you be so kind as to have a look at it?
https://forum.nginx.org/read.php?2,287914

Thank you very much.
Looking forward to your kind help.
Marco

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,286850,287915#msg-287915

From teward at thomas-ward.net  Thu Apr 30 18:44:15 2020
From: teward at thomas-ward.net (Thomas Ward)
Date: Thu, 30 Apr 2020 14:44:15 -0400
Subject: POST result: 404
In-Reply-To: 
References: 
Message-ID: <60cf87c7-6307-ea65-10fe-36459405d4b0@thomas-ward.net>

On 4/30/20 2:09 PM, MarcoI wrote:
> This is the nginx configuration in Ubuntu 18.04 :
>
> server {
>     listen 443 ssl http2 default_server;
>     server_name ggc.world;
>
> ...
>
>     location / {
>         proxy_pass http://127.0.0.1:8080;

If I'm reading your config directly, this is passing port 443 to the backend here at port 8080 on the system locally. Therefore, the 404 request could be coming from this backend. Have you verified that this path actually works in your app when accessed directly on the system? If it does not, then the backend app is at fault here.

> ...
>
> And running the go-webserver:
>
> goServer$ go run server-gorillamux.go
>
> I get this error: POST https://ggc.world/puser/add 404

... which is indicative of the issue because of the above-mentioned proxy_pass block being on the app you've built/compiled. If that backend doesn't have the capacity to handle the requested path it could return the 404 which would trickle back and show a 404 via the nginx server.
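As a point of reference, a gorilla/mux router only answers paths that have been explicitly registered on it; anything else falls through to its not-found handler. The snippet below is a minimal, hypothetical sketch (your actual server-gorillamux.go was not posted), showing roughly what a registered handler for POST /puser/add could look like, using the CONN_HOST / CONN_PORT values posted earlier in the thread:

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/mux"
    )

    func main() {
        r := mux.NewRouter()

        // Hypothetical route: without a registration like this,
        // mux falls back to its not-found handler and the client sees a 404.
        r.HandleFunc("/puser/add", func(w http.ResponseWriter, req *http.Request) {
            w.WriteHeader(http.StatusOK)
            w.Write([]byte("puser add ok\n"))
        }).Methods("POST")

        // Listen address taken from the CONN_HOST / CONN_PORT constants posted earlier.
        log.Fatal(http.ListenAndServe("192.168.1.7:2000", r))
    }

If the real router has no such route, a request for /puser/add would 404 even when it reaches the Go server directly.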
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287914,287914#msg-287914
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Thu Apr 30 19:20:51 2020
From: nginx-forum at forum.nginx.org (MarcoI)
Date: Thu, 30 Apr 2020 15:20:51 -0400
Subject: POST result: 404
In-Reply-To: <60cf87c7-6307-ea65-10fe-36459405d4b0@thomas-ward.net>
References: <60cf87c7-6307-ea65-10fe-36459405d4b0@thomas-ward.net>
Message-ID: <4d0d464c4ca0340918f2db72d473352b.NginxMailingListEnglish@forum.nginx.org>

Hi Thomas,
thank you for your kind help.

I'm not sure, due to my lack of knowledge, how I can check if the path from port 443 to port 8080 works in my app when accessed directly on my system.

(base) marco at pc01:~$ sudo netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      19569/nginx: master
tcp        0      0 0.0.0.0:81              0.0.0.0:*               LISTEN      19569/nginx: master
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      754/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1230/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1035/cupsd
tcp        0      0 0.0.0.0:3000            0.0.0.0:*               LISTEN      19569/nginx: master
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      1321/postgres
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      19569/nginx: master
tcp        0      0 127.0.0.1:33917         0.0.0.0:*               LISTEN      1227/containerd
tcp6       0      0 :::80                   :::*                    LISTEN      19569/nginx: master
tcp6       0      0 :::22                   :::*                    LISTEN      1230/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1035/cupsd

ports 80 and 443 seem both listening and owned by nginx.

How can I check if the backend has the capacity to handle the requested path?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287914,287917#msg-287917

From teward at thomas-ward.net  Thu Apr 30 19:47:32 2020
From: teward at thomas-ward.net (Thomas Ward)
Date: Thu, 30 Apr 2020 15:47:32 -0400
Subject: POST result: 404
In-Reply-To: <4d0d464c4ca0340918f2db72d473352b.NginxMailingListEnglish@forum.nginx.org>
References: <60cf87c7-6307-ea65-10fe-36459405d4b0@thomas-ward.net>
 <4d0d464c4ca0340918f2db72d473352b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <91801cf3-e172-e2a1-0e39-5adcd4fddc31@thomas-ward.net>

On 4/30/20 3:20 PM, MarcoI wrote:
> Hi Thomas,
> thank you for your kind help.
>
> ...
>
> How can I check if the backend has the capacity to handle the requested
> path?

This is where you need to expand the knowledge into other tools such as `curl`. On the system where nginx and your webapp run execute this:

    curl -X POST http://127.0.0.1:8080/puser/add

... if this 404s then you know that the issue is that your backend application written in Go doesn't accept this as a POST-able path.

> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287914,287917#msg-287917
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
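If that curl test returns 404 from the dev server on port 8080 but the same path is answered by the Go server, one possible direction is to carve the API path out of the catch-all location and send it to the golang-webserver upstream already defined in the posted configuration. This is only a rough sketch, under the assumption that /puser/... requests are meant for the Go backend rather than the Vue dev server; it is not a tested drop-in:

    # Inside the ggc.world server block listening on 443.
    # Assumption: /puser/... is an API path served by the Go backend.
    location /puser/ {
        proxy_pass http://golang-webserver;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

Note also that the posted upstream points at 127.0.0.1:2000 while server-gorillamux.go binds CONN_HOST = "192.168.1.7", so the upstream address and the Go listen address would need to agree before any such location can reach the backend.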