From nginx-forum at forum.nginx.org Fri Jan 1 07:54:21 2021 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Fri, 01 Jan 2021 02:54:21 -0500 Subject: Block extension for specified IP address Message-ID: <5b42f6b9b9f2f5a1df594d2b021613fe.NginxMailingListEnglish@forum.nginx.org> Hi Guys, I'm stuck on this and am open to your suggestions. Thank you. I have the following reverse proxy configuration, and I want to block the exe extension for requests passing through it. But it doesn't work. How can I do that? Example Config: location / { types { application/octet-stream bin exe dll; allow 10.1.1.2/32; deny all; } proxy_pass http://10.1.1.9:4545; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; } Thank you in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290354,290354#msg-290354 From nginx-forum at forum.nginx.org Fri Jan 1 13:46:09 2021 From: nginx-forum at forum.nginx.org (Decor) Date: Fri, 01 Jan 2021 08:46:09 -0500 Subject: Using nginx in front of my app Message-ID: Hi, I would like to not bother with the networking side of my project. Hence I will use nginx to receive https/websocket requests and serve static html (vuejs) files. But sometimes clients will call my API for more complex actions (database...) and I would like nginx to be a proxy to simplify my application: client sends an encrypted websocket request > nginx receives and decrypts it > nginx transfers the http content using UDP (for simplicity) to my local C socket application > my app does what it does best and sends the response back to nginx > nginx sends the valid encrypted packet back to the client. So I need nginx as an https proxy that switches to UDP and remembers how to answer the client after getting the response from my app. How may I do that? 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290355,290355#msg-290355 From sca at andreasschulze.de Fri Jan 1 19:39:01 2021 From: sca at andreasschulze.de (A. Schulze) Date: Fri, 1 Jan 2021 20:39:01 +0100 Subject: difference between auth_basic and auth_ldap Message-ID: Hello & happy new year! My goal is to configure nginx to deny access from most client IPs but allow access from special IPs for authenticated users. This works for basic authentication as expected but behaves differently with auth_ldap. I use https://github.com/kvspb/nginx-auth-ldap. A simplified configuration with no allowed IPs at all: server { listen *:80; deny all; location /auth_basic { auth_basic "auth_basic"; auth_basic_user_file /path/to/auth_basic_user_file; } } $ curl -v http://nginx/auth_basic $ curl -v -u user:pass http://nginx/auth_basic $ curl -v -u user:wrong http://nginx/auth_basic all three calls return "403 Forbidden", which is ok and acceptable to me. Switching to auth_ldap, the results are different: ldap_server ldap-server { url ldap://ldap-server/dc=example?cn?sub?(objectclass=top); require valid_user; } server { listen *:80; deny all; location /auth_ldap { auth_ldap "auth_ldap"; auth_ldap_servers "ldap-server"; } } $ curl -v http://nginx/auth_ldap $ curl -v -u user:wrong http://nginx/auth_ldap return "401 Unauthorized" expected: "403 Forbidden" $ curl -v -u user:pass http://nginx/auth_ldap return "403 Forbidden" Is there anything wrong with my configuration, or is the unexpected request for authentication a result of how https://github.com/kvspb/nginx-auth-ldap is written? Andreas From geovana.possenti at gmail.com Fri Jan 1 20:51:52 2021 From: geovana.possenti at gmail.com (Geovana Possenti) Date: Fri, 1 Jan 2021 17:51:52 -0300 Subject: Multiples JWT in the same request Message-ID: Hello, I have a request that sends two different tokens (JWT) generated with the same key (JWK). Could nginx validate both tokens? 
Each of them is passed in a different header. I tried to duplicate the auth_jwt configuration, but it is not possible to duplicate this parameterization in the same location. It works: location /myapp { proxy_pass http://xxxx; auth_jwt "Client Token" token=$http_authclient; auth_jwt_key_file conf.d/key.jwt; } It doesn't work: location /myapp { proxy_pass http://xxxx; auth_jwt "Client Token" token=$http_authclient; auth_jwt "User Token" token=$http_authuser; auth_jwt_key_file conf.d/key.jwt; } Example request passing two tokens: curl -H "Authclient: XXXXXXX" -H "Authuser: YYYYYYYY" http://localhost:8080/myapp Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From wi2p at hotmail.com Sat Jan 2 01:13:59 2021 From: wi2p at hotmail.com (kev jr) Date: Sat, 2 Jan 2021 01:13:59 +0000 Subject: How to configure Auth digest behind a proxy Message-ID: Hello and happy new year I am trying to implement digest authentication on Nginx. The architecture is the following: Server A is the client Server B is the proxy (an API solution which only transmits the request as a proxy) Server C is my Nginx server where I configure the Digest authentication I get the following error when my client tries to connect to my Nginx through the proxy: uri mismatch - does not match request-uri because the client (server A) sends the following parameters for the authentication: Digest username="client", realm="Test", nonce="xxxxxx", uri="proxyuri", cnonce="xxxx=", nc=xxxx, qop=auth, response="xxx", algorithm=MD5\r\n The client (server A) sends the proxyuri, while the Nginx server (server C) expects the nginxuri. Do you know which parameter I need to add to my Nginx configuration to make the connection work? Or do you know if it's possible to implement Digest authentication on Nginx behind a proxy? For your information, a direct connection from the client to the Nginx server with Digest authentication works fine. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Jan 3 22:17:01 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Jan 2021 01:17:01 +0300 Subject: difference between auth_basic and auth_ldap In-Reply-To: References: Message-ID: <20210103221701.GO1147@mdounin.ru> Hello! On Fri, Jan 01, 2021 at 08:39:01PM +0100, A. Schulze wrote: > Hello & happy new year! > > my goal is to configure nginx to deny access from most client-ip but allow access from special ip's > for authenticated users. This work for basic_authentication as expect but behave different with auth_ldap > I use https://github.com/kvspb/nginx-auth-ldap. > > simplified configuration with no allowed IPs at all: > > server { > listen *:80; > deny all; > location /auth_basic { > auth_basic "auth_basic"; > auth_basic_user_file /path/to/auth_basic_user_file; > } > } > > $ curl -v http://nginx/auth_basic > $ curl -v -u user:pass http://nginx/auth_basic > $ curl -v -u user:wrong http://nginx/auth_basic > > all three calls return "403 Forbidden", which is ok and acceptable to me. > > switching to auth_ldap the results are different: > > ldap_server ldap-server { > url ldap://ldap-server/dc=example?cn?sub?(objectclass=top); > require valid_user; > } > server { > listen *:80; > deny all; > location /auth_ldap { > auth_ldap "auth_ldap"; > auth_ldap_servers "ldap-server"; > } > } > > $ curl -v http://nginx/auth_ldap > $ curl -v -u user:wrong http://nginx/auth_ldap > return "401 Unauthorized" expected: "403 Forbidden" > > $ curl -v -u user:pass http://nginx/auth_ldap > return "403 Forbidden" > > Is there anything wrong with my configuration or is the unexpected request for authentication > a result of how https://github.com/kvspb/nginx-auth-ldap is written? This is a result of how nginx-auth-ldap is written. 
Or, more strictly, how it adds itself into the nginx request processing pipeline - it simply adds itself as an HTTP module, and ends up called before the access module. It is relatively easy to fix assuming dynamic module linking (that is, if you are using the "load_module" directive to load the module): just using ngx_module_order="ngx_http_auth_ldap_module ngx_http_access_module" should do the trick. For static linking it wouldn't be that easy though, as static linking does not support module order selection via ngx_module_order, and the appropriate configure variables with lists of modules need to be adjusted directly instead. A quick-and-dirty workaround would be to use auth_request as a "proxy" for auth_ldap. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Jan 3 22:32:04 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 Jan 2021 01:32:04 +0300 Subject: How to configure Auth digest behind a proxy In-Reply-To: References: Message-ID: <20210103223204.GP1147@mdounin.ru> Hello! On Sat, Jan 02, 2021 at 01:13:59AM +0000, kev jr wrote: > Hello and happy new year > > I try to implement digest authentication on Nginx. > > The architecture is the following : > Server A is the client > Server B is the proxy (a API solution which only transmits the request as a proxy) > Server C is my Nginx server where I configure the Digest authentification > > I have the following error, when my client try to connect to my Nginx through the proxy : > uri mismatch - does not match request-uri > > because the client (server A) send the following parameter for the authentication > Digest username="client", realm="Test", nonce="xxxxxx", uri="proxyuri", cnonce="xxxx=", nc=xxxx, qop=auth, response="xxx", algorithm=MD5\r\n > > The client (server A) send the proxyuri, and the Nginx server (server C) waiting for the nginxuri. > > Do you know which parameter, I need to add in my Nginx configuration to perform the connection ? 
> Or Do you know, if it's possible to implement Digest authentication on Nginx behind a proxy ? > > For your information, the direct connection Client to Nginx server with Digest authentication works fine. nginx itself does not support Digest authentication, only Basic (http://nginx.org/r/auth_basic). If you are using some 3rd-party module to implement Digest authentication, you may want to refer to that module's docs (or sources) to find out how to properly use it behind a proxy. Alternatively, consider switching to Basic authentication instead, which is natively supported and, for obvious reasons, does not have problems with proxying. -- Maxim Dounin http://mdounin.ru/ From maxim at nginx.com Mon Jan 4 08:47:04 2021 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 4 Jan 2021 11:47:04 +0300 Subject: Multiples JWT in the same request In-Reply-To: References: Message-ID: Hi Geovana, On 01.01.2021 23:51, Geovana Possenti wrote: > Hello, > > I have a request that sends two different tokens (JWT) generated with > the same key (JWK). > [...] You probably want to reach out to nginx-plus support, as the http_auth_jwt module is nginx-plus property. Thanks, Maxim -- Maxim Konovalov From sca at andreasschulze.de Mon Jan 4 19:29:04 2021 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 4 Jan 2021 20:29:04 +0100 Subject: difference between auth_basic and auth_ldap In-Reply-To: <20210103221701.GO1147@mdounin.ru> References: <20210103221701.GO1147@mdounin.ru> Message-ID: On 03.01.21 at 23:17, Maxim Dounin wrote: > This is a result of how nginx-auth-ldap is written. Or, more > strictly, how it adds itself into nginx request processing > pipeline - it simply adds itself as an HTTP module, and ends up > called before the access module. 
> > It is relatively easily to fix assuming dynamic module linking > (that is, if you are using the "load_module" directive to load the > module), just using > > ngx_module_order="ngx_http_auth_ldap_module ngx_http_access_module" > > should do the trick. Hello Maxim, as I only use dynamic module linking it was easy to adopt your suggestion. It works. I created a pull request: https://github.com/kvspb/nginx-auth-ldap/pull/242 Andreas From hongyi.zhao at gmail.com Tue Jan 5 12:08:23 2021 From: hongyi.zhao at gmail.com (Hongyi Zhao) Date: Tue, 5 Jan 2021 20:08:23 +0800 Subject: About the native rtmps protocol support in nginx. Message-ID: Currently, on Ubuntu20.10, I've compiled the latest git master version of FFmpeg with the native TLS/SSL support through the following configuration option: $ ./configure --enable-openssl $ ffmpeg -protocols |& egrep -i 'in|out|rtmps' Input: rtmps Output: rtmps At this moment, I want to use Nginx as the media streaming server for rtmps protocol, and I've noticed that the capability of rtmp protocol support in nginx is enabled by this module: . But I'm not sure whether the module mentioned above has the native rtmps protocol support capability. Any hints will be greatly appreciated. Regards, -- Assoc. Prof. Hongyi Zhao Theory and Simulation of Materials Hebei Polytechnic University of Science and Technology engineering NO. 552 North Gangtie Road, Xingtai, China From nginx-forum at forum.nginx.org Tue Jan 5 15:53:58 2021 From: nginx-forum at forum.nginx.org (kpirateone) Date: Tue, 05 Jan 2021 10:53:58 -0500 Subject: least_conn upstream configuration issue "host not found in upstream" with Kubernetes DNS Message-ID: I am trying to configure a least_conn upstream for kubernetes pods and find that we receive a "host not found in upstream" error on startup if the pod has not been started. 
config snippet: upstream backend { least_conn; server pod0.servicename.namespace.svc.cluster.local:8081; server pod1.servicename.namespace.svc.cluster.local:8081; server pod2.servicename.namespace.svc.cluster.local:8081; } location / { proxy_pass http://backend; } we would like to be able to start up nginx and use least_conn load balancing across all pods. however nginx fails to start unless all pods exist and can resolve dns. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290383,290383#msg-290383 From francis at daoine.org Tue Jan 5 17:27:34 2021 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Jan 2021 17:27:34 +0000 Subject: least_conn upstream configuration issue "host not found in upstream" with Kubernetes DNS In-Reply-To: References: Message-ID: <20210105172734.GM23032@daoine.org> On Tue, Jan 05, 2021 at 10:53:58AM -0500, kpirateone wrote: Hi there, > I am trying to configure a least_conn upstream for kubernetes pods and find > that we receive a "host not found in upstream" error on startup if the pod > has not been started. See, for example, the thread at https://forum.nginx.org/read.php?2,290205 In short: stock nginx does not care if the upstream service (the pod) is running or not; but it does care that whatever hostname you use in the "upstream" section resolves to an IP address at nginx-start time. That thread does list a few possible ways of approaching a solution; maybe one will work for you? > we would like to be able to start up nginx and use least_conn load balancing > across all pods. however nginx fails to start unless all pods exist and can > resolve dns. "exist" should not matter. "can resolve dns" does matter. 
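The start-time requirement described above can be checked in advance by attempting the same name resolution nginx performs for each `server` entry. A minimal shell sketch; the hostname list is illustrative, and the pod name from the question is assumed to resolve only inside that cluster:

```shell
# Pre-flight check: every name in an upstream block must resolve,
# or nginx will fail to start with "host not found in upstream".
# Hostnames are examples; replace with the ones from your config.
for host in localhost pod0.servicename.namespace.svc.cluster.local; do
    if getent hosts "$host" > /dev/null; then
        echo "$host resolves"
    else
        echo "$host does NOT resolve - nginx would refuse to start"
    fi
done
```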
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jan 5 17:48:19 2021 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Jan 2021 17:48:19 +0000 Subject: limit requests and CORS Policy In-Reply-To: References: <20201226191019.GJ23032@daoine.org> Message-ID: <20210105174819.GN23032@daoine.org> On Tue, Dec 29, 2020 at 12:33:02PM +0500, Ali Mohsin wrote: Hi there, > Hello, I have solved the issue, Thank you for following-up to the list with the solution. Great that you now have a working system; and even better that the next person searching the list with the same problem, will be able to find an answer that they can try too! Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Jan 5 18:07:06 2021 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Jan 2021 18:07:06 +0000 Subject: Block extension for specified IP address In-Reply-To: <5b42f6b9b9f2f5a1df594d2b021613fe.NginxMailingListEnglish@forum.nginx.org> References: <5b42f6b9b9f2f5a1df594d2b021613fe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210105180706.GO23032@daoine.org> On Fri, Jan 01, 2021 at 02:54:21AM -0500, Dr_tux wrote: Hi there, > I'm stuck in this. I am open to your suggestions. Thank you. Untested by me, but... > I have a reverse proxy configuration as follows, I want to block exe > extensions for requests from here. But it doesn't work. How can I do that ? I would probably use "map" to set a variable $block_this_extension to 1 or empty based on whatever part of the url is relevant to you -- perhaps match the end of $request_filename; perhaps look at $request_uri or $document_uri. And then, if the "ok" IP addresses are easy to list in a "map", set a new variable $block_this_request based on $remote_addr to empty for the "ok" addresses, defaulting to $block_this_extension. 
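The two "map" steps just described (ahead of the final check, which follows) might be sketched like this; the extension, variable names, and "ok" address are illustrative, taken from the original question, and the snippet is untested:

```nginx
# Step 1: flag request URIs ending in .exe (optionally with a query string).
map $request_uri $block_this_extension {
    default          0;
    ~*\.exe(\?.*)?$  1;
}

# Step 2: "ok" addresses are never blocked; everyone else inherits the flag.
map $remote_addr $block_this_request {
    default   $block_this_extension;
    10.1.1.2  0;
}
```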
And then just before the proxy_pass (or whatever you are trying to protect), if $block_this_request is 1 (or: is not empty), return the rejection code that you want. (If the "ok" IP addresses are not easy to list in a map, then "geo" can be used, but that becomes a little more fiddly, I think.) http://nginx.org/r/map Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jan 5 18:20:34 2021 From: nginx-forum at forum.nginx.org (kpirateone) Date: Tue, 05 Jan 2021 13:20:34 -0500 Subject: least_conn upstream configuration issue "host not found in upstream" with Kubernetes DNS In-Reply-To: <20210105172734.GM23032@daoine.org> References: <20210105172734.GM23032@daoine.org> Message-ID: <07923fb54e8198a787cfe2b5c81c91f3.NginxMailingListEnglish@forum.nginx.org> Thanks for your prompt response. I did read through that thread before, and it matches my understanding of the issue. It seems like we could: 1) use IPs instead of host names, which is not ideal due to the dynamic nature of the IPs, or 2) maintain an upstream configuration externally and use nginx reload to modify the upstream as pods come online. Scaling up, as stated in the referenced thread (https://forum.nginx.org/read.php?2,290205), is not supported in stock nginx. There doesn't seem to be any way to use a least-connected policy without individually knowing all the hosts? As in, there is no way to use least connected over a headless service that returns multiple IPs; must we explicitly declare all known hosts? 
I think I understand, thanks again Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290383,290387#msg-290387 From francis at daoine.org Tue Jan 5 19:55:19 2021 From: francis at daoine.org (Francis Daly) Date: Tue, 5 Jan 2021 19:55:19 +0000 Subject: least_conn upstream configuration issue "host not found in upstream" with Kubernetes DNS In-Reply-To: <07923fb54e8198a787cfe2b5c81c91f3.NginxMailingListEnglish@forum.nginx.org> References: <20210105172734.GM23032@daoine.org> <07923fb54e8198a787cfe2b5c81c91f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210105195519.GP23032@daoine.org> On Tue, Jan 05, 2021 at 01:20:34PM -0500, kpirateone wrote: Hi there, > 1) use ips instead of host names, not ideal due to the dynamic nature of the > ips > 2) maintain an upstream configuration externally and use nginx reload to > modify the upstream as pods come online > > scaling up, as stated in the referenced thread > (https://forum.nginx.org/read.php?2,290205) is not supported in stock > nginx. "scaling up" by turning on upstream services on pre-configured IP:ports is supported in nginx. Or on re-configured IP:ports. The thing that is not supported is run-time name resolution, if I understand things correctly. > there doesnt seem to be any way to use a least connected policy without > individually knowing all the hosts? nginx needs to know what upstream IP:ports to try to connect to. There's not really a way around that. Stock nginx learns that list at start time. > as in, there is no way to use least > connected over a headless service that returns multiple ips; we must need to > explicitly declare all known hosts? A hostname that resolves to multiple IPs should be fine, according to the documentation -- http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server nginx will request the name resolution at start time. Put your list of IPs in your name server config, or in your nginx config, as you prefer. 
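Concretely, the two options just mentioned might look like the sketch below; hostnames and IPs are illustrative, and in both cases the list is fixed when nginx starts:

```nginx
# Option 1: one DNS name that resolves to all pod IPs at nginx start time.
upstream backend {
    least_conn;
    server servicename.namespace.svc.cluster.local:8081;
}

# Option 2: the resolved pod IPs listed directly in the nginx config.
upstream backend_static {
    least_conn;
    server 10.0.0.10:8081;
    server 10.0.0.11:8081;
    server 10.0.0.12:8081;
}
```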
Cheers, f -- Francis Daly francis at daoine.org From gfrankliu at gmail.com Tue Jan 5 22:13:04 2021 From: gfrankliu at gmail.com (Frank Liu) Date: Tue, 5 Jan 2021 14:13:04 -0800 Subject: memory usage for ssl_client_certificate Message-ID: Hi, If I have a 5M pem file for ssl_client_certificate, and 1000 concurrent connections, will nginx load the file 1000 times with 1000*5M memory usage, or only 1 time load in memory to be shared across all connections? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.kiula at gmail.com Wed Jan 6 00:27:20 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Tue, 5 Jan 2021 19:27:20 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) Message-ID: We have a limitation on the server to only install Nginx via DNF. This currently (as of this date of writing) installs Nginx v1.18.0. We cannot compile our own Nginx. I understand that with this we cannot install "static modules" because that requires the whole of Nginx to be reinstalled. But I'd like to check how we can install the dynamic modules : 1. Nginx More Headers 2. Nginx Brotli 3. Nginx Security Headers Most lazy suggestions on blogs etc have instructions to either compile Nginx itself, or they're just plain erroneous. I searched here and couldn't find a question about these specific modules. Edit: just to list the nginx modules that are in fact available in my repos currently: # dnf search nginx-* Last metadata expiration check: 0:26:34 ago on Mon 04 Jan 2021 10:56:18 PM EST. 
============ Name Matched: nginx-* ========= nginx-all-modules.noarch : A meta package that installs all available Nginx modules nginx-filesystem.noarch : The basic directory layout for the Nginx server nginx-mimetypes.noarch : MIME type mappings for nginx nginx-mod-http-image-filter.x86_64 : Nginx HTTP image filter module nginx-mod-http-perl.x86_64 : Nginx HTTP perl module nginx-mod-http-xslt-filter.x86_64 : Nginx XSLT module nginx-mod-mail.x86_64 : Nginx mail modules nginx-mod-stream.x86_64 : Nginx stream modules This is quite a pointless list, sadly. Welcome any pointers to maintainable ways of installing so that a "dnf update nginx" will not break the modules. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Wed Jan 6 01:20:05 2021 From: miguelmclara at gmail.com (Miguel C) Date: Wed, 6 Jan 2021 01:20:05 +0000 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: AFAIK you would need the modules built with the same nginx version, so if they are not available via packages I'm afraid you still need to build... On Wed, Jan 6, 2021, 00:27 Phoenix Kiula wrote: > We have a limitation on the server to only install Nginx via DNF. This > currently (as of this date of writing) installs Nginx v1.18.0. We cannot > compile our own Nginx. > > I understand that with this we cannot install "static modules" because > that requires the whole of Nginx to be reinstalled. But I'd like to check > how we can install the dynamic modules > > : > > 1. Nginx More Headers > 2. Nginx Brotli > 3. Nginx Security Headers > > Most lazy suggestions on blogs etc have instructions to either compile > Nginx itself, or they're just plain erroneous. I searched here and couldn't > find a question about these specific modules. 
> > Edit: just to list the nginx modules that are in fact available in my > repos currently: > > # dnf search nginx-* > > Last metadata expiration check: 0:26:34 ago on Mon 04 Jan 2021 10:56:18 PM EST. > ============ Name Matched: nginx-* ========= > nginx-all-modules.noarch : A meta package that installs all available Nginx modules > nginx-filesystem.noarch : The basic directory layout for the Nginx server > nginx-mimetypes.noarch : MIME type mappings for nginx > nginx-mod-http-image-filter.x86_64 : Nginx HTTP image filter module > nginx-mod-http-perl.x86_64 : Nginx HTTP perl module > nginx-mod-http-xslt-filter.x86_64 : Nginx XSLT module > nginx-mod-mail.x86_64 : Nginx mail modules > nginx-mod-stream.x86_64 : Nginx stream modules > > > This is quite a pointless list, sadly. > > > Welcome any pointers to maintainable ways of installing so that a "dnf > update nginx" will not break the modules. > > > Thanks! > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.kiula at gmail.com Wed Jan 6 04:02:17 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Tue, 5 Jan 2021 23:02:17 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: Thanks Miguel. Is there a simple official guide with precise instructions that shows all these modules being installed? Couldn't find it. Thanks! On Tue, Jan 5, 2021 at 8:20 PM Miguel C wrote: > AFAIK you would need the modules built with the same nginx version, so if > they are not available via packages I'm afraid you still need to build... > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguelmclara at gmail.com Wed Jan 6 11:07:51 2021 From: miguelmclara at gmail.com (Miguel C) Date: Wed, 6 Jan 2021 11:07:51 +0000 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: It's mostly just the standard way, but you might get help from the system... I mostly use FreeBSD or docker for nginx, and in FreeBSD you have the ports system that helps; there are also "flavor" ports that install nginx with common modules (or all of them), and you can easily build and select just the ones you need. The nginx blog has a great guide on it though: https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ hope that helps On Wed, Jan 6, 2021, 04:02 Phoenix Kiula wrote: > Thanks Miguel. Is there a simple official guide with precise instructions > that shows all these modules being installed? Couldn't find it. Thanks! > > > On Tue, Jan 5, 2021 at 8:20 PM Miguel C wrote: > >> AFAIK you would need the modules built with the same nginx version, so if >> they are not available via packages I'm afraid you still need to build... >> >> > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.kiula at gmail.com Wed Jan 6 22:30:53 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Wed, 6 Jan 2021 17:30:53 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: Thank you Miguel. But you misunderstood the question. This suggestion... > nginx blog as a great guide on it though > https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ > > ...misses the very first question in this thread: we cannot compile nginx from source on our server. At least not in a way that that compiled version would become the nginx installed in our *system*. 
We need to install nginx via the default Fedora dnf package manager, which at this time installs 1.18.0. Now, what I don't mind doing is to compile nginx in some self-contained folder somewhere, then use that compilation to create the .so or whatever the module file for that version is....if all of this module compiling does *not* affect the system-installed dnf version of nginx. Is this possible? If so, the instructions do not help with this. The first step in that official tutorial is to compile nginx and that compiled nginx then becomes the system's main nginx. It replaces whatever was installed via "dnf install nginx". Yes? Hope this makes sense. Have I correctly understood how nginx compilation works? Appreciate any pointers. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 7 02:15:42 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 Jan 2021 05:15:42 +0300 Subject: memory usage for ssl_client_certificate In-Reply-To: References: Message-ID: <20210107021542.GT1147@mdounin.ru> Hello! On Tue, Jan 05, 2021 at 02:13:04PM -0800, Frank Liu wrote: > If I have a 5M pem file for ssl_client_certificate, and 1000 concurrent > connections, will nginx load the file 1000 times with 1000*5M memory usage, > or only 1 time load in memory to be shared across all connections? The certificates in the ssl_client_certificate file are loaded during configuration parsing, and the amount of memory used does not depend on the number of connections. 
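That point can be read back into a minimal server block: the CA bundle named by ssl_client_certificate is parsed once, when the configuration is loaded, and shared across all connections. A sketch with illustrative paths:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;   # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # A 5M CA bundle here is loaded once, at configuration parse time,
    # and shared by all connections; it is not re-read per connection.
    ssl_client_certificate /etc/nginx/ssl/client-ca.pem;
    ssl_verify_client on;
}
```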
-- Maxim Dounin http://mdounin.ru/ From teward at thomas-ward.net Thu Jan 7 03:19:32 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Wed, 6 Jan 2021 22:19:32 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> I'm fairly familiar with the 'compiling process' for dynamic modules - the process is the same for NGINX Open Source as well as NGINX Plus. You would need to compile the modules alongside NGINX and then harvest the compiled .so files and put them into corresponding locations on the system where you want to load the dynamic modules. In Ubuntu, we do this (or at least, I do) by using the same OS and libraries as installed on the target system (as well as the same NGINX version). This being said, **compiling** NGINX is different from **installing** NGINX - you can *compile* the nginx version 1.18.0 with the dynamic modules and the same configuration as the Fedora version, and then **take the compiled module** and load it up in your installed nginx instance. Compiling NGINX to make the dynamic module does NOT require you to then install that NGINX version, provided that you match the `make` steps and installed/available libraries to those used in the original nginx compile done in Fedora. Thomas On 1/6/21 5:30 PM, Phoenix Kiula wrote: > Thank you Miguel. But you misunderstood the question. This suggestion... > > nginx blog as a great guide on it though > https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ > > > > > > ...misses the very first question in this thread: we cannot compile > nginx from source on our server. At least not in a way that that > compiled version would become the nginx installed in our *system*. We > need to install nginx via the default Fedora dnf package manager, > which at this time installs 1.18.0. 
> > Now, what I don't mind doing is to compile nginx in some > self-contained folder somewhere, then use that compilation to create > the .so or whatever the module file for that version is....if all of > this module compiling does *not* affect the system-installed dnf > version of nginx. Is this possible? > > If so, the instructions do not help with this. The first step in that > official tutorial is to compile nginx and that compiled nginx then > becomes the system's main nginx. It replaces whatever was installed > via "dnf install nginx". Yes? > > Hope this makes sense. Have I correctly understood how nginx > compilation works? Appreciate any pointers. > > Thank you. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.kiula at gmail.com Thu Jan 7 03:47:09 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Wed, 6 Jan 2021 22:47:09 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> Message-ID: Thank you Thomas. Much appreciate this, it sounds promising. Appreciate your clarity. So if I: 1. Compile nginx via `dnf install nginx` and that becomes my system's Nginx, installed usually in `/etc/nginx` 2. In a totally separate folder, say, `/usr/src`, I then download a tarball of Nginx and compile it along with the dynamic modules -- which will produce the .so files for said modules 3. Copy over the modules into the usual `/etc/nginx/modules` folder from Step 1 ....in this sequence of steps, how do I make sure that: A. 
The compilation in Step 2 does not become my "system's nginx" (so when I do an `nginx -v` at the command prompt it should refer to the nginx installed in Step 1 above, and *not* the one compiled via Step 2) B. The compile in Step 2 will use the "same libraries" that DNF used? In the DNF version of life I didn't pick any libraries manually...DNF found what was on my system. Will the manual compile not do the same? Many thanks! On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward wrote: > I'm fairly familiar with the 'compiling process' for dynamic modules - the > process is the same for NGINX Open Source as well as NGINX Plus. > > You would need to compile the modules alongside NGINX and then harvest the > compiled .so files and put them into corresponding locations on the system > you want to load the dynamic modules. In Ubuntu, we do this (or at least, > I do) by using the same OS and libraries as installed on the target system > (as well as the same NGINX version). > > This being said, **compiling** NGINX is different from **installing** > NGINX - you can *compile* the nginx version 1.18.0 with the dynamic modules > and the same configuration as the Fedora version, and then **take the > compiled module** and load it up in your installed nginx instance. > Compiling NGINX to make the dynamic module does NOT require you to then > install that NGINX version, provided that you match the `make` steps and > installed/available libraries to those used in the original nginx compile > done in Fedora. > > > Thomas > > > On 1/6/21 5:30 PM, Phoenix Kiula wrote: > > Thank you Miguel. But you misunderstood the question. This suggestion... > > > >> nginx blog has a great guide on it though >> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >> >> > > > ...misses the very first question in this thread: we cannot compile nginx > from source on our server. At least not in a way that that compiled version > would become the nginx installed in our *system*. 
We need to install nginx > via the default Fedora dnf package manager, which at this time installs > 1.18.0. > > Now, what I don't mind doing is to compile nginx in some self-contained > folder somewhere, then use that compilation to create the .so or whatever > the module file for that version is....if all of this module compiling does > *not* affect the system-installed dnf version of nginx. Is this possible? > > If so, the instructions do not help with this. The first step in that > official tutorial is to compile nginx and that compiled nginx then becomes > the system's main nginx. It replaces whatever was installed via "dnf > install nginx". Yes? > > Hope this makes sense. Have I correctly understood how nginx compilation > works? Appreciate any pointers. > > Thank you. > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Thu Jan 7 03:53:35 2021 From: steve at greengecko.co.nz (steve at greengecko.co.nz) Date: Thu, 07 Jan 2021 03:53:35 +0000 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> Message-ID: <89862f2445e3eaa398f599bd6f4c5588@greengecko.co.nz> nginx -T will provide you with the config that is used for the delivered version of nginx 1.18.0 under fedora. That's a good starting point. Steve January 7, 2021 4:47 PM, "Phoenix Kiula" )> wrote: Thank you Thomas. Much appreciate this, it sounds promising. Appreciate your clarity. So if I: 1. Compile nginx via `dnf install nginx` and that becomes my system's Nginx, installed usually in `/etc/nginx` 2. In a totally separate folder, say, `/usr/src`, I then download a tarball of Nginx and compile it along with the dynamic modules -- which will produce the .so files for said modules 3. 
Copy over the modules into the usual `/etc/nginx/modules` folder from Step 1 ....in this sequence of steps, how do I make sure that: A. The compilation in Step 2 does not become my "system's nginx" (so when I do an `nginx -v` at the command prompt it should be refer to the nginx installed in Step 1 above, and *not* the one compiled via Step 2) B. The compile in Step 2 will use the "same libraries" that DNF used? In the DNF version of life I didn't pick any libraries manually...DNF found what was on my system. Will the manual compile not do the same? Many thanks! On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward wrote: I'm fairly familiar with the 'compiling process' for dynamic modules - the process is the same for NGINX Open Source as wel as NGINX Plus. You would need to compile the modules alongside NGINX and then harvest the compiled .so files and put them into corresponding locations on the system you want to load the dynamic modules. In Ubuntu, we do this (or at least, I do) by using the same OS and libraries as installed on the target system (as well as the same NGINX version). This being said, **compiling** NGINX is different than **installing** NGINX - you can *compile* the nginx version 1.18.0 with the dynamic modules and the same configuration as the Fedora version, and then **take the compiled module** and load it up in your installed nginx instance. Compiling NGINX to make the dynamic module does NOT require you to then install that NGINX version, provided that you match the `make` steps and installed/available libraries to those used in the original nginx compile done in Fedora. Thomas On 1/6/21 5:30 PM, Phoenix Kiula wrote: Thank you Miguel. But you misunderstood the question. This suggestion... 
nginx blog as a great guide on it though https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ (https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/) ...misses the very first question in this thread: we cannot compile nginx from source on our server. At least not in a way that that compiled version would become the nginx installed in our *system*. We need to install nginx via the default Fedora dnf package manager, which at this time installs 1.18.0. Now, what I don't mind doing is to compile nginx in some self-contained folder somewhere, then use that compilation to create the .so or whatever the module file for that version is....if all of this module compiling does *not* affect the system-installed dnf version of nginx. Is this possible? If so, the instructions do not help with this. The first step in that official tutorial is to compile nginx and that compiled nginx then becomes the system's main nginx. It replaces whatever was installed via "dnf install nginx". Yes? Hope this makes sense. Have I correctly understood how nginx compilation works? Appreciate any pointers. Thank you. _______________________________________________ nginx mailing list nginx at nginx.org (mailto:nginx at nginx.org) http://mailman.nginx.org/mailman/listinfo/nginx (http://mailman.nginx.org/mailman/listinfo/nginx) -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Thu Jan 7 03:55:43 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Wed, 6 Jan 2021 22:55:43 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> Message-ID: <8c222dd1-6ff7-a376-1111-c3b85d207898@thomas-ward.net> This is where **manually compiling by hand** is the problem.? 
You would do the compilation in a separate directory **NOT** inside the space of the system's control - usually I spawn new `/tmp` directories or destructible directories in my home space. I'm not familiar with Fedora and the `dnf` command - but `dnf install` installs the repositories-available version of NGINX for Fedora's repos. The next steps you would take by hand are: (1) Install **all build dependencies and runtime dependencies** for NGINX and the modules you're compiling dynamically. (2) Download the tarball to temporary space. (3) At the *very* least (though I suggest you go digging in the source of Fedora's repos to get their build options, you can find them in `nginx -V` output though) you need to do this: ./configure --add-dynamic-module=/path/to/third/party/module/source/directory make **This does not install nginx; this only compiles the binaries.** (4) Dig in the completed compile and find your .so file and put it in /etc/nginx/modules (I believe that's where it is on your system, but I can't validate that - again I'm not a Fedora user so I can't verify that's exactly where you drop the module files themselves.) These are the *basic* steps - but again this will **not** install your manually compiled nginx to overwrite what `dnf` installs - this simply compiles everything and it's up to you to go digging to get the components you need and put them where you need them to be for your system to recognize them. Thomas On 1/6/21 10:47 PM, Phoenix Kiula wrote: > Thank you Thomas. Much appreciate this, it sounds promising. > Appreciate your clarity. > > So if I: > > 1. Compile nginx via `dnf install nginx` and that becomes my system's > Nginx, installed usually in `/etc/nginx` > > 2. In a totally separate folder, say, `/usr/src`, I then download a > tarball of Nginx and compile it along with the dynamic modules -- > which will produce the .so files for said modules > > 3. 
Copy over the modules into the usual `/etc/nginx/modules` folder > from Step 1 > > > ....in this sequence?of steps, how do I make sure that: > > > A. The compilation in Step 2 does not become my "system's nginx" (so > when I do an `nginx -v` at the command prompt it should be refer to > the nginx installed in Step 1 above, and *not* the one compiled via > Step 2) > > B. The compile in Step 2 will use the "same libraries" that DNF used? > In the DNF version of life I didn't pick any libraries manually...DNF > found what was on my system. Will the manual compile not do the same? > > Many thanks! > > > > > On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward > wrote: > > I'm fairly familiar with the 'compiling process' for dynamic > modules - the process is the same for NGINX Open Source as wel as > NGINX Plus. > > You would need to compile the modules alongside NGINX and then > harvest the compiled .so files and put them into corresponding > locations on the system you want to load the dynamic modules.? In > Ubuntu, we do this (or at least, I do) by using the same OS and > libraries as installed on the target system (as well as the same > NGINX version). > > This being said, **compiling** NGINX is different than > **installing** NGINX - you can *compile* the nginx version 1.18.0 > with the dynamic modules and the same configuration as the Fedora > version, and then **take the compiled module** and load it up in > your installed nginx instance.? Compiling NGINX to make the > dynamic module does NOT require you to then install that NGINX > version, provided that you match the `make` steps and > installed/available libraries to those used in the original nginx > compile done in Fedora. > > > Thomas > > > On 1/6/21 5:30 PM, Phoenix Kiula wrote: >> Thank you Miguel. But you misunderstood the question. This >> suggestion... 
>> >> nginx blog as a great guide on it though >> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >> >> >> >> >> >> ...misses the very first question in this thread: we cannot >> compile nginx from source on our server. At least not in a way >> that that compiled version would become the nginx installed in >> our *system*. We need to install nginx via the default Fedora dnf >> package manager, which at this time installs 1.18.0. >> >> Now, what I don't mind doing is to compile nginx in some >> self-contained folder somewhere, then use that compilation to >> create the .so or whatever the module file for that version >> is....if all of this module compiling does *not* affect the >> system-installed dnf version of nginx. Is this possible? >> >> If so, the instructions do not help with this. The first step in >> that official tutorial is to compile nginx and that compiled >> nginx then becomes the system's main nginx. It replaces whatever >> was installed via "dnf install nginx". Yes? >> >> Hope this makes sense. Have I correctly understood how nginx >> compilation works? Appreciate any pointers. >> >> Thank you. >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.kiula at gmail.com Thu Jan 7 04:19:24 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Wed, 6 Jan 2021 23:19:24 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: <8c222dd1-6ff7-a376-1111-c3b85d207898@thomas-ward.net> References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> <8c222dd1-6ff7-a376-1111-c3b85d207898@thomas-ward.net> Message-ID: Perfect. This is clear Thomas. Much appreciated...between Miguel's original pointer and this clarity from you I think it solves what I'm looking for. 
One last question: the `nginx -T` options...I'll add those to the ./configure command, yes? On Wed, Jan 6, 2021 at 10:55 PM Thomas Ward wrote: > This is where **manually compiling by hand** is the problem. You would do > the compilation in a separate directory **NOT** inside the space of the > system's control - usually I spawn new `/tmp` directories or destructable > directories in my home space. > > I'm not familiar with Fedora and the `dnf` command - but `dnf install` > installs the repositories-available-version of NGINX for Fedora's repos. > > The next steps you would take by hand are: > > (1) Install **all build dependencies and runtime dependencies** for NGINX > and the modules you're compiling dynamically. > > (2) Download the tarball to temporary space. > > (3) At the *very* least (though I suggest you go digging in the source of > Fedora's repos to get their build options, you can find them with `nginx > -T` output though) you need to do this: > > ./configure > --add-dynamic-module=/path/to/third/party/module/source/directory > make > > **This does not install nginx, this is the compiling of the binaries.** > > (4) Dig in the completed compile and find your .so file and put it in > /etc/nginx/modules (I believe that's where it is on your system, but I > can't validate that - again I'm not a Fedora user so I can't verify that's > exactly where you drop the module files themselves. > > > These're the *basic* steps - but again this will **not** install your > manually compiled nginx to overwrite what `dnf` installs - this simply > compiles everything and it's up to you to go digging to get the components > you need and put them where you need them to be for your system to > recognize them. > > > Thomas > On 1/6/21 10:47 PM, Phoenix Kiula wrote: > > Thank you Thomas. Much appreciate this, it sounds promising. Appreciate > your clarity. > > So if I: > > 1. 
Compile nginx via `dnf install nginx` and that becomes my system's > Nginx, installed usually in `/etc/nginx` > > 2. In a totally separate folder, say, `/usr/src`, I then download a > tarball of Nginx and compile it along with the dynamic modules -- which > will produce the .so files for said modules > > 3. Copy over the modules into the usual `/etc/nginx/modules` folder from > Step 1 > > > ....in this sequence of steps, how do I make sure that: > > > A. The compilation in Step 2 does not become my "system's nginx" (so when > I do an `nginx -v` at the command prompt it should be refer to the nginx > installed in Step 1 above, and *not* the one compiled via Step 2) > > B. The compile in Step 2 will use the "same libraries" that DNF used? In > the DNF version of life I didn't pick any libraries manually...DNF found > what was on my system. Will the manual compile not do the same? > > Many thanks! > > > > > On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward > wrote: > >> I'm fairly familiar with the 'compiling process' for dynamic modules - >> the process is the same for NGINX Open Source as wel as NGINX Plus. >> >> You would need to compile the modules alongside NGINX and then harvest >> the compiled .so files and put them into corresponding locations on the >> system you want to load the dynamic modules. In Ubuntu, we do this (or at >> least, I do) by using the same OS and libraries as installed on the target >> system (as well as the same NGINX version). >> >> This being said, **compiling** NGINX is different than **installing** >> NGINX - you can *compile* the nginx version 1.18.0 with the dynamic modules >> and the same configuration as the Fedora version, and then **take the >> compiled module** and load it up in your installed nginx instance. 
>> Compiling NGINX to make the dynamic module does NOT require you to then >> install that NGINX version, provided that you match the `make` steps and >> installed/available libraries to those used in the original nginx compile >> done in Fedora. >> >> >> Thomas >> >> >> On 1/6/21 5:30 PM, Phoenix Kiula wrote: >> >> Thank you Miguel. But you misunderstood the question. This suggestion... >> >> >> >>> nginx blog as a great guide on it though >>> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >>> >>> >> >> >> ...misses the very first question in this thread: we cannot compile nginx >> from source on our server. At least not in a way that that compiled version >> would become the nginx installed in our *system*. We need to install nginx >> via the default Fedora dnf package manager, which at this time installs >> 1.18.0. >> >> Now, what I don't mind doing is to compile nginx in some self-contained >> folder somewhere, then use that compilation to create the .so or whatever >> the module file for that version is....if all of this module compiling does >> *not* affect the system-installed dnf version of nginx. Is this possible? >> >> If so, the instructions do not help with this. The first step in that >> official tutorial is to compile nginx and that compiled nginx then becomes >> the system's main nginx. It replaces whatever was installed via "dnf >> install nginx". Yes? >> >> Hope this makes sense. Have I correctly understood how nginx compilation >> works? Appreciate any pointers. >> >> Thank you. >> >> >> _______________________________________________ >> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From teward at thomas-ward.net Thu Jan 7 06:34:58 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 7 Jan 2021 01:34:58 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> <8c222dd1-6ff7-a376-1111-c3b85d207898@thomas-ward.net> Message-ID: You should, yes, to make sure your build as closely mirrors what is in the Fedora repos. Thomas On 1/6/21 11:19 PM, Phoenix Kiula wrote: > Perfect. This is clear Thomas. Much appreciated...between Miguel's > original pointer and this clarity from you I think it solves what?I'm > looking for. One last question: the `nginx -T` options...I'll add > those to the ./configure command, yes? > > > > On Wed, Jan 6, 2021 at 10:55 PM Thomas Ward > wrote: > > This is where **manually compiling by hand** is the problem.? You > would do the compilation in a separate directory **NOT** inside > the space of the system's control - usually I spawn new `/tmp` > directories or destructable directories in my home space. > > I'm not familiar with Fedora and the `dnf` command - but `dnf > install` installs the repositories-available-version of NGINX for > Fedora's repos. > > The next steps you would take by hand are: > > (1) Install **all build dependencies and runtime dependencies** > for NGINX and the modules you're compiling dynamically. > > (2) Download the tarball to temporary space. 
> > (3) At the *very* least (though I suggest you go digging in the > source of Fedora's repos to get their build options, you can find > them with `nginx -T` output though) you need to do this: > > ./configure > --add-dynamic-module=/path/to/third/party/module/source/directory > make > > **This does not install nginx, this is the compiling of the > binaries.** > > (4) Dig in the completed compile and find your .so file and put it > in /etc/nginx/modules (I believe that's where it is on your > system, but I can't validate that - again I'm not a Fedora user so > I can't verify that's exactly where you drop the module files > themselves. > > > These're the *basic* steps - but again this will **not** install > your manually compiled nginx to overwrite what `dnf` installs - > this simply compiles everything and it's up to you to go digging > to get the components you need and put them where you need them to > be for your system to recognize them. > > > Thomas > > On 1/6/21 10:47 PM, Phoenix Kiula wrote: >> Thank you Thomas. Much appreciate this, it sounds promising. >> Appreciate your clarity. >> >> So if I: >> >> 1. Compile nginx via `dnf install nginx` and that becomes my >> system's Nginx, installed usually in `/etc/nginx` >> >> 2. In a totally separate folder, say, `/usr/src`, I then download >> a tarball of Nginx and compile it along with the dynamic modules >> -- which will produce the .so files for said modules >> >> 3. Copy over the modules into the usual `/etc/nginx/modules` >> folder from Step 1 >> >> >> ....in this sequence?of steps, how do I make sure that: >> >> >> A. The compilation in Step 2 does not become my "system's nginx" >> (so when I do an `nginx -v` at the command prompt it should be >> refer to the nginx installed in Step 1 above, and *not* the one >> compiled via Step 2) >> >> B. The compile in Step 2 will use the "same libraries" that DNF >> used? 
In the DNF version of life I didn't pick any libraries >> manually...DNF found what was on my system. Will the manual >> compile not do the same? >> >> Many thanks! >> >> >> >> >> On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward >> > wrote: >> >> I'm fairly familiar with the 'compiling process' for dynamic >> modules - the process is the same for NGINX Open Source as >> wel as NGINX Plus. >> >> You would need to compile the modules alongside NGINX and >> then harvest the compiled .so files and put them into >> corresponding locations on the system you want to load the >> dynamic modules.? In Ubuntu, we do this (or at least, I do) >> by using the same OS and libraries as installed on the target >> system (as well as the same NGINX version). >> >> This being said, **compiling** NGINX is different than >> **installing** NGINX - you can *compile* the nginx version >> 1.18.0 with the dynamic modules and the same configuration as >> the Fedora version, and then **take the compiled module** and >> load it up in your installed nginx instance.? Compiling NGINX >> to make the dynamic module does NOT require you to then >> install that NGINX version, provided that you match the >> `make` steps and installed/available libraries to those used >> in the original nginx compile done in Fedora. >> >> >> Thomas >> >> >> On 1/6/21 5:30 PM, Phoenix Kiula wrote: >>> Thank you Miguel. But you misunderstood the question. This >>> suggestion... >>> >>> nginx blog as a great guide on it though >>> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >>> >>> >>> >>> >>> >>> ...misses the very first question in this thread: we cannot >>> compile nginx from source on our server. At least not in a >>> way that that compiled version would become the nginx >>> installed in our *system*. We need to install nginx via the >>> default Fedora dnf package manager, which at this time >>> installs 1.18.0. 
>>> >>> Now, what I don't mind doing is to compile nginx in some >>> self-contained folder somewhere, then use that compilation >>> to create the .so or whatever the module file for that >>> version is....if all of this module compiling does *not* >>> affect the system-installed dnf version of nginx. Is this >>> possible? >>> >>> If so, the instructions do not help with this. The first >>> step in that official tutorial is to compile nginx and that >>> compiled nginx then becomes the system's main nginx. It >>> replaces whatever was installed via "dnf install nginx". Yes? >>> >>> Hope this makes sense. Have I correctly understood how nginx >>> compilation works? Appreciate any pointers. >>> >>> Thank you. >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 7 06:57:36 2021 From: nginx-forum at forum.nginx.org (Dr_tux) Date: Thu, 07 Jan 2021 01:57:36 -0500 Subject: Block extension for specified IP address In-Reply-To: <20210105180706.GO23032@daoine.org> References: <20210105180706.GO23032@daoine.org> Message-ID: <874a3e455b9d35ecf859ebfe785ac601.NginxMailingListEnglish@forum.nginx.org> Thanks for your help. I solved it as follows. location ~* \.(exe)$ { proxy_pass http://127.0.0.1:4545; allow 192.168.1.1/32; deny all; } Best. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290354,290412#msg-290412 From miguelmclara at gmail.com Thu Jan 7 09:42:18 2021 From: miguelmclara at gmail.com (Miguel C) Date: Thu, 7 Jan 2021 09:42:18 +0000 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: On Wed, Jan 6, 2021, 22:31 Phoenix Kiula wrote: > Thank you Miguel. But you misunderstood the question. This suggestion... 
> > > >> nginx blog has a great guide on it though >> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >> >> > > > ...misses the very first question in this thread: we cannot compile nginx > from source on our server. At least not in a way that that compiled version > would become the nginx installed in our *system*. We need to install nginx > via the default Fedora dnf package manager, which at this time installs > 1.18.0. > I didn't think of that as an issue, as you can build on another machine, as long as it's the same/similar platform/arch, but others have suggested more steps on that and hopefully that gets you going. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix.kiula at gmail.com Fri Jan 8 01:56:21 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Thu, 7 Jan 2021 20:56:21 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> <8c222dd1-6ff7-a376-1111-c3b85d207898@thomas-ward.net> Message-ID: Thank you. So I tried this. It's not as straightforward as it sounds. Many issues with the ./configure step. 
If I include the "nginx -V" compile options from my dnf repo install, it gives this stuff below, to which I add the "--add-compat" with the modules to add (last four lines)-- ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-stream_ssl_preread_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-http_auth_request_module --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -flto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' \ --with-compat 
\ --add-dynamic-module=../ngx_brotli \ --add-dynamic-module=../headers-more-nginx-module \ --add-dynamic-module=../ngx_security_headers This gives the first error: error: the invalid value in --with-ld-opt="-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E" Not super informative. So I just remove this "--with-ld-opt" parameter. Next error: ./configure: no supported file AIO was found Currently file AIO is supported on FreeBSD 4.3+ and Linux 2.6.22+ only So I try to do a "yum install libaio". # yum install libaio Last metadata expiration check: 0:00:22 ago on Thu 07 Jan 2021 08:44:10 PM EST. Package libaio-0.3.111-10.fc33.x86_64 is already installed. Dependencies resolved. Nothing to do. Complete! What do I need instead of this installed lib in the system? Anyway, I just delete this option then. Try again the ./configure: Next error: ./configure: error: can not detect int size Googling for this suggests on stackoverflow that the "--with-cc-opt" is the culprit. Not sure what precisely in this is the "int size" that it was trying to detect. So I delete this whole parameter to try: --with-cc-opt='-O2 -flto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' This entire thing is gone. Trying again without this above param: ./configure: error: the HTTP image filter module requires the GD library. Seriously, what amazing engineer has made this stuff? The GD library is already installed on my system, but I check some variations: # yum install libgd Last metadata expiration check: 0:00:05 ago on Thu 07 Jan 2021 08:50:20 PM EST. 
No match for argument: *libgd* Error: Unable to find a match: libgd # yum install libgd-dev Last metadata expiration check: 0:00:16 ago on Thu 07 Jan 2021 08:50:20 PM EST. No match for argument: *libgd-dev* Error: Unable to find a match: libgd-dev # yum install gd Last metadata expiration check: 0:00:51 ago on Thu 07 Jan 2021 08:50:20 PM EST. Package gd-2.3.0-3.fc33.x86_64 is already installed. Dependencies resolved. Nothing to do. Complete! At this point I basically give up? What the heck? So I compiled the modules without all of these. Removed XSLT, removed image filters, everything. The .so modules thus created of course don't do much. When they're copied to the /etc/nginx/modules/ folder, and nginx reloaded, they create an issue. # systemctl status nginx.service Jan 07 20:54:00 SERVER systemd[1]: Starting The nginx HTTP and reverse proxy server... Jan 07 20:54:00 SERVER nginx[39083]: nginx: [emerg] module "/usr/share/nginx/modules/ngx_http_security_headers_module.so"> Jan 07 20:54:00 SERVER nginx[39083]: nginx: configuration file /etc/nginx/nginx.conf test failed Jan 07 20:54:00 SERVER systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE Jan 07 20:54:00 SERVER systemd[1]: nginx.service: Failed with result 'exit-code'. Jan 07 20:54:00 SERVER systemd[1]: Failed to start The nginx HTTP and reverse proxy server. This doesn't give any meaningful error. Nor does "journalctl -xe". Any suggestions to make this somewhat more sensible than this utterly mediocre experience? Thanks. On Thu, Jan 7, 2021 at 1:35 AM Thomas Ward wrote: > You should, yes, to make sure your build as closely mirrors what is in the > Fedora repos. > > > Thomas > > > On 1/6/21 11:19 PM, Phoenix Kiula wrote: > > Perfect. This is clear Thomas. Much appreciated...between Miguel's > original pointer and this clarity from you I think it solves what I'm > looking for. One last question: the `nginx -T` options...I'll add those to > the ./configure command, yes? 
> > > > On Wed, Jan 6, 2021 at 10:55 PM Thomas Ward > wrote: > >> This is where **manually compiling by hand** is the problem. You would >> do the compilation in a separate directory **NOT** inside the space of the >> system's control - usually I spawn new `/tmp` directories or destructable >> directories in my home space. >> >> I'm not familiar with Fedora and the `dnf` command - but `dnf install` >> installs the repositories-available-version of NGINX for Fedora's repos. >> >> The next steps you would take by hand are: >> >> (1) Install **all build dependencies and runtime dependencies** for NGINX >> and the modules you're compiling dynamically. >> >> (2) Download the tarball to temporary space. >> >> (3) At the *very* least (though I suggest you go digging in the source of >> Fedora's repos to get their build options, you can find them with `nginx >> -T` output though) you need to do this: >> >> ./configure >> --add-dynamic-module=/path/to/third/party/module/source/directory >> make >> >> **This does not install nginx, this is the compiling of the binaries.** >> >> (4) Dig in the completed compile and find your .so file and put it in >> /etc/nginx/modules (I believe that's where it is on your system, but I >> can't validate that - again I'm not a Fedora user so I can't verify that's >> exactly where you drop the module files themselves. >> >> >> These're the *basic* steps - but again this will **not** install your >> manually compiled nginx to overwrite what `dnf` installs - this simply >> compiles everything and it's up to you to go digging to get the components >> you need and put them where you need them to be for your system to >> recognize them. >> >> >> Thomas >> On 1/6/21 10:47 PM, Phoenix Kiula wrote: >> >> Thank you Thomas. Much appreciate this, it sounds promising. Appreciate >> your clarity. >> >> So if I: >> >> 1. Compile nginx via `dnf install nginx` and that becomes my system's >> Nginx, installed usually in `/etc/nginx` >> >> 2. 
In a totally separate folder, say, `/usr/src`, I then download a >> tarball of Nginx and compile it along with the dynamic modules -- which >> will produce the .so files for said modules >> >> 3. Copy over the modules into the usual `/etc/nginx/modules` folder from >> Step 1 >> >> >> ....in this sequence of steps, how do I make sure that: >> >> >> A. The compilation in Step 2 does not become my "system's nginx" (so when >> I do an `nginx -v` at the command prompt it should be refer to the nginx >> installed in Step 1 above, and *not* the one compiled via Step 2) >> >> B. The compile in Step 2 will use the "same libraries" that DNF used? In >> the DNF version of life I didn't pick any libraries manually...DNF found >> what was on my system. Will the manual compile not do the same? >> >> Many thanks! >> >> >> >> >> On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward >> wrote: >> >>> I'm fairly familiar with the 'compiling process' for dynamic modules - >>> the process is the same for NGINX Open Source as wel as NGINX Plus. >>> >>> You would need to compile the modules alongside NGINX and then harvest >>> the compiled .so files and put them into corresponding locations on the >>> system you want to load the dynamic modules. In Ubuntu, we do this (or at >>> least, I do) by using the same OS and libraries as installed on the target >>> system (as well as the same NGINX version). >>> >>> This being said, **compiling** NGINX is different than **installing** >>> NGINX - you can *compile* the nginx version 1.18.0 with the dynamic modules >>> and the same configuration as the Fedora version, and then **take the >>> compiled module** and load it up in your installed nginx instance. >>> Compiling NGINX to make the dynamic module does NOT require you to then >>> install that NGINX version, provided that you match the `make` steps and >>> installed/available libraries to those used in the original nginx compile >>> done in Fedora. 
>>> >>> >>> Thomas >>> >>> >>> On 1/6/21 5:30 PM, Phoenix Kiula wrote: >>> >>> Thank you Miguel. But you misunderstood the question. This suggestion... >>> >>> >>> >>>> nginx blog as a great guide on it though >>>> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >>>> >>>> >>> >>> >>> ...misses the very first question in this thread: we cannot compile >>> nginx from source on our server. At least not in a way that that compiled >>> version would become the nginx installed in our *system*. We need to >>> install nginx via the default Fedora dnf package manager, which at this >>> time installs 1.18.0. >>> >>> Now, what I don't mind doing is to compile nginx in some self-contained >>> folder somewhere, then use that compilation to create the .so or whatever >>> the module file for that version is....if all of this module compiling does >>> *not* affect the system-installed dnf version of nginx. Is this possible? >>> >>> If so, the instructions do not help with this. The first step in that >>> official tutorial is to compile nginx and that compiled nginx then becomes >>> the system's main nginx. It replaces whatever was installed via "dnf >>> install nginx". Yes? >>> >>> Hope this makes sense. Have I correctly understood how nginx compilation >>> works? Appreciate any pointers. >>> >>> Thank you. >>> >>> >>> _______________________________________________ >>> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phoenix.kiula at gmail.com Fri Jan 8 03:38:50 2021 From: phoenix.kiula at gmail.com (Phoenix Kiula) Date: Thu, 7 Jan 2021 22:38:50 -0500 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: <6e5b44e2-bd0e-0da1-2ef2-05702cfc37fa@thomas-ward.net> <8c222dd1-6ff7-a376-1111-c3b85d207898@thomas-ward.net> Message-ID: Just to add to this, despite having compiled it inside a freshly downloaded folder of nginx 1.18.0, somehow it seems the modules were compiled with 1.16.1? How does this happen? # nginx -t nginx: [emerg] module "/usr/share/nginx/modules/ngx_http_security_headers_module.so" version 1016001 instead of 1018000 in /etc/nginx/nginx.conf:16 nginx: configuration file /etc/nginx/nginx.conf test failed On Thu, Jan 7, 2021 at 8:56 PM Phoenix Kiula wrote: > Thank you. So I tried this. It's not as straightforward as it sounds. > > Many issues with the ./configure step. If I include the "nginx -V" compile > options from my dnf repo install, it gives this stuff below, to which I add > the "--add-compat" with the modules to add (last four lines)-- > > > ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --http-client-body-temp-path=/var/lib/nginx/tmp/client_body > --http-proxy-temp-path=/var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi > --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi > --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid > --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx > --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module > --with-http_realip_module --with-stream_ssl_preread_module > --with-http_addition_module --with-http_xslt_module=dynamic > --with-http_image_filter_module=dynamic --with-http_sub_module > 
--with-http_dav_module --with-http_flv_module --with-http_mp4_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_random_index_module --with-http_secure_link_module > --with-http_degradation_module --with-http_slice_module > --with-http_stub_status_module --with-http_perl_module=dynamic > --with-http_auth_request_module --with-mail=dynamic --with-mail_ssl_module > --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module > --with-google_perftools_module --with-debug --with-cc-opt='-O2 -flto > -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall > -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS > -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong > -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic > -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' > --with-ld-opt='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now > -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' \ > --with-compat \ > --add-dynamic-module=../ngx_brotli \ > --add-dynamic-module=../headers-more-nginx-module \ > --add-dynamic-module=../ngx_security_headers > > > > This gives the first error: > > error: the invalid value in --with-ld-opt="-Wl,-z,relro -Wl,--as-needed > -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E" > > Not super informative. So I just remove this "--with-ld-opt" parameter. > > Next error: > > ./configure: no supported file AIO was found > > Currently file AIO is supported on FreeBSD 4.3+ and Linux 2.6.22+ only > > So I try to do a "yum install libaio". > > # yum install libaio > > Last metadata expiration check: 0:00:22 ago on Thu 07 Jan 2021 08:44:10 PM > EST. > > Package libaio-0.3.111-10.fc33.x86_64 is already installed. > > Dependencies resolved. > > Nothing to do. > > Complete! > > > What do I need instead of this installed lib in the system? Anyway, I just > delete this option then. 
Try again the ./configure: > > Next error: > > ./configure: error: can not detect int size > > Googling for this suggests on stackoverflow that the "--with-cc-opt" is > the culprit. Not sure what precisely in this is the "int size" that it was > trying to detect. So I delete this whole parameter to try: > > --with-cc-opt='-O2 -flto -ffat-lto-objects -fexceptions -g > -grecord-gcc-switches -pipe -Wall -Werror=format-security > -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS > -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong > -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic > -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' > > > This entire thing is gone. > > Trying again without this above param: > > > ./configure: error: the HTTP image filter module requires the GD library. > > > Seriously, what amazing engineer has made this stuff? The GD library is > already installed on my system, but I check some variations: > > > # yum install libgd > > Last metadata expiration check: 0:00:05 ago on Thu 07 Jan 2021 08:50:20 PM > EST. > > No match for argument: *libgd* > > Error: Unable to find a match: libgd > > > > # yum install libgd-dev > > > Last metadata expiration check: 0:00:16 ago on Thu 07 Jan 2021 08:50:20 PM > EST. > > No match for argument: > *libgd-dev* > > Error: Unable to find a match: libgd-dev > > > > # yum install gd > > Last metadata expiration check: 0:00:51 ago on Thu 07 Jan 2021 08:50:20 PM > EST. > > Package gd-2.3.0-3.fc33.x86_64 is already installed. > > Dependencies resolved. > > Nothing to do. > > Complete! > > > > At this point I basically give up? What the heck? > > So I compiled the modules without all of these. Removed XSLT, removed > image filters, everything. The .so modules thus created of course don't do > much. When they're copied to the /etc/nginx/modules/ folder, and nginx > reloaded, they create an issue. 
> > > # systemctl status nginx.service > > > Jan 07 20:54:00 SERVER systemd[1]: Starting The nginx HTTP and reverse > proxy server... > Jan 07 20:54:00 SERVER nginx[39083]: nginx: [emerg] module > "/usr/share/nginx/modules/ngx_http_security_headers_module.so"> > Jan 07 20:54:00 SERVER nginx[39083]: nginx: configuration file > /etc/nginx/nginx.conf test failed > Jan 07 20:54:00 SERVER systemd[1]: nginx.service: Control process exited, > code=exited, status=1/FAILURE > Jan 07 20:54:00 SERVER systemd[1]: nginx.service: Failed with result > 'exit-code'. > Jan 07 20:54:00 SERVER systemd[1]: Failed to start The nginx HTTP and > reverse proxy server. > > > > This doesn't give any meaningful error. Nor does "journalctl -xe". > > Any suggestions to make this somewhat more sensible than this utterly > mediocre experience? > > Thanks. > > > > > > > > On Thu, Jan 7, 2021 at 1:35 AM Thomas Ward wrote: > >> You should, yes, to make sure your build as closely mirrors what is in >> the Fedora repos. >> >> >> Thomas >> >> >> On 1/6/21 11:19 PM, Phoenix Kiula wrote: >> >> Perfect. This is clear Thomas. Much appreciated...between Miguel's >> original pointer and this clarity from you I think it solves what I'm >> looking for. One last question: the `nginx -T` options...I'll add those to >> the ./configure command, yes? >> >> >> >> On Wed, Jan 6, 2021 at 10:55 PM Thomas Ward >> wrote: >> >>> This is where **manually compiling by hand** is the problem. You would >>> do the compilation in a separate directory **NOT** inside the space of the >>> system's control - usually I spawn new `/tmp` directories or destructable >>> directories in my home space. >>> >>> I'm not familiar with Fedora and the `dnf` command - but `dnf install` >>> installs the repositories-available-version of NGINX for Fedora's repos. >>> >>> The next steps you would take by hand are: >>> >>> (1) Install **all build dependencies and runtime dependencies** for >>> NGINX and the modules you're compiling dynamically. 
>>> >>> (2) Download the tarball to temporary space. >>> >>> (3) At the *very* least (though I suggest you go digging in the source >>> of Fedora's repos to get their build options, you can find them with `nginx >>> -T` output though) you need to do this: >>> >>> ./configure >>> --add-dynamic-module=/path/to/third/party/module/source/directory >>> make >>> >>> **This does not install nginx, this is the compiling of the binaries.** >>> >>> (4) Dig in the completed compile and find your .so file and put it in >>> /etc/nginx/modules (I believe that's where it is on your system, but I >>> can't validate that - again I'm not a Fedora user so I can't verify that's >>> exactly where you drop the module files themselves. >>> >>> >>> These're the *basic* steps - but again this will **not** install your >>> manually compiled nginx to overwrite what `dnf` installs - this simply >>> compiles everything and it's up to you to go digging to get the components >>> you need and put them where you need them to be for your system to >>> recognize them. >>> >>> >>> Thomas >>> On 1/6/21 10:47 PM, Phoenix Kiula wrote: >>> >>> Thank you Thomas. Much appreciate this, it sounds promising. Appreciate >>> your clarity. >>> >>> So if I: >>> >>> 1. Compile nginx via `dnf install nginx` and that becomes my system's >>> Nginx, installed usually in `/etc/nginx` >>> >>> 2. In a totally separate folder, say, `/usr/src`, I then download a >>> tarball of Nginx and compile it along with the dynamic modules -- which >>> will produce the .so files for said modules >>> >>> 3. Copy over the modules into the usual `/etc/nginx/modules` folder from >>> Step 1 >>> >>> >>> ....in this sequence of steps, how do I make sure that: >>> >>> >>> A. The compilation in Step 2 does not become my "system's nginx" (so >>> when I do an `nginx -v` at the command prompt it should be refer to the >>> nginx installed in Step 1 above, and *not* the one compiled via Step 2) >>> >>> B. 
The compile in Step 2 will use the "same libraries" that DNF used? In >>> the DNF version of life I didn't pick any libraries manually...DNF found >>> what was on my system. Will the manual compile not do the same? >>> >>> Many thanks! >>> >>> >>> >>> >>> On Wed, Jan 6, 2021 at 10:19 PM Thomas Ward >>> wrote: >>> >>>> I'm fairly familiar with the 'compiling process' for dynamic modules - >>>> the process is the same for NGINX Open Source as wel as NGINX Plus. >>>> >>>> You would need to compile the modules alongside NGINX and then harvest >>>> the compiled .so files and put them into corresponding locations on the >>>> system you want to load the dynamic modules. In Ubuntu, we do this (or at >>>> least, I do) by using the same OS and libraries as installed on the target >>>> system (as well as the same NGINX version). >>>> >>>> This being said, **compiling** NGINX is different than **installing** >>>> NGINX - you can *compile* the nginx version 1.18.0 with the dynamic modules >>>> and the same configuration as the Fedora version, and then **take the >>>> compiled module** and load it up in your installed nginx instance. >>>> Compiling NGINX to make the dynamic module does NOT require you to then >>>> install that NGINX version, provided that you match the `make` steps and >>>> installed/available libraries to those used in the original nginx compile >>>> done in Fedora. >>>> >>>> >>>> Thomas >>>> >>>> >>>> On 1/6/21 5:30 PM, Phoenix Kiula wrote: >>>> >>>> Thank you Miguel. But you misunderstood the question. This suggestion... >>>> >>>> >>>> >>>>> nginx blog as a great guide on it though >>>>> https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ >>>>> >>>>> >>>> >>>> >>>> ...misses the very first question in this thread: we cannot compile >>>> nginx from source on our server. At least not in a way that that compiled >>>> version would become the nginx installed in our *system*. 
We need to >>>> install nginx via the default Fedora dnf package manager, which at this >>>> time installs 1.18.0. >>>> >>>> Now, what I don't mind doing is to compile nginx in some self-contained >>>> folder somewhere, then use that compilation to create the .so or whatever >>>> the module file for that version is....if all of this module compiling does >>>> *not* affect the system-installed dnf version of nginx. Is this possible? >>>> >>>> If so, the instructions do not help with this. The first step in that >>>> official tutorial is to compile nginx and that compiled nginx then becomes >>>> the system's main nginx. It replaces whatever was installed via "dnf >>>> install nginx". Yes? >>>> >>>> Hope this makes sense. Have I correctly understood how nginx >>>> compilation works? Appreciate any pointers. >>>> >>>> Thank you. >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Jan 11 03:01:59 2021 From: nginx-forum at forum.nginx.org (rhand) Date: Sun, 10 Jan 2021 22:01:59 -0500 Subject: upstream timed out (110: Connection timed out) while reading response header from upstr In-Reply-To: <761d94df-83c9-4071-8b1c-ac6ad43856b7@f18g2000prf.googlegroups.com> References: <761d94df-83c9-4071-8b1c-ac6ad43856b7@f18g2000prf.googlegroups.com> Message-ID: <368c298aa063b3927fedaf14a4073d8e.NginxMailingListEnglish@forum.nginx.org> See the ma.ttias.be blog post https://ma.ttias.be/nginx-and-php-fpm-upstream-timed-out-failed-110-connection-timed-out-or-reset-by-peer-while-reading/ for some tips on debugging. It can be tough sometimes, though. In my Kubernetes / Docker case it was the wrong fastcgi_pass in the nginx config and the wrong container port.
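For illustration, that kind of mismatch looks roughly like the following nginx PHP-FPM block. This is only a sketch: the service name "php-fpm" and port 9000 are hypothetical placeholders, not values taken from the thread.

```nginx
# Sketch only: fastcgi_pass must name the PHP-FPM container/service and the
# port it actually listens on. Pointing it at the wrong name or port is one
# cause of "upstream timed out (110: Connection timed out)" as described above.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php-fpm:9000;  # hypothetical service name and port
}
```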
See https://imwz.io/docker-nginx-502-bad-gateway/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,30793,290430#msg-290430 From nginx-forum at forum.nginx.org Mon Jan 11 09:48:39 2021 From: nginx-forum at forum.nginx.org (iw2lsi) Date: Mon, 11 Jan 2021 04:48:39 -0500 Subject: reverse proxy: do I really have to store ssl certificates on the proxy ? Message-ID: Hi, I'm a newbie, so maybe this is a stupid question... sorry! I'm using a rPI to reverse proxy http services to several other rPI according to the domain and/or host names... now I'm switching to https and I wonder if I can keep the ssl certificates and keys on the destination machines or if I really have to put them on the machine that is managing the (reverse) proxy. thanks for any hints Giampaolo Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290433,290433#msg-290433 From nginx-forum at forum.nginx.org Mon Jan 11 17:32:18 2021 From: nginx-forum at forum.nginx.org (meauibzennlsfozbqd@niwghx.com) Date: Mon, 11 Jan 2021 12:32:18 -0500 Subject: anti virus exclusion Message-ID: hi, we are running nginx on linux; nginx runs as a load balancer and at high intensity. We're supposed to install AV software on the OS. Are there any recommended exclusions that need to be made in the AV to keep nginx working properly? thanks a lot! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290434,290434#msg-290434 From francis at daoine.org Mon Jan 11 19:13:20 2021 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Jan 2021 19:13:20 +0000 Subject: reverse proxy: do I really have to store ssl certificates on the proxy ? In-Reply-To: References: Message-ID: <20210111191320.GS23032@daoine.org> On Mon, Jan 11, 2021 at 04:48:39AM -0500, iw2lsi wrote: Hi there, > I'm using a rPI to reverse proxy http services to several other rPI > according to the domain and/or host names...
now I'm switching to https and > I wonder if I can keep the ssl certificates and keys on the destination > machines or if I really have to put them on the machine that is managing the > (reverse) proxy. "The thing that is terminating the ssl connection" needs to have the ssl certificate and key. The certificate is public and says "this is me"; access to the key is needed to convince a client that it really is me. In the common case, where your nginx does "http{}"-level reverse proxying, you need all of the certificates and keys on the front-facing nginx server. If it suits your model, you could instead do "stream{}"-level reverse proxying, using ssl_preread (http://nginx.org/r/ssl_preread) and the example on that page. In that case, the front-facing nginx listening on this port would not do any ssl termination, or anything related to http; it would just send the opaque https stream to whichever back-end servers you configure. So in that case, that nginx would not make use of certificates or keys. nginx would basically be a tcp-pass-through system, and the individual back-end servers would do all of the https side of things. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Jan 11 21:27:02 2021 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Jan 2021 21:27:02 +0000 Subject: Installing dynamic modules when Nginx itself is installed via yum/dnf (Linux) In-Reply-To: References: Message-ID: <20210111212702.GT23032@daoine.org> On Tue, Jan 05, 2021 at 07:27:20PM -0500, Phoenix Kiula wrote: Hi there, there seem to be a few different things clashing here, which combine to make it very awkward for you to build the dynamic modules that you want. I will try to explain the background as I see it, and hopefully point you at things you can do to get to a working system. Firstly - you only want to use software installed from a package. 
That is a sensible choice; it makes it much easier to re-create a server when the software comes from a known external source. Secondly - you only want to use packages provided by a specific third party -- your distribution. That's also sensible, but is a bit more restrictive, because... Thirdly - you want to use some software that your distribution does not make available. Here's where the conflict arises -- between your requirement#2 and requirement#3. They can't both happen, so you must choose which to break. You seem to be happy to self-compile some code that is not available from the distribution; but not to replace code that is available from the distribution. That's also a sensible choice. On the nginx side... Once upon a time, nginx was configured to be built with whatever modules were wanted, and was compiled statically with that set. The only way to add a module was to (re)compile the whole thing. The reliable way to get "the same" nginx but with an extra module, was to run "nginx -V" to find how this was configured, and then run the same command with an extra "--add-module=". Then in early 2016, "dynamic modules" were added to nginx -- some modules could be compiled as "shared object" files, and could be loaded into nginx at startup; so you could distribute one "main" binary, plus a handful of individual modules, and the user could decide which to include and which not to, in their nginx.conf. It turns out that the modules do need to know some things about the internal layout of the compiled nginx instance; and that layout can (and does) change depending on how nginx was configured to be built. So "dynamic modules" were good; but you could not reliably provide "a module for this version of nginx". 
You had to provide "a module for this version of nginx configured in this way", plus "a module for this version of nginx configured in that way"; and the number of possible ways was big enough that it was a headache to try to provide multiple binary versions of "the same" module. (A big thing was ssl -- the layout of some internal things were very different if ssl was or was not enabled. But other features could also change that layout.) The reliable way to build an extra module for this nginx, was to run "nginx -V" to find how this was configured, and then run the same command with an extra "--add-dynamic-module=". So, in late 2016, the "--with-compat" configure-time flag was added as an option; this would more-or-less fix the internal layout that modules needed to know about, to no longer vary based on the rest of the configure-time choices. With this, the nginx-internal layout would remain much more consistent -- basically, a "compat version number" could be changed whenever a known-breaking change would be made, and most changes would not affect that "compat version number". You could now distribute one binary module which could reasonably be expected to work with any configuration of a specific version of nginx; and there was also a reasonable chance that it would work with newer and older versions of nginx too, so long as this "compat" part was not changed. It does need that both nginx and the module are built with the same expectations of the internal layout, but that is relatively straightforward to ensure -- just use "--with-compat" at configure-time before building, and most of the work is done. 
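A minimal command sketch of that "--with-compat" flow follows; the version number, module directory and .so name here are placeholders, and the distribution's own configure options are omitted, so treat this as an outline rather than an exact recipe.

```sh
# Sketch only: build a dynamic module against a --with-compat nginx source
# tree, without ever installing this nginx over the packaged one.
tar xzf nginx-1.18.0.tar.gz
cd nginx-1.18.0
./configure --with-compat --add-dynamic-module=../some-module-source
make modules                     # compiles objs/*.so; note: no "make install"
cp objs/ngx_http_some_module.so /etc/nginx/modules/
```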
In that world, the way to build a module is the few-step recipe that was in the document referred to earlier: https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/ * get the nginx source code * get your module source code * configure nginx "--with-compat" and "--add-dynamic-module=" your module * "make modules" to build the module * copy the module .so file to wherever you want your pre-existing nginx to be able to find it All of the extra configure-options of the running nginx binary can be ignored, so long as it includes --with-compat. == In your specific case, I am not sure from your mails if your distribution included "--with-compat" or not. If it did, you should be able to just follow that recipe. If it did not, you will probably need to follow the recipe from 2016. You may be able to omit some of the "--with-" module parts, if you know that they do not affect the internal layout that affects dynamic modules. (How would you know that? Probably studying the source code would give hints; I suspect that not many people would be especially interested in investigating, since there has been a configure-time option to make it unnecessary for the past four years.) If your running nginx shows "--with-compat" in its "nginx -V" output, and if you follow exactly the recipe given, things have a better chance of working. If you get errors or unexpected output, then copy-pasting the commands that you ran and the output that you got, may help someone point at what could be done differently. If your running nginx does not show "--with-compat" in its "nginx -V" output, and you are using a well-known distribution, then maybe someone has already built some modules for your distribution; if you are happy to trust whatever external package repository you find instead of building the module yourself, that might be a way to proceed. You may be able, for example, to extract the files that you care about, in order to avoid changing the server repository configuration. 
You would take on the responsibility for updating things in that case, the same as if you had built it yourself. > Welcome any pointers to maintainable ways of installing so that a "dnf > update nginx" will not break the modules. That would require whoever compiled your nginx, and whoever compiles your extra modules, choosing to update in lockstep. That is unlikely, unless they are the same organization. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Jan 11 21:36:28 2021 From: nginx-forum at forum.nginx.org (iw2lsi) Date: Mon, 11 Jan 2021 16:36:28 -0500 Subject: reverse proxy: do I really have to store ssl certificates on the proxy ? In-Reply-To: <20210111191320.GS23032@daoine.org> References: <20210111191320.GS23032@daoine.org> Message-ID: <0feb92dc04f862ccae11a48e0f7cecd0.NginxMailingListEnglish@forum.nginx.org> Hi Francis, thanks a lot for your help... I'm trying the example reported here http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html but I'm getting an error: nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/sites-enabled/default.conf:16 Does it mean that the flag "--with-stream_ssl_preread_module" is not set or am I missing something else ? thanks again, Giampaolo Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290433,290446#msg-290446 From francis at daoine.org Mon Jan 11 23:49:18 2021 From: francis at daoine.org (Francis Daly) Date: Mon, 11 Jan 2021 23:49:18 +0000 Subject: reverse proxy: do I really have to store ssl certificates on the proxy ? 
In-Reply-To: <0feb92dc04f862ccae11a48e0f7cecd0.NginxMailingListEnglish@forum.nginx.org> References: <20210111191320.GS23032@daoine.org> <0feb92dc04f862ccae11a48e0f7cecd0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210111234918.GU23032@daoine.org> On Mon, Jan 11, 2021 at 04:36:28PM -0500, iw2lsi wrote: Hi there, > I'm trying the example reported here > http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html but I'm > getting an error: > > nginx: [emerg] "proxy_pass" directive is not allowed here in > /etc/nginx/sites-enabled/default.conf:16 "proxy_pass" must be inside "server{}", which must be inside "stream{}". "map" also lives inside "stream{}". The full set of stream modules can be seen at http://nginx.org/en/docs/; scroll down to the "ngx_stream_*" section of the module list. Then you can click in to the individual pages to see the context of each directive. Alternatively, start from http://nginx.org/en/docs/dirindex.html and look for the directive there; in this case you'll want "proxy_pass (ngx_stream_proxy_module)", which should show what is needed. If that does not show you the answer, copy-paste the config (redacted appropriately) and you may get more help here. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jan 12 09:10:03 2021 From: nginx-forum at forum.nginx.org (sanjay9999) Date: Tue, 12 Jan 2021 04:10:03 -0500 Subject: Request Method Using Mixed case letters. Message-ID: <33ac96158d191f2d9a1fe9156dcd2d37.NginxMailingListEnglish@forum.nginx.org> Hi, I am using mixed case letters in request methods. nginx finalizes the http request with a 400 because, as per the standard, the Request Method is case sensitive. However, it shows an html response with the last line showing "nginx". Our security team says "you should not disclose web server details in the response for a request". We have implemented a solution to hide the server name and version.
However, in this case control does not reach any of our server/location blocks where I could override the 400 error. Please help me out. Regards Sanjay B Jain Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290451,290451#msg-290451 From nginx-forum at forum.nginx.org Tue Jan 12 09:21:17 2021 From: nginx-forum at forum.nginx.org (sanjay9999) Date: Tue, 12 Jan 2021 04:21:17 -0500 Subject: Request Method Using Mixed case letters. In-Reply-To: <33ac96158d191f2d9a1fe9156dcd2d37.NginxMailingListEnglish@forum.nginx.org> References: <33ac96158d191f2d9a1fe9156dcd2d37.NginxMailingListEnglish@forum.nginx.org> Message-ID: <77b682f513a3e58c828c16459d4889e4.NginxMailingListEnglish@forum.nginx.org> Example used in the test case: request method = "POSTsss". I would like to allow only the GET / POST / DELETE methods, and otherwise send a 501 response. if ($request_method !~* ^(GET|DELETE|POST)$ ) { return 501 '{ "ver": "1.1.2", "txnid": "", "timestamp": "", "errorCode": "NotImplemented", "errorMsg": "Request Method is not implemented"}'; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290451,290452#msg-290452 From mdounin at mdounin.ru Tue Jan 12 13:37:22 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Jan 2021 16:37:22 +0300 Subject: Request Method Using Mixed case letters. In-Reply-To: <33ac96158d191f2d9a1fe9156dcd2d37.NginxMailingListEnglish@forum.nginx.org> References: <33ac96158d191f2d9a1fe9156dcd2d37.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210112133722.GX1147@mdounin.ru> Hello! On Tue, Jan 12, 2021 at 04:10:03AM -0500, sanjay9999 wrote: > Hi, > I am using mixed case letters in request methods. nginx finalized http > request to 400 because as per the standard Request Method is case sensitive. > However it shows html response with last line showing "nginx". > > Our security team says "you should not disclose web server details in the > response for a request" > We have implemented solution to hide server name and version.
> > However, in this case control does not reach any of out server/location > block . so that I can override the 400 errror. Consider reading these tickets: https://trac.nginx.org/nginx/ticket/936 https://trac.nginx.org/nginx/ticket/1644 In particular, consider showing this Wikipedia article to your "security team": https://en.wikipedia.org/wiki/Security_through_obscurity If you really want to hide "nginx" regardless of what's written in the above links, you can do so using the server_tokens directive (http://nginx.org/r/server_tokens): server_tokens ""; This only works in the commercial version though. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Tue Jan 12 13:59:44 2021 From: nginx-forum at forum.nginx.org (Flinou) Date: Tue, 12 Jan 2021 08:59:44 -0500 Subject: Perl module logs located in wrong file Message-ID: <2f5da8532e89e96ec5923e5d6aead7ea.NginxMailingListEnglish@forum.nginx.org> Hello ! Here is the output of nginx -V : nginx version: nginx/1.19.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled configure arguments: --prefix=/opt/nginx/1.19.2 --with-cc-opt='-I /logiciels/openssl/1.1.1/include/ -fstack-protector-strong' --with-cc='gcc -L/logiciels/openssl/1.1.1/lib64 -Wl,-rpath,/logiciels/openssl/1.1.1/lib64' --with-compat --with-file-aio --with-http_flv_module --with-http_stub_status_module --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module --with-pcre --with-debug --with-http_realip_module --with-http_addition_module --with-http_mp4_module --with-http_dav_module --with-http_random_index_module --with-http_degradation_module --with-http_perl_module --with-perl_modules_path=/opt/nginx/1.19.2/modules/perl/ --with-mail --with-mail_ssl_module --with-stream_ssl_module --with-http_secure_link_module --with-http_sub_module --with-http_auth_request_module --with-ipv6 --with-http_v2_module --with-stream --with-threads 
--with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_slice_module --add-dynamic-module=./ngx_devel_kit-0.3.1 --add-dynamic-module=./lua-nginx-module-0.10.15 --add-dynamic-module=./headers-more-nginx-module-0.33 --add-module=./ngx_http_auth_ldap_module --with-ld-opt=-Wl,-rpath,/opt/luajit/lib Here is my nginx.conf : user w3user webgrp; worker_processes 1; load_module ./modules/ngx_http_headers_more_filter_module.so; error_log /var/projects/instance/nginx_1.19/logs/error.log debug; pid /var/projects/instance/nginx_1.19/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; perl_modules /appli/projects/instance/nginx_1.19/conf/perl/modules/; perl_modules ./modules/perl/; perl_require test.pm; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/projects/instance/nginx_1.19/logs/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; proxy_cache_path //var/projects/instance/nginx_1.19/cache/ levels=1:2 keys_zone=instance_cache:10m max_size=10g inactive=60m use_temp_path=off; include /appli/projects/instance/nginx_1.19/conf//instance.conf; } I am witnessing a weird behaviour. All logs are well written down in /var/projects/instance/nginx_1.19/logs/error.log (as provided in nginx.conf) excepting logs from perl_module which are written in /opt/nginx/1.19.2/logs/error.log. For example (when nginx failed to locate perl module) $ cat /opt/nginx/1.19.2/logs/error.log 2021/01/12 10:28:27 [emerg] 48992#48992: require_pv("test.pm") failed: "Can't locate test.pm in @INC (@INC contains: /appli/projects/instance/nginx_1.19/perl/modules/ /opt/nginx/1.19.2/./modules/perl/ /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) 
at (eval 1) line 1." or when start up went fine : $ cat /opt/nginx/1.19.2/logs/error.log 2021/01/12 11:40:55 [notice] 53486#53486: signal process started In other hand, remaining logs can be found in /var/projects/instance/nginx_1.19/logs/error.log. This behaviour does not reproduce when I start nginx without perl module directives. Any ideas of what is going on here ? Thank you ! Antoine Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290458,290458#msg-290458 From me at nanaya.pro Tue Jan 12 14:14:50 2021 From: me at nanaya.pro (nanaya) Date: Tue, 12 Jan 2021 23:14:50 +0900 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies Message-ID: Should there be warning in documentation on usage of $proxy_add_x_forwarded_for for X-Forwarded-For proxy header on edge proxies? I keep seeing config examples with proxy settings like this: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; Which doesn't make sense on edge servers as there's no way to trust the client-provided value. At best it just adds unnecessary complexity trying to figure out the last "trustworthy" entry. The correct value should be just $remote_addr (and thus drop client-provided values). I think $proxy_add_x_forwarded_for should only be used for proxies located behind another proxy. (or someone please correct me on this) From mdounin at mdounin.ru Tue Jan 12 17:22:51 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Jan 2021 20:22:51 +0300 Subject: Perl module logs located in wrong file In-Reply-To: <2f5da8532e89e96ec5923e5d6aead7ea.NginxMailingListEnglish@forum.nginx.org> References: <2f5da8532e89e96ec5923e5d6aead7ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210112172251.GY1147@mdounin.ru> Hello! On Tue, Jan 12, 2021 at 08:59:44AM -0500, Flinou wrote: > Hello ! 
> > Here is the output of nginx -V : > > > nginx version: nginx/1.19.2 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) > built with OpenSSL 1.1.1g 21 Apr 2020 > TLS SNI support enabled > configure arguments: --prefix=/opt/nginx/1.19.2 --with-cc-opt='-I > /logiciels/openssl/1.1.1/include/ -fstack-protector-strong' --with-cc='gcc > -L/logiciels/openssl/1.1.1/lib64 -Wl,-rpath,/logiciels/openssl/1.1.1/lib64' > --with-compat --with-file-aio --with-http_flv_module > --with-http_stub_status_module --with-stream_realip_module > --with-stream_ssl_module --with-stream_ssl_preread_module > --with-http_ssl_module --with-pcre --with-debug --with-http_realip_module > --with-http_addition_module --with-http_mp4_module --with-http_dav_module > --with-http_random_index_module --with-http_degradation_module > --with-http_perl_module > --with-perl_modules_path=/opt/nginx/1.19.2/modules/perl/ --with-mail > --with-mail_ssl_module --with-stream_ssl_module > --with-http_secure_link_module --with-http_sub_module > --with-http_auth_request_module --with-ipv6 --with-http_v2_module > --with-stream --with-threads --with-http_gunzip_module > --with-http_gzip_static_module --with-http_stub_status_module > --with-http_slice_module --add-dynamic-module=./ngx_devel_kit-0.3.1 > --add-dynamic-module=./lua-nginx-module-0.10.15 > --add-dynamic-module=./headers-more-nginx-module-0.33 > --add-module=./ngx_http_auth_ldap_module > --with-ld-opt=-Wl,-rpath,/opt/luajit/lib > > Here is my nginx.conf : > > > user w3user webgrp; > worker_processes 1; > > load_module ./modules/ngx_http_headers_more_filter_module.so; > error_log /var/projects/instance/nginx_1.19/logs/error.log debug; > > pid /var/projects/instance/nginx_1.19/run/nginx.pid; > > > events { > worker_connections 1024; > } > > > http { > include mime.types; > default_type application/octet-stream; > perl_modules /appli/projects/instance/nginx_1.19/conf/perl/modules/; > perl_modules ./modules/perl/; > perl_require test.pm; > > #log_format main 
'$remote_addr - $remote_user [$time_local] "$request" > ' > # '$status $body_bytes_sent "$http_referer" ' > # '"$http_user_agent" "$http_x_forwarded_for"'; > > access_log /var/projects/instance/nginx_1.19/logs/access.log; > > sendfile on; > #tcp_nopush on; > > #keepalive_timeout 0; > keepalive_timeout 65; > > #gzip on; > proxy_cache_path //var/projects/instance/nginx_1.19/cache/ levels=1:2 > keys_zone=instance_cache:10m max_size=10g inactive=60m use_temp_path=off; > include /appli/projects/instance/nginx_1.19/conf//instance.conf; > > } > > > I am witnessing a weird behaviour. > All logs are well written down in > /var/projects/instance/nginx_1.19/logs/error.log (as provided in nginx.conf) > excepting logs from perl_module which are written in > /opt/nginx/1.19.2/logs/error.log. > > For example (when nginx failed to locate perl module) > > $ cat /opt/nginx/1.19.2/logs/error.log > 2021/01/12 10:28:27 [emerg] 48992#48992: require_pv("test.pm") failed: > "Can't locate test.pm in @INC (@INC contains: > /appli/projects/instance/nginx_1.19/perl/modules/ > /opt/nginx/1.19.2/./modules/perl/ /usr/local/lib64/perl5 > /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl > /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at (eval > 1) line 1." > > or when start up went fine : > > $ cat /opt/nginx/1.19.2/logs/error.log > 2021/01/12 11:40:55 [notice] 53486#53486: signal process started > > In other hand, remaining logs can be found in > /var/projects/instance/nginx_1.19/logs/error.log. > This behaviour does not reproduce when I start nginx without perl module > directives. > > Any ideas of what is going on here ? Logs defined in the configuration are used once the configuration is parsed/loaded. Errors which happen during configuration parsing/loading, such as the one from the perl module, are written to stderr and to the complied-in error log. In nginx 1.19.5 or newer, this compiled-in error log can be redefined using the "-e" command line option. 
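With nginx 1.19.5 or newer, that startup-time error log can be redirected on the command line before the configuration is parsed; a sketch (the path here is illustrative):

```shell
# Errors during configuration parsing go to this file instead of the
# compiled-in --prefix log; the special value "stderr" is also accepted.
nginx -e /var/projects/instance/nginx_1.19/logs/startup-error.log
```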
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jan 12 17:46:45 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 Jan 2021 20:46:45 +0300 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies In-Reply-To: References: Message-ID: <20210112174645.GZ1147@mdounin.ru> Hello! On Tue, Jan 12, 2021 at 11:14:50PM +0900, nanaya wrote: > Should there be warning in documentation on usage of $proxy_add_x_forwarded_for for X-Forwarded-For proxy header on edge proxies? > > I keep seeing config examples with proxy settings like this: > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > Which doesn't make sense on edge servers as there's no way to trust the client-provided value. At best it just adds unnecessary complexity trying to figure out the last "trustworthy" entry. > > The correct value should be just $remote_addr (and thus drop client-provided values). > > I think $proxy_add_x_forwarded_for should only be used for proxies located behind another proxy. > > (or someone please correct me on this) Let me be someone. The X-Forwarded-For header is expected to contain multiple addresses, with the last one being from the last proxy. It is up to the reader of the header to decide whether or not to trust particular values in it. For example, in the realip module nginx provides set_real_ip_from and real_ip_recursive directives to configure which addresses to trust (see http://nginx.org/r/set_real_ip_from and http://nginx.org/r/real_ip_recursive). Similarly, in the geo module there are "proxy" and "proxy_recursive" parameters, and in the geoip module there are "geoip_proxy" and "geoip_proxy_recursive" directives. In some cases it might be a good idea to trust X-Forwarded-For values provided by clients: for example, there are some well-known public proxies, such as Opera Mini proxies. And it might be a good idea to trust almost everything if you are trying to extract some non-essential details, such as best-guess geoinformation.
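As a sketch, the realip directives mentioned above combine like this on a backend sitting behind a trusted edge proxy (the trusted range is illustrative):

```nginx
# Replace $remote_addr with the client address taken from X-Forwarded-For,
# but only believe the header when the connection - and, recursively, each
# address in the chain - comes from a proxy we trust.
set_real_ip_from  10.0.0.0/8;        # illustrative trusted proxy range
real_ip_header    X-Forwarded-For;
real_ip_recursive on;
```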
And it is always a good idea to preserve X-Forwarded-For provided by client, if any. In particular, it can be used in abuse reports and various investigations. If you want to use something without extra complexity, consider using X-Real-IP header instead, which is expected to contain only one client address as set by your edge/frontend servers. -- Maxim Dounin http://mdounin.ru/ From me at nanaya.pro Tue Jan 12 21:50:47 2021 From: me at nanaya.pro (nanaya) Date: Wed, 13 Jan 2021 06:50:47 +0900 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies In-Reply-To: <20210112174645.GZ1147@mdounin.ru> References: <20210112174645.GZ1147@mdounin.ru> Message-ID: On Wed, Jan 13, 2021, at 02:46, Maxim Dounin wrote: > The X-Forwarded-For is expected to contain multiple addresses, with > the last one being from the last proxy. It is up to the reader of > the header to trust or not particular values from the header. > > For example, in the realip module nginx provides set_real_ip_from > and real_ip_recursive directives to configure which addresses to > trust (see http://nginx.org/r/set_real_ip_from and > http://nginx.org/r/real_ip_recursive). Similarly, in the geo > module there are "proxy" and "proxy_recursive" parameters, and in > the geoip module there are "geoip_proxy" and > "geoip_proxy_recursive" directives. > > In some cases it might be a good idea to trust X-Forwarded-For > values provided by clients: for example, the are some well-known > public proxies, such as Opera Mini proxies. And it might be a > good idea to trust almost everything if you are trying to extract > some non-essential details, such as best-guess geoinformation. > > And it is always a good idea to preserve X-Forwarded-For provided > by client, if any. In particular, it can be used in abuse reports > and various investigations. 
> > If you want to use something without extra complexity, consider > using X-Real-IP header instead, which is expected to contain only > one client address as set by your edge/frontend servers. > Is it not better to just handle all of those at the outermost proxy (with set_real_ip_from etc) and only pass the "sanitized" $remote_addr value to the upstream? At least for simple config, similar to the default REMOTE_ADDR in fastcgi_params etc. It seems like a lot of potential points of failure trying to pass the value around. And people sharing this possibly dangerous config around without warning of its implications isn't helping, I think. I guess X-Real-IP could work although I don't remember seeing it used by anything but nginx. And then I think there have been a bunch of problems caused by applications blindly trusting X-Forwarded-For which usually ends up with stripping everything but the last non-private ip by default - essentially a more complex version of outermost proxy passing $remote_addr for that header. From nginx-forum at forum.nginx.org Wed Jan 13 06:04:26 2021 From: nginx-forum at forum.nginx.org (sanjay9999) Date: Wed, 13 Jan 2021 01:04:26 -0500 Subject: Request Method Using Mixed case letters. In-Reply-To: <20210112133722.GX1147@mdounin.ru> References: <20210112133722.GX1147@mdounin.ru> Message-ID: <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> Thanks for the update. I have already taken care to hide the "nginx". With CAPITAL letters, my testcase using "POSTSSS" for request_method works fine. However, for mixed-case and lowercase, the nginx default rule applies and control does not reach my server block, hence I end up getting a 400 error with "nginx" server name in the HTML response.
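One subtlety in the if-block quoted earlier in this thread: `!~*` is a case-insensitive negative match, so even if a lowercase method like "get" reached the block, it would be allowed through; a case-sensitive `!~` rejects everything except the exact uppercase methods. The regex behaviour can be checked outside nginx (plain Python re; PCRE semantics are the same for this pattern):

```python
import re

PATTERN = r"^(GET|DELETE|POST)$"

def rejected_case_insensitive(method):
    # models: if ($request_method !~* "^(GET|DELETE|POST)$") { return 501; }
    return re.fullmatch(PATTERN, method, re.IGNORECASE) is None

def rejected_case_sensitive(method):
    # models: if ($request_method !~ "^(GET|DELETE|POST)$") { return 501; }
    return re.fullmatch(PATTERN, method) is None
```

Note this only matters for requests that reach the location at all; methods nginx itself refuses to parse are rejected with 400 before any server/location logic runs.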
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290451,290465#msg-290465 From nginx-forum at forum.nginx.org Wed Jan 13 06:07:23 2021 From: nginx-forum at forum.nginx.org (sanjay9999) Date: Wed, 13 Jan 2021 01:07:23 -0500 Subject: Request Method Using Mixed case letters. In-Reply-To: <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> References: <20210112133722.GX1147@mdounin.ru> <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <870991aca51262e09c77dd75b36dba32.NginxMailingListEnglish@forum.nginx.org> I would like to allow GET / POST / DELETE methods only. otherwise send 501 response. if ($request_method !~* ^(GET|DELETE|POST)$ ) { return 501 '{ "ver": "1.1.2", "txnid": "", "timestamp": "", "errorCode": "NotImplemented", "errorMsg": "Request Method is not implemented"}'; } I am using mixed case letters ( "POSTsss" for request method ) in request methods. nginx finalized http request to 400 because as per the standard Request Method is case sensitive. However it shows HTML response with last line showing "nginx". Our security team says "you should not disclose web server details in the response for a request" We have implemented solution to hide server name and version. However, in this case control does not reach any of our server/location block . so that I can override the 400 error. Please help it out. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290451,290466#msg-290466 From mdounin at mdounin.ru Wed Jan 13 13:27:44 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jan 2021 16:27:44 +0300 Subject: Request Method Using Mixed case letters. In-Reply-To: <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> References: <20210112133722.GX1147@mdounin.ru> <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210113132744.GB1147@mdounin.ru> Hello! 
On Wed, Jan 13, 2021 at 01:04:26AM -0500, sanjay9999 wrote: > Thanks for the update. > I have already taken care to hide the "nginx". The links I've provided explain why you shouldn't do this. In particular, because this has nothing to do with security, and because it is an easy way to say "thanks" to the developers, including me. > With CAPITAL letters, my testcase using "POSTSSS" for request_method, works > fine.However, for mixed-case and small-case , nginx default rule applies and > control does not reach my server block. hence I end up getting 400 error > with "nginx" server name in html response. Trying to hide "nginx" everywhere, including response headers and error pages, will at least require 3rd party modules to do so, as well as non-trivial and error prone error_page configuration. I would not recommend doing this. If you insist on not saying "thanks", the most simple available option is to use 'server_tokens "";' as recommended by the previous message (and available in the commercial version). -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jan 13 13:53:18 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jan 2021 16:53:18 +0300 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies In-Reply-To: References: <20210112174645.GZ1147@mdounin.ru> Message-ID: <20210113135318.GC1147@mdounin.ru> Hello! On Wed, Jan 13, 2021 at 06:50:47AM +0900, nanaya wrote: > On Wed, Jan 13, 2021, at 02:46, Maxim Dounin wrote: > > The X-Forwarded-For is expected to contain multiple addresses, with > > the last one being from the last proxy. It is up to the reader of > > the header to trust or not particular values from the header. > > > > For example, in the realip module nginx provides set_real_ip_from > > and real_ip_recursive directives to configure which addresses to > > trust (see http://nginx.org/r/set_real_ip_from and > > http://nginx.org/r/real_ip_recursive). 
Similarly, in the geo > > module there are "proxy" and "proxy_recursive" parameters, and in > > the geoip module there are "geoip_proxy" and > > "geoip_proxy_recursive" directives. > > > > In some cases it might be a good idea to trust X-Forwarded-For > > values provided by clients: for example, the are some well-known > > public proxies, such as Opera Mini proxies. And it might be a > > good idea to trust almost everything if you are trying to extract > > some non-essential details, such as best-guess geoinformation. > > > > And it is always a good idea to preserve X-Forwarded-For provided > > by client, if any. In particular, it can be used in abuse reports > > and various investigations. > > > > If you want to use something without extra complexity, consider > > using X-Real-IP header instead, which is expected to contain only > > one client address as set by your edge/frontend servers. > > > > Is it not better to just handle all of those at the outermost > proxy (with set_real_ip_from etc) and only pass the "sanitized" > $remote_addr value to the upstream? At least for simple config, > similar to the default REMOTE_ADDR in fastcgi_params etc. Consider an application which needs both a trusted address and best-guess geoinformation, as well as some data for abuse reports. The only option is to preserve the X-Forwarded-For received from the client. > It seems like a lot of potential point of failures trying to > pass the value around. And people sharing this possibly > dangerous config around without warning of its implication isn't > helping, I think. It's not a "dangerous config", it's incorrect usage of X-Forwarded-For which might be dangerous. In the simplest configuration with a single server the X-Forwarded-For header comes directly from the client, without anything added by nginx - and this has exactly the same implications. > I guess X-Real-IP could work although I don't remember seeing it > used by anything but nginx.
And then I think there have been a > bunch of problems caused by applications blindly trusting > X-Forwarded-For which usually ends up with stripping everything > but the last non-private ip by default - essentially a more > complex version of outermost proxy passing $remote_addr for that > header. While X-Forwarded-For is often misused by applications and incorrect configurations blindly trusting addresses in it, removing the header is going to destroy the information available for well-written applications. While it might be a good idea to remove the header in your particular use case - if you are sure enough that your application doesn't use it - this is certainly not how things should be configured by default. -- Maxim Dounin http://mdounin.ru/ From jfs.world at gmail.com Wed Jan 13 14:00:58 2021 From: jfs.world at gmail.com (Jeffrey 'jf' Lim) Date: Wed, 13 Jan 2021 22:00:58 +0800 Subject: Request Method Using Mixed case letters. In-Reply-To: <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> References: <20210112133722.GX1147@mdounin.ru> <9f5ba6397a0740ff4fa3150e6a325f7d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Wed, Jan 13, 2021 at 2:04 PM sanjay9999 wrote: > > Thanks for the update. > I have already taken care to hide the "nginx". > > With CAPITAL letters, my testcase using "POSTSSS" for request_method, works > fine.However, for mixed-case and small-case , nginx default rule applies and > control does not reach my server block. hence I end up getting 400 error > with "nginx" server name in html response. > I'm curious. Could you share your server block config? As for the server name, I've thought about it in the past and initially saw it as an issue. However, as an open source supporter and user I can definitely understand the wishes of the nginx team and do want to advertise for the project as well. On the practical side, IMO there are more things you need to worry about than simply hiding the name.
If somebody can compromise your system based on knowing that it's nginx, you either have a config issue, or they are smart enough to detect nginx even without you advertising the name. -jf From me at nanaya.pro Wed Jan 13 14:39:13 2021 From: me at nanaya.pro (nanaya) Date: Wed, 13 Jan 2021 23:39:13 +0900 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies In-Reply-To: <20210113135318.GC1147@mdounin.ru> References: <20210112174645.GZ1147@mdounin.ru> <20210113135318.GC1147@mdounin.ru> Message-ID: On Wed, Jan 13, 2021, at 22:53, Maxim Dounin wrote: > It's not "dangerous config", it's incorrect usage of > X-Forwarded-For which might be dengerous. In the most simply > configuration with a single server the X-Forwarded-For header > comes directly from the client, without anything added by nginx - > and this has exactly the same implications. > Unfortunately, at least in rails, it's actually dangerous passing the value as is: https://github.com/rails/rails/blob/3f4fde4d9f804140be8304b524792c052e3d1024/actionpack/lib/action_dispatch/middleware/remote_ip.rb#L21 At least they have added a bunch of check to make it less dangerous even when using $proxy_add_x_forwarded_for (essentially works just like $remote_addr in default config). > While X-Forwarded-For is often misused by applications and > incorrect configurations by blindly trusting addresses in it, > removing the header is going to make destroy the information > available for well-written applications. While you it might be a > good idea to remove the header in your particular use case - if > you are sure enough your applications doesn't use it - this is > certainly not how things should be configured by default. > Yeah, I'm not going to trust X-Forwarded-For sent by client. Maybe it's just me. $remote_addr to me is their geolocation. Anything more "sophisticated" just looked like a potential of failure. And I don't want to have to worry if my $random_app parses the X-Forwarded-For sanely. 
At most I'd just log it at the edge server. Look at this wonderful function by wordpress (thankfully they do aware it's "unsafe"): https://github.com/WordPress/WordPress/blob/c5d1214607be128c99dd27589a58cc5a1d20d522/wp-admin/includes/class-wp-community-events.php#L262 Semi unrelated but I can't find this list of IPs used by Opera Mini proxies. Do you know where I can find it? From mdounin at mdounin.ru Wed Jan 13 17:45:37 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 13 Jan 2021 20:45:37 +0300 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies In-Reply-To: References: <20210112174645.GZ1147@mdounin.ru> <20210113135318.GC1147@mdounin.ru> Message-ID: <20210113174537.GD1147@mdounin.ru> Hello! On Wed, Jan 13, 2021 at 11:39:13PM +0900, nanaya wrote: > On Wed, Jan 13, 2021, at 22:53, Maxim Dounin wrote: > > It's not "dangerous config", it's incorrect usage of > > X-Forwarded-For which might be dengerous. In the most simply > > configuration with a single server the X-Forwarded-For header > > comes directly from the client, without anything added by nginx - > > and this has exactly the same implications. > > > > Unfortunately, at least in rails, it's actually dangerous passing the value as is: > > https://github.com/rails/rails/blob/3f4fde4d9f804140be8304b524792c052e3d1024/actionpack/lib/action_dispatch/middleware/remote_ip.rb#L21 > > At least they have added a bunch of check to make it less > dangerous even when using $proxy_add_x_forwarded_for > (essentially works just like $remote_addr in default config). This code seems to imply that the code is running behind a trusted proxy, and both X-Forwarded-For and Client-IP headers are actually updated by the proxy. And it uses a list of trusted proxies it is going to trust to switch to a previous address, so I would say the code is safe when used properly. And it's fine with $proxy_add_x_forwarded_for, too. Another question is how often it is used properly. 
Given it requires update of two headers, at least one of them being very rare, I would assume the answer is "almost never". But again, it has nothing to do with $proxy_add_x_forwarded_for. Rather, properly using "proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;" will make attacks impossible. > > While X-Forwarded-For is often misused by applications and > > incorrect configurations by blindly trusting addresses in it, > > removing the header is going to make destroy the information > > available for well-written applications. While you it might be a > > good idea to remove the header in your particular use case - if > > you are sure enough your applications doesn't use it - this is > > certainly not how things should be configured by default. > > > > Yeah, I'm not going to trust X-Forwarded-For sent by client. > Maybe it's just me. $remote_addr to me is their geolocation. > Anything more "sophisticated" just looked like a potential of > failure. Again: there is a difference between trusting X-Forwarded-For sent by client and using it in the cases where trust isn't that important. Certainly no one should blindly trust X-Forwarded-For. But anyone can use it properly, and stripping it as you suggest is just wrong in most cases. > And I don't want to have to worry if my $random_app parses the > X-Forwarded-For sanely. At most I'd just log it at the edge > server. That's your choice: instead of fixing vulnerabilities in the apps you prefer to drop headers which might be used to exploit these vulnerabilities, dropping also many legitimate uses of the headers. > Look at this wonderful function by wordpress (thankfully they do > aware it's "unsafe"): > > https://github.com/WordPress/WordPress/blob/c5d1214607be128c99dd27589a58cc5a1d20d522/wp-admin/includes/class-wp-community-events.php#L262 That's exactly what I'm talking about: as long as you don't need trusted address, X-Forwarded-For can be used to extract some valuable information. 
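The distinction between trusting the header and merely using it can be made concrete. A minimal Python sketch of the recursive trusted-proxy walk described above - the same idea as real_ip_recursive, with illustrative trusted ranges, not nginx's actual implementation:

```python
import ipaddress

# Illustrative: our own edge proxies and loopback.
TRUSTED = [ipaddress.ip_network("10.0.0.0/8"),
           ipaddress.ip_network("127.0.0.0/8")]

def client_ip(remote_addr, xff_header):
    """Best-guess client address: start from the directly connected peer
    and walk the X-Forwarded-For chain right to left, advancing only
    while the current candidate is a trusted proxy."""
    chain = [a.strip() for a in xff_header.split(",") if a.strip()] if xff_header else []
    candidate = remote_addr
    for addr in reversed(chain):
        try:
            ip = ipaddress.ip_address(candidate)
        except ValueError:
            break  # malformed entry: stop, keep last good candidate
        if any(ip in net for net in TRUSTED):
            candidate = addr  # candidate is a proxy we trust; step left
        else:
            break  # untrusted hop: everything further left is hearsay
    return candidate
```

A spoofed entry a client prepends (e.g. "6.6.6.6, real-client") is never reached, because the walk stops at the first untrusted address.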
Further, note that even with X-Forwarded-For set to $remote_addr, client still can provide arbitrary address to the code via the Client-IP header. So, if the code in question is somehow used to extract an addresses where trusted address is actually needed, your using $remote_addr instead of $proxy_add_x_forwarded_for won't save you from being vulnerable. > Semi unrelated but I can't find this list of IPs used by Opera > Mini proxies. Do you know where I can find it? A good list of trusted proxies is maintained by Wikimedia, see here: https://meta.wikimedia.org/wiki/XFF_project#Trusted_XFF_list -- Maxim Dounin http://mdounin.ru/ From me at nanaya.pro Wed Jan 13 19:00:54 2021 From: me at nanaya.pro (nanaya) Date: Thu, 14 Jan 2021 04:00:54 +0900 Subject: Usage of $proxy_add_x_forwarded_for on edge proxies In-Reply-To: <20210113174537.GD1147@mdounin.ru> References: <20210112174645.GZ1147@mdounin.ru> <20210113135318.GC1147@mdounin.ru> <20210113174537.GD1147@mdounin.ru> Message-ID: <2dcb1081-829e-45ac-8e24-31d63fb7227f@www.fastmail.com> On Thu, Jan 14, 2021, at 02:45, Maxim Dounin wrote: > > Another question is how often it is used properly. Given it > requires update of two headers, at least one of them being very > rare, I would assume the answer is "almost never". But again, it > has nothing to do with $proxy_add_x_forwarded_for. Rather, > properly using "proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;" > will make attacks impossible. > > > > While X-Forwarded-For is often misused by applications and > > > incorrect configurations by blindly trusting addresses in it, > > > removing the header is going to make destroy the information > > > available for well-written applications. While you it might be a > > > good idea to remove the header in your particular use case - if > > > you are sure enough your applications doesn't use it - this is > > > certainly not how things should be configured by default. 
> > > > > > > Yeah, I'm not going to trust X-Forwarded-For sent by client. > > Maybe it's just me. $remote_addr to me is their geolocation. > > Anything more "sophisticated" just looked like a potential of > > failure. > > Again: there is a difference between trusting X-Forwarded-For sent > by client and using it in the cases where trust isn't that > important. Certainly no one should blindly trust X-Forwarded-For. > But anyone can use it properly, and stripping it as you suggest is > just wrong in most cases. > > > And I don't want to have to worry if my $random_app parses the > > X-Forwarded-For sanely. At most I'd just log it at the edge > > server. > > That's your choice: instead of fixing vulnerabilities in the apps > you prefer to drop headers which might be used to exploit these > vulnerabilities, dropping also many legitimate uses of the headers. > > > Look at this wonderful function by wordpress (thankfully they do > > aware it's "unsafe"): > > > > https://github.com/WordPress/WordPress/blob/c5d1214607be128c99dd27589a58cc5a1d20d522/wp-admin/includes/class-wp-community-events.php#L262 > > That's exactly what I'm talking about: as long as you don't need > trusted address, X-Forwarded-For can be used to extract some > valuable information. > I'm not sure about this exact example though. It looks like it'll lump everyone with 192.168.0.0/24 (or other private network) in same network "id" which would be rather confusing instead. > Further, note that even with X-Forwarded-For set to $remote_addr, > client still can provide arbitrary address to the code via the > Client-IP header. So, if the code in question is somehow used to > extract an addresses where trusted address is actually needed, > your using $remote_addr instead of $proxy_add_x_forwarded_for > won't save you from being vulnerable. > So I think this boils down to how much one trust the apps running behind the proxy. 
I tend to err on the safe side: I don't see much benefit in passing the value from the client, and it has maintenance overhead. Not to mention if the app has multiple components (websocket server etc) written in different languages, they probably need to reimplement the parsing on all of them (hopefully with the same logic and config). From what I gather the benefits are:
- potentially being able to see the "actual" client ip instead of their proxy
- an audit log
What else am I missing? Personally, the list above seems to be better done on the edge proxy itself. I still stand by the default being $remote_addr; $proxy_add_x_forwarded_for should only be used with extra care. Anything that handles X-Forwarded-For "properly" would know it and can be passed the client-given value, but everything else should just receive $remote_addr. (also good point on Client-Ip, I do have it overridden as well. It seems to be even less documented for some reason? Know any documentation on it?) From nginx-forum at forum.nginx.org Wed Jan 13 22:31:55 2021 From: nginx-forum at forum.nginx.org (rveerman) Date: Wed, 13 Jan 2021 17:31:55 -0500 Subject: how would i host more than 2 sites on the same port and IP address? Message-ID: <68ffbedc378b76b2a572b06e2900cb57.NginxMailingListEnglish@forum.nginx.org> hi. i'm aware of the reuseport directive, but i'm wondering how it would be possible to host more than 2 sites on the same IP and port, distinguishing between the sites only by means of the actual server name as it's entered into the browser.. specifically, i want to host example.com, v2.example.com, mail.example.com, and somesite.com on the same IP and port (443, ssl).. i really hope this is possible with nginx. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290480,290480#msg-290480 From nginx-forum at forum.nginx.org Wed Jan 13 22:33:52 2021 From: nginx-forum at forum.nginx.org (rveerman) Date: Wed, 13 Jan 2021 17:33:52 -0500 Subject: how would i host more than 2 sites on the same port and IP address? In-Reply-To: <68ffbedc378b76b2a572b06e2900cb57.NginxMailingListEnglish@forum.nginx.org> References: <68ffbedc378b76b2a572b06e2900cb57.NginxMailingListEnglish@forum.nginx.org> Message-ID: <37f9f0312f1a7c7f1aa94a31cfbc8902.NginxMailingListEnglish@forum.nginx.org> oh, i forgot to add, each of these sites would be proxy-forwarded to different apache2 instances, which in turn would be using different DocumentRoot for each of the sites. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290480,290481#msg-290481 From nginx-forum at forum.nginx.org Thu Jan 14 16:40:05 2021 From: nginx-forum at forum.nginx.org (nicholaschiasson) Date: Thu, 14 Jan 2021 11:40:05 -0500 Subject: ngx_upstream_jdomain module reviews Message-ID: Hello, I'd like to make a request for reviews of the code for this module I maintain https://github.com/nicholaschiasson/ngx_upstream_jdomain. The module's purpose is to provide a directive to be used in the upstream context to define upstream servers from a domain name, and have the servers updated dynamically on some interval (ideally the DNS TTL). I'd like to get more eyes on this as I am trying my best to bring the module into a production ready state. Any and all feedback would be greatly appreciated. If you discover issues, please open an issue on the repo. If you have questions or remarks, you may use the repo discussions. Thanks very much in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290486,290486#msg-290486 From mail at renemoser.net Thu Jan 14 20:29:25 2021 From: mail at renemoser.net (Rene Moser) Date: Thu, 14 Jan 2021 21:29:25 +0100 Subject: Strange ssl_client_certificate limitation? 
Message-ID: <98a78bd7-b938-a616-0558-50ff402f7855@renemoser.net> Hi I have a hard time with ssl_client_certificate. I tried to use vhosts with two separate CAs in their ssl_client_certificate configs, but I was not able to get it to work as expected. The latter ssl_client_certificate did not take effect and, even more unexpectedly, I was able to use the first client cert to authenticate against the second vhost. To show the limitation, I created a reproducer: https://github.com/resmo/nginx-ssl_client_certificate-limit Please tell me I did something terribly wrong. Regards René From francis at daoine.org Thu Jan 14 21:32:43 2021 From: francis at daoine.org (Francis Daly) Date: Thu, 14 Jan 2021 21:32:43 +0000 Subject: Strange ssl_client_certificate limitation? In-Reply-To: <98a78bd7-b938-a616-0558-50ff402f7855@renemoser.net> References: <98a78bd7-b938-a616-0558-50ff402f7855@renemoser.net> Message-ID: <20210114213243.GV23032@daoine.org> On Thu, Jan 14, 2021 at 09:29:25PM +0100, Rene Moser wrote: Hi there, > To show the limitation, I created a reproducer: > > https://github.com/resmo/nginx-ssl_client_certificate-limit > > Please tell me I did something terribly wrong. You seem to be trying to test the different server names using curl -H "Host: foo2.example.com" --insecure https://127.0.0.1:8443/ If you add a "--verbose", you may see the certificate that the server is presenting, which may hint at which server{} you are actually accessing. You probably will want to use curl's "--resolve" option to get curl to use SNI the way that you want. Something like curl --resolve foo2.example.com:8443:127.0.0.1 --insecure https://foo2.example.com:8443/ may make a better test. Good luck with it, f -- Francis Daly francis at daoine.org From mail at renemoser.net Thu Jan 14 22:10:40 2021 From: mail at renemoser.net (Rene Moser) Date: Thu, 14 Jan 2021 23:10:40 +0100 Subject: Strange ssl_client_certificate limitation? 
In-Reply-To: <98a78bd7-b938-a616-0558-50ff402f7855@renemoser.net> References: <98a78bd7-b938-a616-0558-50ff402f7855@renemoser.net> Message-ID: <3f094ec0-0df8-8aec-ea49-36a49b630415@renemoser.net> The only way I was able to accept both certs but use one or the other per vhost was to bundle the certs and distinguish by issuer DN, see: https://github.com/resmo/nginx-ssl_client_certificate-limit/pull/1 This works as expected, but feels like kind of a hack. Any other suggestions? On 14.01.21 21:29, Rene Moser wrote: > Hi > > I have a hard time with ssl_client_certificate. > > I tried to use vhosts with two separate CAs in their ssl_client_certificate > configs, but I was not able to get it to work as expected. The latter > ssl_client_certificate did not take effect and, even more unexpectedly, > I was able to use the first client cert to authenticate against the second vhost. > > To show the limitation, I created a reproducer: > > https://github.com/resmo/nginx-ssl_client_certificate-limit > > Please tell me I did something terribly wrong. > > Regards > René > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Fri Jan 15 00:17:36 2021 From: francis at daoine.org (Francis Daly) Date: Fri, 15 Jan 2021 00:17:36 +0000 Subject: how would i host more than 2 sites on the same port and IP address? In-Reply-To: <68ffbedc378b76b2a572b06e2900cb57.NginxMailingListEnglish@forum.nginx.org> References: <68ffbedc378b76b2a572b06e2900cb57.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20210115001736.GW23032@daoine.org> On Wed, Jan 13, 2021 at 05:31:55PM -0500, rveerman wrote: Hi there, > possible to host more than 2 sites on the same IP and port, distinguishing > between the sites only by means of the actual server name as it's entered > into the browser.. You run a single instance of nginx with multiple server{} blocks with the same "listen" directives but different "server_name" directives. 
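A minimal sketch of that layout (the hostnames, certificate paths and backend ports below are placeholders, not taken from your setup):

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /path/to/example.com.fullchain.pem;
    ssl_certificate_key /path/to/example.com.key.pem;
    location / { proxy_pass http://127.0.0.1:8081; }
}

server {
    listen 443 ssl;
    server_name v2.example.com;
    ssl_certificate     /path/to/v2.example.com.fullchain.pem;
    ssl_certificate_key /path/to/v2.example.com.key.pem;
    location / { proxy_pass http://127.0.0.1:8082; }
}
```

With SNI, nginx picks the server{} block (and so the certificate and backend) that matches the name the client asked for; you can add as many such blocks as you have sites.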
> specifically, i want to host example.com, v2.example.com, mail.example.com, > and somesite.com on the same IP and port (443, ssl).. See the somewhat old http://nginx.org/en/docs/http/configuring_https_servers.html, especially the "Server Name Indication" part at http://nginx.org/en/docs/http/configuring_https_servers.html#sni > i really hope this is possible with nginx. Basically, it Just Works, if the client (browser) is any way adequate for modern https. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Jan 15 03:40:41 2021 From: nginx-forum at forum.nginx.org (rveerman) Date: Thu, 14 Jan 2021 22:40:41 -0500 Subject: how would i host more than 2 sites on the same port and IP address? In-Reply-To: <20210115001736.GW23032@daoine.org> References: <20210115001736.GW23032@daoine.org> Message-ID: <4d4011f5e4a2d36e79503b2127ae7447.NginxMailingListEnglish@forum.nginx.org> cool :) i was able to get it to work. for completeness sake, and for all those looking for an explanation as to how to get this done properly, i will post my setup to this list now. sorry if this seems clueless to the members of this list, but please realize that there are plenty of people out there who are entirely new to the field of system administration, like i was about 2 weeks ago.. i had to edit /etc/apache2/ports.conf, to resemble this : Listen 192.168.178.21:444 Listen 192.168.178.21:447 Listen 192.168.178.21:444 Listen 192.168.178.21:447 and /etc/apache2/sites-enabled/002-mysite.com to resemble this : # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. 
# However, you must set it for any further virtual host explicitly. #ServerName www.example.com ServerName mysite.com ServerAdmin rene.veerman at nicer.app DocumentRoot /home/rene/data1/htdocs/mysite.com # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.443.log CustomLog ${APACHE_LOG_DIR}/access.443.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf Options -Indexes -FollowSymLinks AllowOverride None Require all granted SSLEngine on SSLProtocol all -SSLv2 -SSLv3 SSLHonorCipherOrder on SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS +RC4 RC4" #godaddy supplied SSL keys, rehashed with certbot (see the friendly manual) SSLCertificateFile /home/rene/data1/certificates/apache-ssl/a8f38c612dbe2a7e.crt SSLCertificateKeyFile /home/rene/data1/certificates/apache-ssl/mysite.com.key SSLCertificateChainFile /home/rene/data1/certificates/apache-ssl/gd_bundle-g2-g1.crt # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. 
# However, you must set it for any further virtual host explicitly. #ServerName www.example.com ServerName v2.mysite.com ServerAdmin rene.veerman.netherlands at gmail.com DocumentRoot /home/rene/data1/htdocs/mysite.com_v2 # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn #LogLevel info ssl:warn LogLevel debug ErrorLog ${APACHE_LOG_DIR}/error.447.log CustomLog ${APACHE_LOG_DIR}/access.447.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf AllowOverride None Require all granted SSLEngine on SSLProtocol all -SSLv2 -SSLv3 SSLHonorCipherOrder on SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS +RC4 RC4" #simple output of 'certbot certonly v2.mysite.com' (running on port 80 for the occasion) SSLCertificateFile /etc/letsencrypt/live/v2.mysite.com/cert.pem SSLCertificateKeyFile /etc/letsencrypt/live/v2.mysite.com/privkey.pem SSLCertificateChainFile /etc/letsencrypt/live/v2.mysite.com/fullchain.pem from there, you can detect if your apache setup is running correctly by running this command : netstat -nltp | grep apache then, there's the nginx setup.. 
/etc/nginx/sites-enabled/00-default-ssl.conf : (mail.mysite.com runs iRedMail on ubuntu 20.04) server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name mail.mysite.com; root /var/www/html; index index.php index.html; include /etc/nginx/templates/misc.tmpl; include /etc/nginx/templates/ssl.tmpl; include /etc/nginx/templates/iredadmin.tmpl; include /etc/nginx/templates/roundcube.tmpl; include /etc/nginx/templates/sogo.tmpl; include /etc/nginx/templates/netdata.tmpl; include /etc/nginx/templates/php-catchall.tmpl; include /etc/nginx/templates/stub_status.tmpl; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name mysite.com; root /home/rene/data1/htdocs/mysite.com; ssl_certificate /home/rene/data1/certificates/other-ssl/all.crt; ssl_certificate_key /home/rene/data1/certificates/other-ssl/mysite.com.key; ssl on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !kECDH !DSS !MD5 !RC4 !EXP !PSK !SRP !CAMELLIA !SEED'; ssl_prefer_server_ciphers on; ssl_dhparam /etc/nginx/dhparam.pem; location / { proxy_pass https://192.168.178.21:444/; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_connect_timeout 159s; proxy_send_timeout 60; proxy_read_timeout 60; send_timeout 60; resolver_timeout 60; } } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name v2.mysite.com; root /home/rene/data1/htdocs/mysite.com_v2; ssl_certificate /etc/letsencrypt/live/v2.mysite.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/v2.mysite.com/privkey.pem; ssl on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 
kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !kECDH !DSS !MD5 !RC4 !EXP !PSK !SRP !CAMELLIA !SEED'; ssl_prefer_server_ciphers on; ssl_dhparam /etc/nginx/dhparam.pem; location / { proxy_pass https://192.168.178.21:447/; proxy_redirect off; proxy_buffering off; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_connect_timeout 159s; proxy_send_timeout 60; proxy_read_timeout 60; send_timeout 60; resolver_timeout 60; } } from there, all you need to do is ufw allow 443 ufw allow 447 to get the firewall to allow the data through Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290480,290492#msg-290492 From vbl5968 at gmail.com Sat Jan 16 18:11:54 2021 From: vbl5968 at gmail.com (Vincent Blondel) Date: Sat, 16 Jan 2021 19:11:54 +0100 Subject: Howto Remove the Cache-Control request header. Message-ID: Hello, We want nginx to remove the request header Cache-Control before to proxy the request to the OCS. We do like this ... location /xxx { more_set_headers 'Strict-Transport-Security: max-age=31622400; includeSubDomains'; more_set_headers 'X-XSS-Protection: 1; mode=block'; more_set_headers 'X-Content-Type-Options: nosniff'; proxy_read_timeout 120s; proxy_set_header Connection ""; proxy_set_header via "HTTP/1.1 $hostname:443"; proxy_set_header Host $host; proxy_set_header Cache-Control ""; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_ssl_certificate /etc/nginx/keys/xxx.pem; proxy_ssl_certificate_key /etc/nginx/keys/xxx.key; proxy_pass https://XXX; proxy_redirect http:// https://; proxy_next_upstream error timeout invalid_header http_503; } but the request header Cache-Control is still being sent to the OCS. Thank You in advance for Your help. Sincerely, Vincent -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From teward at thomas-ward.net Sat Jan 16 18:14:28 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Sat, 16 Jan 2021 13:14:28 -0500 Subject: Howto Remove the Cache-Control request header. In-Reply-To: References: Message-ID: proxy_ignore_headers Cache-Control; Get BlueMail for Android -------- Original Message -------- From: Vincent Blondel Sent: Sat Jan 16 13:11:54 EST 2021 To: nginx at nginx.org Subject: Howto Remove the Cache-Control request header. Hello, We want nginx to remove the request header Cache-Control before to proxy the request to the OCS. We do like this ... location /xxx { more_set_headers 'Strict-Transport-Security: max-age=31622400; includeSubDomains'; more_set_headers 'X-XSS-Protection: 1; mode=block'; more_set_headers 'X-Content-Type-Options: nosniff'; proxy_read_timeout 120s; proxy_set_header Connection ""; proxy_set_header via "HTTP/1.1 $hostname:443"; proxy_set_header Host $host; proxy_set_header Cache-Control ""; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_ssl_certificate /etc/nginx/keys/xxx.pem; proxy_ssl_certificate_key /etc/nginx/keys/xxx.key; proxy_pass https://XXX; proxy_redirect http:// https://; proxy_next_upstream error timeout invalid_header http_503; } but the request header Cache-Control is still being sent to the OCS. Thank You in advance for Your help. Sincerely, Vincent ------------------------------------------------------------------------ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbl5968 at gmail.com Sat Jan 16 18:20:22 2021 From: vbl5968 at gmail.com (Vincent Blondel) Date: Sat, 16 Jan 2021 19:20:22 +0100 Subject: Howto Remove the Cache-Control request header. In-Reply-To: References: Message-ID: Thank You for the swift response Thomas. 
AFAICS, proxy_ignore_headers is to disable processing of certain response header fields from the proxied server. This is what I can find on nginx website doc. We want the request Cache-Control header to not be proxied to the OCS. -V On Sat, Jan 16, 2021 at 7:14 PM Thomas Ward wrote: > proxy_ignore_headers Cache-Control; > > Get BlueMail for Android > ------------------------------ > *From:* Vincent Blondel > *Sent:* Sat Jan 16 13:11:54 EST 2021 > *To:* nginx at nginx.org > *Subject:* Howto Remove the Cache-Control request header. > > Hello, > We want nginx to remove the request header Cache-Control before to proxy > the request to the OCS. > We do like this ... > > location /xxx { > more_set_headers 'Strict-Transport-Security: max-age=31622400; > includeSubDomains'; > more_set_headers 'X-XSS-Protection: 1; mode=block'; > more_set_headers 'X-Content-Type-Options: nosniff'; > proxy_read_timeout 120s; > proxy_set_header Connection ""; > proxy_set_header via "HTTP/1.1 $hostname:443"; > proxy_set_header Host $host; > proxy_set_header Cache-Control ""; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_ssl_certificate /etc/nginx/keys/xxx.pem; > proxy_ssl_certificate_key /etc/nginx/keys/xxx.key; > proxy_pass https://XXX; > proxy_redirect http:// https://; > proxy_next_upstream error timeout invalid_header http_503; > } > > but the request header Cache-Control is still being sent to the OCS. > > Thank You in advance for Your help. > Sincerely, > Vincent > > ------------------------------ > > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Jan 17 00:16:45 2021 From: francis at daoine.org (Francis Daly) Date: Sun, 17 Jan 2021 00:16:45 +0000 Subject: Howto Remove the Cache-Control request header. In-Reply-To: References: Message-ID: <20210117001645.GX23032@daoine.org> On Sat, Jan 16, 2021 at 07:11:54PM +0100, Vincent Blondel wrote: Hi there, > We want nginx to remove the request header Cache-Control before to proxy > the request to the OCS. > We do like this ... 'proxy_set_header Cache-Control "";' appears to work for me. Can you show one request to nginx with the header, and the matching request to the upstream server? When I use this config: == server { listen 7000; add_header X-7000 "cache :$http_cache_control:"; location /a { proxy_set_header Cache-Control ""; proxy_pass http://127.0.0.1:7001; } location /b { proxy_pass http://127.0.0.1:7001; } } server { listen 7001; add_header X-7001 "cache :$http_cache_control:"; location / { return 200 "7001: $request_uri\n"; } } == if I make a request to port 7000 that starts with /b and includes a Cache-Control header, I see in the X-7001 response header that that same Cache-Control header was sent to the upstream; but if I make a request that starts with /a, I see in the X-7001 response header that it was removed before the request was made to the upstream. That is: curl -i -H Cache-control:no-cache http://127.0.0.1:7000/b1 includes X-7001: cache :no-cache: X-7000: cache :no-cache: while curl -i -H Cache-control:no-cache http://127.0.0.1:7000/a1 includes X-7001: cache :: X-7000: cache :no-cache: Does your system respond differently? f -- Francis Daly francis at daoine.org From james_ at catbus.co.uk Tue Jan 19 12:47:11 2021 From: james_ at catbus.co.uk (James Beal) Date: Tue, 19 Jan 2021 12:47:11 +0000 Subject: nginx stuck in tight loop sometimes Message-ID: We have quite a high volume site, we have 4 front end nginx servers, each: * AMD EPYC 7402P 24-Core Processor * INTEL SSDPELKX020T8 ( 2TB NVMe ) * Dual? 
Broadcom BCM57416 NetXtreme-E 10GBase-T * 512GB of RAM We have a fairly complex nginx config with sharded caches as explained in https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ We see this problem on: nginx version: nginx/1.19.6 built by gcc 8.3.0 (Debian 8.3.0-6) built with OpenSSL 1.1.1d 
root at ao3-front08:/proc/799697/fd# ls -l 168 l-wx------ 1 nginx nginx 64 Jan 18 22:05 168 -> 'pipe:[2914414548]' root at ao3-front08:/proc# grep 2914414548 /tmp/fds lr-x------ 1 nginx nginx 64 Jan 18 22:05 799697/fd/167 -> pipe:[2914414548] l-wx------ 1 nginx nginx 64 Jan 18 22:05 799697/fd/168 -> pipe:[2914414548] The issue happens more when load is higher. Has anyone some advice as my current hack of killing processes that have used more than 1800 seconds of cpu is wrong. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jan 19 13:08:35 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jan 2021 16:08:35 +0300 Subject: nginx stuck in tight loop sometimes In-Reply-To: References: Message-ID: <20210119130835.GP1147@mdounin.ru> Hello! On Tue, Jan 19, 2021 at 12:47:11PM +0000, James Beal wrote: > We have quite a high volume site, we have 4 front end nginx servers, each: > * > AMD EPYC 7402P 24-Core Processor > * > INTEL SSDPELKX020T8 ( 2TB NVMe ) > * > Dual? Broadcom BCM57416 NetXtreme-E 10GBase-T > * > 512GB of RAM > We have a fairly complex nginx config with sharded caches as explained in https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/ > > We see this problem on : > > nginx version: nginx/1.19.6 > built by gcc 8.3.0 (Debian 8.3.0-6) > built with OpenSSL 1.1.1d? 
10 Sep 2019 > TLS SNI support enabled > configure arguments: --add-module=/root/incubator-pagespeed-ngx-latest-stable --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_ssl_module --with-http_stub_status_module --with-pcre-jit --with-http_secure_link_module --with-http_v2_module --with-http_realip_module --with-stream_geoip_module --http-scgi-temp-path=/tmp --http-uwsgi-temp-path=/tmp --http-fastcgi-temp-path=/tmp --http-proxy-temp-path=/tmp --http-log-path=/var/log/nginx/access --error-log-path=/var/log/nginx/error --pid-path=/var/run/nginx.pid --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin --prefix=/usr --with-threads > > Pagespeed is our only third party module and it is version 1.13.35.2-0 > > Some nginx process start to spin in a tight loop, strace shows: > > write(168, "H\0\0\0\0\0\0\0 W|\244\230U\0\0 at y\20\244\230U\0\0", 24) = -1 EAGAIN (Resource temporarily unavailable) > write(168, "H\0\0\0\0\0\0\0 W|\244\230U\0\0 at y\20\244\230U\0\0", 24) = -1 EAGAIN (Resource temporarily unavailable) > write(168, "H\0\0\0\0\0\0\0 W|\244\230U\0\0 at y\20\244\230U\0\0", 24) = -1 EAGAIN (Resource temporarily unavailable) > write(168, "H\0\0\0\0\0\0\0 W|\244\230U\0\0 at y\20\244\230U\0\0", 24) = -1 EAGAIN (Resource temporarily unavailable) > write(168, "H\0\0\0\0\0\0\0 W|\244\230U\0\0 at y\20\244\230U\0\0", 24) = -1 EAGAIN (Resource temporarily unavailable) > write(168, "H\0\0\0\0\0\0\0 W|\244\230U\0\0 at y\20\244\230U\0\0", 24) = -1 EAGAIN (Resource temporarily unavailable) > > looking in /proc? > > root at ao3-front08:/proc/799697/fd# ls -l 168 > l-wx------ 1 nginx nginx 64 Jan 18 22:05 168 -> 'pipe:[2914414548]' > > root at ao3-front08:/proc# grep 2914414548 /tmp/fds > lr-x------ 1 nginx nginx 64 Jan 18 22:05 799697/fd/167 -> pipe:[2914414548] > l-wx------ 1 nginx nginx 64 Jan 18 22:05 799697/fd/168 -> pipe:[2914414548] > > The issue happens more when load is higher. 
> Has anyone some advice, as my current hack of killing processes that have used > more than 1800 seconds of cpu is wrong. Are you able to reproduce the problem without any 3rd party modules? Since nginx itself does not use pipes, this looks like a pagespeed problem. -- Maxim Dounin http://mdounin.ru/ From james_ at catbus.co.uk Tue Jan 19 13:33:54 2021 From: james_ at catbus.co.uk (James Beal) Date: Tue, 19 Jan 2021 13:33:54 +0000 Subject: nginx stuck in tight loop sometimes ( sorry for new thread ) Message-ID: Sorry I was subscribed via the digest so this ended up being a new thread ( i have subscribed for each message ). > Are you able to reproduce the problem without any 3rd party modules? Since nginx itself does not use pipes, this looks like a pagespeed problem. Not really. We use about 500mb/s of bandwidth with pagespeed turned on and we are using about 2 gigabits a second with it turned off ( although I think that is more hitting the limits of the interconnect ). I could try a different release. A two-minute look at the nginx source shows pipes around upstream I think. Is there a method of working out where an nginx process is stuck ( It does not respond to the normal signals ). -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jan 19 15:10:11 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 19 Jan 2021 18:10:11 +0300 Subject: nginx stuck in tight loop sometimes ( sorry for new thread ) In-Reply-To: References: Message-ID: <20210119151011.GR1147@mdounin.ru> Hello! On Tue, Jan 19, 2021 at 01:33:54PM +0000, James Beal wrote: > > Are you able to reproduce the problem without any 3rd party > > modules? Since nginx itself does not use pipes, this looks > > like a pagespeed problem. > > Not really. We use about 500mb/s of bandwidth with pagespeed > turned on and we are using about 2 gigabits a second with it > turned off ( although I think that is more hitting the limits of > the interconnect ). 
Well, so you'll have to debug what happens then. Debugger and strace are your friends. Most likely you'll end up fixing pagespeed (or working on removing it from your system). Some useful links from a quick look into the pagespeed code below. > A two-minute look at the nginx source shows pipes around > upstream I think. These aren't OS pipes, but rather nginx code named "ngx_event_pipe". > Is there a method of working out where an nginx process is stuck > ( It does not respond to the normal signals ). As long as the process is stuck in 3rd party module code, spinning in a loop, the only thing you can do is send the KILL signal to the process, which cannot be caught or ignored and kills the process. Or you can attach to the process with a debugger and take a look at where it spins. I suspect it will be this code, which simply re-tries writing on EAGAIN: https://github.com/apache/incubator-pagespeed-ngx/blob/master/src/ngx_event_connection.cc#L151 The "TODO(oschaaf): should we worry about spinning here?" line in the code looks like exactly what you are seeing. The relevant issue is linked in the same file: https://github.com/apache/incubator-pagespeed-ngx/issues/1380 That is, the code has been known not to work under load for at least three years. This basically matches my impression from previous attempts to look into the pagespeed code: it is not expected to work. I would really recommend reconsidering its usage. If you need optimization of the responses returned, consider doing it during your deployment. Hope this helps. 
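(As an aside, the spin itself is easy to reproduce outside nginx. A minimal sketch — in Python for brevity, not pagespeed's actual C++: fill a non-blocking pipe until write() fails with EAGAIN; the correct reaction is to wait for writability in the event loop rather than retry immediately.)

```python
import os, fcntl, select

# Create a pipe and put its write end into non-blocking mode.
rfd, wfd = os.pipe()
fcntl.fcntl(wfd, fcntl.F_SETFL, fcntl.fcntl(wfd, fcntl.F_GETFL) | os.O_NONBLOCK)

# Fill the kernel pipe buffer; once it is full, write() fails with
# EAGAIN (surfaced in Python as BlockingIOError) instead of blocking.
hit_eagain = False
try:
    while True:
        os.write(wfd, b"x" * 65536)
except BlockingIOError:
    hit_eagain = True
    print("EAGAIN: retrying the write immediately would spin at 100% CPU")

# The event-driven alternative: sleep in select() until the reader has
# drained the pipe and the descriptor becomes writable again.
os.read(rfd, 65536)
_, writable, _ = select.select([], [wfd], [], 1.0)
print("writable again:", wfd in writable)
```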
-- Maxim Dounin http://mdounin.ru/ From gk at leniwiec.biz Tue Jan 19 15:44:20 2021 From: gk at leniwiec.biz (Grzegorz Kulewski) Date: Tue, 19 Jan 2021 16:44:20 +0100 Subject: Response headers adding/filtering during proxy with njs Message-ID: Hello, Is it possible to (at least) add (or better also) filter response headers (like for example setting or modifying or deleting cookies, of course without harming other cookies set by the upstream) dynamically from njs after it is returned from the upstream (or cache) and before it is sent to the client? I think something like that is possible with lua but I would really like to avoid using 3rd party modules and just go the njs route. If not possible - is there any workaround? And/or can it be easily added in next release(s)? Thank you in advance. -- Grzegorz Kulewski From vbl5968 at gmail.com Tue Jan 19 17:02:33 2021 From: vbl5968 at gmail.com (Vincent Blondel) Date: Tue, 19 Jan 2021 18:02:33 +0100 Subject: Howto Remove the Cache-Control request header. In-Reply-To: <20210117001645.GX23032@daoine.org> References: <20210117001645.GX23032@daoine.org> Message-ID: Hello Thank You for the help I made some errors in my checks and confirm it is doing the job Thank You Vincent On Sun, Jan 17, 2021 at 1:16 AM Francis Daly wrote: > On Sat, Jan 16, 2021 at 07:11:54PM +0100, Vincent Blondel wrote: > > Hi there, > > > We want nginx to remove the request header Cache-Control before to proxy > > the request to the OCS. > > We do like this ... > > 'proxy_set_header Cache-Control "";' appears to work for me. > > Can you show one request to nginx with the header, and the matching > request to the upstream server? 
> > When I use this config: > > == > server { > listen 7000; > add_header X-7000 "cache :$http_cache_control:"; > location /a { > proxy_set_header Cache-Control ""; > proxy_pass http://127.0.0.1:7001; > } > location /b { > proxy_pass http://127.0.0.1:7001; > } > } > > server { > listen 7001; > add_header X-7001 "cache :$http_cache_control:"; > location / { > return 200 "7001: $request_uri\n"; > } > } > == > > if I make a request to port 7000 that starts with /b and includes a > Cache-Control header, I see in the X-7001 response header that that same > Cache-Control header was sent to the upstream; but if I make a request > that starts with /a, I see in the X-7001 response header that it was > removed before the request was made to the upstream. > > That is: > > curl -i -H Cache-control:no-cache http://127.0.0.1:7000/b1 > > includes > > X-7001: cache :no-cache: > X-7000: cache :no-cache: > > while > > curl -i -H Cache-control:no-cache http://127.0.0.1:7000/a1 > > includes > > X-7001: cache :: > X-7000: cache :no-cache: > > Does your system respond differently? > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Jan 19 19:53:10 2021 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 19 Jan 2021 22:53:10 +0300 Subject: Response headers adding/filtering during proxy with njs In-Reply-To: References: Message-ID: <7dd15ba3-0003-6fa2-fc1c-9d71426bcfe5@nginx.com> On 19.01.2021 18:44, Grzegorz Kulewski wrote: > Hello, > > Is it possible to (at least) add (or better also) filter response headers (like for example setting or modifying or deleting cookies, of course without harming other cookies set by the upstream) dynamically from njs after it is returned from the upstream (or cache) and before it is sent to the client? 
> > I think something like that is possible with lua but I would really like to avoid using 3rd party modules and just go the njs route. > > If not possible - is there any workaround? And/or can it be easily added in next release(s)? > > Thank you in advance. > Hi Grzegorz, Please clarify what you are trying to do with the Set-Cookie headers. Consider the built-in directives for typical modifications, like proxy_cookie_domain, proxy_cookie_flags etc. If nothing is applicable, take a look at the following example:

nginx.conf:

js_import main from http.js;
js_set $modify_cookies main.modify_cookies;

location /modify_cookies {
    add_header _ $modify_cookies;
    proxy_pass http://127.0.0.1:8080;
}

server {
    listen 127.0.0.1:8080;
    location /modify_cookies {
        add_header Set-Cookie "XXXXXX";
        add_header Set-Cookie "BB";
        add_header Set-Cookie "YYYYYYY";
        return 200;
    }
}

http.js:

function modify_cookies(r) {
    var cookies = r.headersOut['Set-Cookie'];
    r.headersOut['Set-Cookie'] = cookies.filter(v => v.length > Number(r.args.len));
    return "";
}

curl http://localhost:8000/modify_cookies?len=1 -v
...
< Set-Cookie: XXXXXX
< Set-Cookie: BB
< Set-Cookie: YYYYYYY

curl http://localhost:8000/modify_cookies?len=3 -v
...
< Set-Cookie: XXXXXX
< Set-Cookie: YYYYYYY

This works because arbitrary njs code can be executed when nginx evaluates a variable at runtime. The trick is to find an appropriate directive that supports variables and that is evaluated at the appropriate moment of request processing. So, in the example, an auxiliary variable is used to inject njs code after the upstream data is processed but before the response headers are sent to the client. From D4v1d_4n0 at protonmail.ch Fri Jan 22 04:25:53 2021 From: D4v1d_4n0 at protonmail.ch (David Hu) Date: Fri, 22 Jan 2021 04:25:53 +0000 Subject: Why does the nginx.org main site not supporting TLS v1.3?
Message-ID: I just did a cURL request just now using the command in Windows Powershell using the MinGW version of cURL. `./curl -vvvvv --http2-prior-knowledge --tlsv1.3 https://nginx.org` And my curl specs are as follows: curl 7.74.0 (x86_64-pc-win32) libcurl/7.74.0 OpenSSL/1.1.1i (Schannel) zlib/1.2.11 brotli/1.0.9 zstd/1.4.8 WinIDN libssh2/1.9.0 nghttp2/1.42.0 Release-Date: 2020-12-09 Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps mqtt pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp Features: AsynchDNS HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile MultiSSL NTLM SPNEGO SSL SSPI TLS-SRP Unicode UnixSockets alt-svc brotli libz zstd However, the output is as follows:

* Trying 3.125.197.172:443...
* Connected to nginx.org (3.125.197.172) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: D:\curl-7.74.0_2-win64-mingw\bin\curl-ca-bundle.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, protocol version (582):
* error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version
* Closing connection 0
curl: (35) error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version

Why does this happen? Can anyone explain the reason? It seems the nginx main site does not support TLSv1.3. Thank you very much! -------------- next part -------------- A non-text attachment was scrubbed... Name: publickey - D4v1d_4n0 at protonmail.ch - 0x340A848D.asc Type: application/pgp-keys Size: 736 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From D4v1d_4n0 at protonmail.ch Fri Jan 22 04:50:15 2021 From: D4v1d_4n0 at protonmail.ch (David Hu) Date: Fri, 22 Jan 2021 04:50:15 +0000 Subject: Why does the nginx.org main site not supporting TLS v1.3?
Message-ID: <3sKUHtYAkB6vic9kPIn0KL3Lx95zSLH1ENcxBJPZ3NzuMHrOaysqXzZ51mea2dGSrcglw-KUXemjwJT3PVS4a4hUQNFleOWNG88Sk5nIZMc=@protonmail.ch> So I have to downgrade to TLS v1.2. The full command input and the connection process can be shown as follows:

./curl -vvvvv --http2-prior-knowledge --tlsv1.2 https://nginx.org
* Trying 52.58.199.22:443...
* Connected to nginx.org (52.58.199.22) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: D:\curl-7.74.0_2-win64-mingw\bin\curl-ca-bundle.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=nginx.org
* start date: Oct 29 16:45:05 2020 GMT
* expire date: Jan 27 16:45:05 2021 GMT
* subjectAltName: host "nginx.org" matched cert's "nginx.org"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: nginx.org
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.19.0
< Date: Fri, 22 Jan 2021 04:43:32 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 12676
< Last-Modified: Tue, 15 Dec 2020 14:58:52 GMT
< Connection: keep-alive
< Keep-Alive: timeout=15
< ETag: "5fd8cf2c-3184"
< Accept-Ranges: bytes
<

So I negotiated with your server to force HTTP/2 (i.e. h2); ALPN offered h2 and http/1.1, but in the end I only got HTTP/1.1, not h2.
The same cURL version and specs as in the message above. What have I done wrong or if it is your problem? Thanks again. Regards, -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From teward at thomas-ward.net Fri Jan 22 06:04:29 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Fri, 22 Jan 2021 01:04:29 -0500 Subject: Why does the nginx.org main site not supporting TLS v1.3? In-Reply-To: <3sKUHtYAkB6vic9kPIn0KL3Lx95zSLH1ENcxBJPZ3NzuMHrOaysqXzZ51mea2dGSrcglw-KUXemjwJT3PVS4a4hUQNFleOWNG88Sk5nIZMc=@protonmail.ch> References: <3sKUHtYAkB6vic9kPIn0KL3Lx95zSLH1ENcxBJPZ3NzuMHrOaysqXzZ51mea2dGSrcglw-KUXemjwJT3PVS4a4hUQNFleOWNG88Sk5nIZMc=@protonmail.ch> Message-ID: <43ba4b1f-66b1-c4e2-cfa8-b69f26590e73@thomas-ward.net> So, I don't run the NGINX webserver, but I am pretty sure this is on the remote server to serve the protocol right. The SSLLabs test shows that TLS 1.3 is just not offered. https://www.ssllabs.com/ssltest/analyze.html?d=nginx.org&s=3.125.197.172&latest There are three other IPs (one IPv4 and two IPv6) that will very likely reflect the same tests as well. So to answer your original question: > What have I done wrong or if it is your problem? You didn't do anything wrong. TLS 1.2 is the only protocol that's offered for SSL/TLS connections to the nginx.org site. Thomas On 1/21/21 11:50 PM, David Hu wrote: > So I have to downgrade to TLS v1.2. The full command input and the connection process can be shown as follows: > ./curl -vvvvv --http2-prior-knowledge --tlsv1.2 https://nginx.org > * Trying 52.58.199.22:443...
> * Connected to nginx.org (52.58.199.22) port 443 (#0) > * ALPN, offering h2 > * ALPN, offering http/1.1 > * successfully set certificate verify locations: > * CAfile: D:\curl-7.74.0_2-win64-mingw\bin\curl-ca-bundle.crt > * CApath: none > * TLSv1.3 (OUT), TLS handshake, Client hello (1): > * TLSv1.3 (IN), TLS handshake, Server hello (2): > * TLSv1.2 (IN), TLS handshake, Certificate (11): > * TLSv1.2 (IN), TLS handshake, Server key exchange (12): > * TLSv1.2 (IN), TLS handshake, Server finished (14): > * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): > * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): > * TLSv1.2 (OUT), TLS handshake, Finished (20): > * TLSv1.2 (IN), TLS handshake, Finished (20): > * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 > * ALPN, server accepted to use http/1.1 > * Server certificate: > * subject: CN=nginx.org > * start date: Oct 29 16:45:05 2020 GMT > * expire date: Jan 27 16:45:05 2021 GMT > * subjectAltName: host "nginx.org" matched cert's "nginx.org" > * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 > * SSL certificate verify ok. >> GET / HTTP/1.1 >> Host: nginx.org >> User-Agent: curl/7.74.0 >> Accept: */* >> > * Mark bundle as not supporting multiuse > < HTTP/1.1 200 OK > < Server: nginx/1.19.0 > < Date: Fri, 22 Jan 2021 04:43:32 GMT > < Content-Type: text/html; charset=utf-8 > < Content-Length: 12676 > < Last-Modified: Tue, 15 Dec 2020 14:58:52 GMT > < Connection: keep-alive > < Keep-Alive: timeout=15 > < ETag: "5fd8cf2c-3184" > < Accept-Ranges: bytes > < > > > > So I neogotiate with your server to force use HTTP/2 (i.e. H2) and ALPN is offering H2 and HTTP/1.1 but at the finally I only get the HTTP version HTTP/1.1 not H2. The same cURL specs and versions and specs as the above message. What have I done wrong or if it is your problem? > > Thanks again. 
> Regards, > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From teward at thomas-ward.net Fri Jan 22 06:08:32 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Fri, 22 Jan 2021 01:08:32 -0500 Subject: Why does the nginx.org main site not supporting TLS v1.3? In-Reply-To: <43ba4b1f-66b1-c4e2-cfa8-b69f26590e73@thomas-ward.net> References: <3sKUHtYAkB6vic9kPIn0KL3Lx95zSLH1ENcxBJPZ3NzuMHrOaysqXzZ51mea2dGSrcglw-KUXemjwJT3PVS4a4hUQNFleOWNG88Sk5nIZMc=@protonmail.ch> <43ba4b1f-66b1-c4e2-cfa8-b69f26590e73@thomas-ward.net> Message-ID: <243e8cae-91be-e286-1dd6-464faea6e0ef@thomas-ward.net> To clarify, I meant I don't run nginx.org's nginx server that they have. ;) The remaining IP tests by SSLLabs show the same behavior - https://www.ssllabs.com/ssltest/analyze.html?d=nginx.org&latest - so it's just a case of these servers being configured to only use TLS 1.2. POSSIBLY they're using an older set of OpenSSL or similar libraries that don't have TLS 1.3 yet, but it's also just possible it's disabled - TLS 1.3 isn't exactly the most 'accepted' protocol yet by certain policies and standards, so that's a consideration too. Thomas On 1/22/21 1:04 AM, Thomas Ward wrote: > > So, I don't run the NGINX webserver, but I am pretty sure this is on > the remote server to serve the protocol right. SSLLabs test shows > that TLS 1.3 is just not offered. > > https://www.ssllabs.com/ssltest/analyze.html?d=nginx.org&s=3.125.197.172&latest > > There's three other IPs (one IPv4 and two IPv6) that will very likely > reflect the same tests as well. > > So to answer your original question: > > > What have I done wrong or if it is your problem? > > You didn't do anything wrong. TLS 1.2 is the only protocol that's
> > > Thomas > > > On 1/21/21 11:50 PM, David Hu wrote: >> So I have to downgrade to TLS v1.2. The full command input and the connection process can be shown as follows: >> ./curl -vvvvv --http2-prior-knowledge --tlsv1.2https://nginx.org >> * Trying 52.58.199.22:443... >> * Connected to nginx.org (52.58.199.22) port 443 (#0) >> * ALPN, offering h2 >> * ALPN, offering http/1.1 >> * successfully set certificate verify locations: >> * CAfile: D:\curl-7.74.0_2-win64-mingw\bin\curl-ca-bundle.crt >> * CApath: none >> * TLSv1.3 (OUT), TLS handshake, Client hello (1): >> * TLSv1.3 (IN), TLS handshake, Server hello (2): >> * TLSv1.2 (IN), TLS handshake, Certificate (11): >> * TLSv1.2 (IN), TLS handshake, Server key exchange (12): >> * TLSv1.2 (IN), TLS handshake, Server finished (14): >> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): >> * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): >> * TLSv1.2 (OUT), TLS handshake, Finished (20): >> * TLSv1.2 (IN), TLS handshake, Finished (20): >> * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 >> * ALPN, server accepted to use http/1.1 >> * Server certificate: >> * subject: CN=nginx.org >> * start date: Oct 29 16:45:05 2020 GMT >> * expire date: Jan 27 16:45:05 2021 GMT >> * subjectAltName: host "nginx.org" matched cert's "nginx.org" >> * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 >> * SSL certificate verify ok. >>> GET / HTTP/1.1 >>> Host: nginx.org >>> User-Agent: curl/7.74.0 >>> Accept: */* >>> >> * Mark bundle as not supporting multiuse >> < HTTP/1.1 200 OK >> < Server: nginx/1.19.0 >> < Date: Fri, 22 Jan 2021 04:43:32 GMT >> < Content-Type: text/html; charset=utf-8 >> < Content-Length: 12676 >> < Last-Modified: Tue, 15 Dec 2020 14:58:52 GMT >> < Connection: keep-alive >> < Keep-Alive: timeout=15 >> < ETag: "5fd8cf2c-3184" >> < Accept-Ranges: bytes >> < >> >> >> >> So I neogotiate with your server to force use HTTP/2 (i.e. 
H2) and ALPN is offering H2 and HTTP/1.1 but at the finally I only get the HTTP version HTTP/1.1 not H2. The same cURL specs and versions and specs as the above message. What have I done wrong or if it is your problem? >> >> Thanks again. >> Regards, >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From D4v1d_4n0 at protonmail.ch Fri Jan 22 10:19:55 2021 From: D4v1d_4n0 at protonmail.ch (David Hu) Date: Fri, 22 Jan 2021 10:19:55 +0000 Subject: Why does the nginx.org main site not supporting TLS v1.3? In-Reply-To: <243e8cae-91be-e286-1dd6-464faea6e0ef@thomas-ward.net> References: <3sKUHtYAkB6vic9kPIn0KL3Lx95zSLH1ENcxBJPZ3NzuMHrOaysqXzZ51mea2dGSrcglw-KUXemjwJT3PVS4a4hUQNFleOWNG88Sk5nIZMc=@protonmail.ch> <43ba4b1f-66b1-c4e2-cfa8-b69f26590e73@thomas-ward.net> <243e8cae-91be-e286-1dd6-464faea6e0ef@thomas-ward.net> Message-ID: OK. Thank you. But what about the HTTP/1.1 and HTTP/2 problem? As I mentioned before, I negotiated with the server for H2 in the early ALPN. However, the server only accepts HTTP/1.1; why is that? My cURL has explicitly specified --http2-prior-knowledge but it still does not work. It still connects via HTTP/1.1. Thank you for all your answers! Regards, David Hu [PGP Public Key attached, key ID: 0x340A848D ; fingerprint: 340a 848d 4333 6873 d48f 5dad 8847 c44d 75c3 da38] ------- Original Message ------- On Thursday, January 21st, 2021 at 10:08 PM, Thomas Ward wrote: > To clarify, I meant I don't run nginx.org's nginx server that they have.
;) > > The remaining IP tests by SSLLabs shows the same behavior - https://www.ssllabs.com/ssltest/analyze.html?d=nginx.org&latest - so it's just a case of these servers being configured to only use TLS 1.2. POSSIBLY they're using an older set of OpenSSL or similar libraries that don't have TLS 1.3 yet, but it's also just possible it's disabled - TLS 1.3 isn't exactly the most 'accepted' protocol yet by certain policies and standards, so that's a consideration too. > > Thomas > > On 1/22/21 1:04 AM, Thomas Ward wrote: > > > So, I don't run the NGINX webserver, but I am pretty sure this is on the remote server to serve the protocol right. SSLLabs test shows that TLS 1.3 is just not offered. > > > > https://www.ssllabs.com/ssltest/analyze.html?d=nginx.org&s=3.125.197.172&latest > > > > There's three other IPs (one IPv4 and two IPv6) that will very likely reflect the same tests as well. > > > > So to answer your original question: > > > > > What have I done wrong or if it is your problem? > > > > You didn't do anything wrong. TLS 1.2 is the only protocol that's offered for SSL/TLS connections to the nginx.org site. > > > > Thomas > > > > On 1/21/21 11:50 PM, David Hu wrote: > > > So I have to downgrade to TLS v1.2. The full command input and the connection process can be shown as follows: > > > ./curl -vvvvv --http2-prior-knowledge --tlsv1.2 https://nginx.org > > > * Trying 52.58.199.22:443...
> > > * Connected to nginx.org (52.58.199.22) port 443 (#0) > > > * ALPN, offering h2 > > > * ALPN, offering http/1.1 > > > * successfully set certificate verify locations: > > > * CAfile: D:\curl-7.74.0_2-win64-mingw\bin\curl-ca-bundle.crt > > > * CApath: none > > > * TLSv1.3 (OUT), TLS handshake, Client hello (1): > > > * TLSv1.3 (IN), TLS handshake, Server hello (2): > > > * TLSv1.2 (IN), TLS handshake, Certificate (11): > > > * TLSv1.2 (IN), TLS handshake, Server key exchange (12): > > > * TLSv1.2 (IN), TLS handshake, Server finished (14): > > > * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): > > > * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1): > > > * TLSv1.2 (OUT), TLS handshake, Finished (20): > > > * TLSv1.2 (IN), TLS handshake, Finished (20): > > > * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 > > > * ALPN, server accepted to use http/1.1 > > > * Server certificate: > > > * subject: CN=nginx.org > > > * start date: Oct 29 16:45:05 2020 GMT > > > * expire date: Jan 27 16:45:05 2021 GMT > > > * subjectAltName: host "nginx.org" matched cert's "nginx.org" > > > * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3 > > > * SSL certificate verify ok. > > > > > > > GET / HTTP/1.1 > > > > Host: nginx.org > > > > User-Agent: curl/7.74.0 > > > > Accept: */* > > > > > > * Mark bundle as not supporting multiuse > > > < HTTP/1.1 200 OK > > > < Server: nginx/1.19.0 > > > < Date: Fri, 22 Jan 2021 04:43:32 GMT > > > < Content-Type: text/html; charset=utf-8 > > > < Content-Length: 12676 > > > < Last-Modified: Tue, 15 Dec 2020 14:58:52 GMT > > > < Connection: keep-alive > > > < Keep-Alive: timeout=15 > > > < ETag: "5fd8cf2c-3184" > > > < Accept-Ranges: bytes > > > < > > > > > > > > > > > > So I neogotiate with your server to force use HTTP/2 (i.e. H2) and ALPN is offering H2 and HTTP/1.1 but at the finally I only get the HTTP version HTTP/1.1 not H2. The same cURL specs and versions and specs as the above message. 
What have I done wrong or if it is your problem? > > > > > > Thanks again. > > > Regards, > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From nginx_list at chezphil.org Fri Jan 22 16:47:07 2021 From: nginx_list at chezphil.org (Phil Endecott) Date: Fri, 22 Jan 2021 16:47:07 +0000 Subject: Can't get a simple proxy to cache anything Message-ID: <1611334027966@dmwebmail.dmwebmail.chezphil.org> Dear Experts, I am trying to set up a simple limited caching proxy; I have got proxying to work, but I can't get it to cache. I'm a software developer, working from my home office. I have a fast home network, a slow connection to the internet, and fast cloud servers (e.g. AWS). I'd like to be able to cache content from some specific domains locally, to make it faster. This is simple http static content - no ssl, no auth, no cookies. I've installed Debian packages of nginx 1.14.2 in a home server with a large disk. 
The nginx configuration is as follows:

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log debug;
    gzip on;

    proxy_cache_path /var/cache/nginx/proxy_cache use_temp_path=off inactive=10y
                     levels=1:2 keys_zone=cachekeys:10M max_size=2G;

    server {
        listen 82 default_server;
        listen [::]:82 default_server;

        resolver 192.168.1.43;  # A local box running dnsmasq as a DNS forwarder.

        proxy_cache cachekeys;
        proxy_cache_convert_head off;
        proxy_cache_key "$request_method$scheme$host$request_uri";
        proxy_cache_revalidate on;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering on;

        location / {
            proxy_pass http://$host;
        }
    }
}

Here's what happens when I fetch a test file un-proxied:

$ wget -S http://XXXX.org/index.html
--2021-01-22 16:28:34-- http://XXXX.org/index.html
Resolving XXXX.org (XXXX.org)... 34.255.xx.yy
Connecting to XXXX.org (XXXX.org)|34.255.xx.yy|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: Fri, 22 Jan 2021 16:28:34 GMT
  Server: Apache
  Last-Modified: Sun, 14 Jun 2020 14:53:25 GMT
  Accept-Ranges: bytes
  Content-Length: 9174
  Vary: Accept-Encoding
  ETag: "23d6-5a80c78e6ed8d-gzip"
  X-Frame-Options: sameorigin
  Keep-Alive: timeout=30
  Connection: Keep-Alive
  Content-Type: text/html
Length: 9174 (9.0K) [text/html]
Saving to: 'index.html.4'

index.html.4 100%[===================>] 8.96K --.-KB/s in 0.005s

2021-01-22 16:28:34 (1.78 MB/s) - 'index.html.4' saved [9174/9174]

Now I try to fetch using the proxy (hostname PPPP):

$ export http_proxy=http://PPPP:82
$ wget -S http://XXXX.org/index.html
--2021-01-22 16:31:16-- http://XXXX.org/index.html
Resolving PPPP (PPPP)... fd31:4159:2600:0:21e:6ff:fe36:ec9d, 192.168.1.50
Connecting to PPPP (PPPP)|fd31:4159:2600:0:21e:6ff:fe36:ec9d|:82... connected.
Proxy request sent, awaiting response...
  HTTP/1.1 200 OK
  Server: nginx/1.14.2
  Date: Fri, 22 Jan 2021 16:31:16 GMT
  Content-Type: text/html
  Content-Length: 9174
  Connection: keep-alive
  Last-Modified: Sun, 14 Jun 2020 14:53:25 GMT
  Vary: Accept-Encoding
  ETag: "23d6-5a80c78e6ed8d-gzip"
  X-Frame-Options: sameorigin
  Accept-Ranges: bytes
Length: 9174 (9.0K) [text/html]
Saving to: 'index.html.7'

index.html.7 100%[====================================================>] 8.96K --.-KB/s in 0.005s

2021-01-22 16:31:16 (1.75 MB/s) - 'index.html.7' saved [9174/9174]

So the proxy has successfully retrieved the file - but it has not been cached; there is nothing saved in /var/cache/nginx. Looking in the nginx debug log, I see this:

2021/01/22 16:25:38 [debug] 18695#18695: *18 http cacheable: 0

So my guess is that something about the http request or response, or in the nginx configuration, has caused nginx to decide that this response is not cacheable. Can anyone see what the problem is? A longer extract from the debug log follows. Thanks, Phil.
2021/01/22 16:25:37 [debug] 18695#18695: accept on [::]:82, ready: 0 2021/01/22 16:25:37 [debug] 18695#18695: posix_memalign: 00563640:256 @16 2021/01/22 16:25:37 [debug] 18695#18695: *18 accept: [fd31:4159:2600:0:21e:6ff:fe33:322b]:39882 fd:4 2021/01/22 16:25:37 [debug] 18695#18695: *18 event timer add: 4: 60000:1061794893 2021/01/22 16:25:37 [debug] 18695#18695: *18 reusable connection: 1 2021/01/22 16:25:37 [debug] 18695#18695: *18 epoll add event: fd:4 op:1 ev:80002001 2021/01/22 16:25:37 [debug] 18695#18695: *18 http wait request handler 2021/01/22 16:25:37 [debug] 18695#18695: *18 posix_memalign: 00563770:256 @16 2021/01/22 16:25:37 [debug] 18695#18695: *18 malloc: 00563888:1024 2021/01/22 16:25:37 [debug] 18695#18695: *18 recv: eof:0, avail:1 2021/01/22 16:25:37 [debug] 18695#18695: *18 recv: fd:4 196 of 1024 2021/01/22 16:25:37 [debug] 18695#18695: *18 reusable connection: 0 2021/01/22 16:25:37 [debug] 18695#18695: *18 posix_memalign: 0056CB80:4096 @16 2021/01/22 16:25:37 [debug] 18695#18695: *18 http process request line 2021/01/22 16:25:37 [debug] 18695#18695: *18 http request line: "GET http://XXXX.org/index.html HTTP/1.1" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http uri: "/index.html" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http args: "" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http exten: "html" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http process request header line 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header: "User-Agent: Wget/1.18 (linux-gnu)" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header: "Accept: */*" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header: "Accept-Encoding: identity" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header: "Host: XXXX.org" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header: "Connection: Keep-Alive" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header: "Proxy-Connection: Keep-Alive" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http header done 2021/01/22 
16:25:37 [debug] 18695#18695: *18 event timer del: 4: 1061794893 2021/01/22 16:25:37 [debug] 18695#18695: *18 generic phase: 0 2021/01/22 16:25:37 [debug] 18695#18695: *18 rewrite phase: 1 2021/01/22 16:25:37 [debug] 18695#18695: *18 test location: "/" 2021/01/22 16:25:37 [debug] 18695#18695: *18 using configuration "/" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http cl:-1 max:1048576 2021/01/22 16:25:37 [debug] 18695#18695: *18 rewrite phase: 3 2021/01/22 16:25:37 [debug] 18695#18695: *18 post rewrite phase: 4 2021/01/22 16:25:37 [debug] 18695#18695: *18 generic phase: 5 2021/01/22 16:25:37 [debug] 18695#18695: *18 access phase: 6 2021/01/22 16:25:37 [debug] 18695#18695: *18 access phase: 7 2021/01/22 16:25:37 [debug] 18695#18695: *18 access phase: 8 2021/01/22 16:25:37 [debug] 18695#18695: *18 post access phase: 9 2021/01/22 16:25:37 [debug] 18695#18695: *18 generic phase: 10 2021/01/22 16:25:37 [debug] 18695#18695: *18 generic phase: 11 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: "http://" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "XXXX.org" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http init upstream, client timer: 0 2021/01/22 16:25:37 [debug] 18695#18695: *18 epoll add event: fd:4 op:3 ev:80002005 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "GET" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "http" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "XXXX.org" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "/index.html" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "" 2021/01/22 16:25:37 [debug] 18695#18695: *18 http cache key: "GEThttpXXXX.org/index.html" 2021/01/22 16:25:37 [debug] 18695#18695: *18 add cleanup: 0056DA80 2021/01/22 16:25:37 [debug] 18695#18695: *18 http file cache exists: -5 e:0 2021/01/22 16:25:37 [debug] 18695#18695: *18 cache file: "/var/cache/nginx/proxy_cache/9/2d304c870502df72ddc8aad3e08ab229" 2021/01/22 16:25:37 
[debug] 18695#18695: *18 http upstream cache: -5
2021/01/22 16:25:37 [debug] 18695#18695: *18 posix_memalign: 0059F040:4096 @16
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: "Host"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script var: "XXXX.org"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: "Connection"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: "close"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: ""
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: ""
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: ""
2021/01/22 16:25:37 [debug] 18695#18695: *18 http script copy: ""
2021/01/22 16:25:37 [debug] 18695#18695: *18 http proxy header: "User-Agent: Wget/1.18 (linux-gnu)"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http proxy header: "Accept: */*"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http proxy header: "Accept-Encoding: identity"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http proxy header: "Proxy-Connection: Keep-Alive"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http proxy header:
"GET /index.html HTTP/1.1
Host: XXXX.org
Connection: close
User-Agent: Wget/1.18 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Proxy-Connection: Keep-Alive

"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http cleanup add: 0056DB70
2021/01/22 16:25:37 [debug] 18695#18695: *18 http finalize request: -4, "/index.html?" a:1, c:2
2021/01/22 16:25:37 [debug] 18695#18695: *18 http request count:2 blk:0
2021/01/22 16:25:37 [debug] 18695#18695: *18 http run request: "/index.html?"
2021/01/22 16:25:37 [debug] 18695#18695: *18 http upstream check client, write event:1, "/index.html"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream resolve: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 name was resolved to 34.255.xx.yy
2021/01/22 16:25:38 [debug] 18695#18695: *18 get rr peer, try: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 stream socket 22
2021/01/22 16:25:38 [debug] 18695#18695: *18 epoll add connection: fd:22 ev:80002005
2021/01/22 16:25:38 [debug] 18695#18695: *18 connect to 34.255.xx.yy:80, fd:22 #19
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream connect: -2
2021/01/22 16:25:38 [debug] 18695#18695: *18 posix_memalign: 005634C0:128 @16
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer add: 22: 60000:1061795045
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream send request handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream send request
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream send request body
2021/01/22 16:25:38 [debug] 18695#18695: *18 chain writer buf fl:1 s:172
2021/01/22 16:25:38 [debug] 18695#18695: *18 chain writer in: 0059F240
2021/01/22 16:25:38 [debug] 18695#18695: *18 writev: 172 of 172
2021/01/22 16:25:38 [debug] 18695#18695: *18 chain writer out: 00000000
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer del: 22: 1061795045
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer add: 22: 60000:1061795073
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream process header
2021/01/22 16:25:38 [debug] 18695#18695: *18 malloc: 005A0048:4096
2021/01/22 16:25:38 [debug] 18695#18695: *18 recv: eof:0, avail:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 recv: fd:22 1448 of 3751
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy status 200 "200 OK"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Date: Fri, 22 Jan 2021 16:25:38 GMT"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Server: Apache"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Last-Modified: Sun, 14 Jun 2020 14:53:25 GMT"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Accept-Ranges: bytes"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Content-Length: 9174"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Vary: Accept-Encoding"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "ETag: "23d6-5a80c78e6ed8d-gzip""
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "X-Frame-Options: sameorigin"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Connection: close"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header: "Content-Type: text/html"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy header done
2021/01/22 16:25:38 [debug] 18695#18695: *18 HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Fri, 22 Jan 2021 16:25:38 GMT
Content-Type: text/html
Content-Length: 9174
Connection: keep-alive
Last-Modified: Sun, 14 Jun 2020 14:53:25 GMT
Vary: Accept-Encoding
ETag: "23d6-5a80c78e6ed8d-gzip"
X-Frame-Options: sameorigin
Accept-Ranges: bytes

2021/01/22 16:25:38 [debug] 18695#18695: *18 write new buf t:1 f:0 0059F5E0, pos 0059F5E0, size: 302 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter: l:0 f:0 s:302
2021/01/22 16:25:38 [debug] 18695#18695: *18 http cacheable: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http file cache free, fd: -1
2021/01/22 16:25:38 [debug] 18695#18695: *18 http proxy filter init s:200 h:0 c:0 l:9174
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream process upstream
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe preread: 1157
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A0048, pos 005A02C4, size: 1157 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 9174
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write downstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write busy: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write: out:00000000, f:0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A0048, pos 005A02C4, size: 1157 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 9174
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer: 22, old: 1061795073, new: 1061795105
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream dummy handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream process upstream
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: eof:0, avail:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: 1, last:2303
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe recv chain: 2303
2021/01/22 16:25:38 [debug] 18695#18695: *18 input buf #0
2021/01/22 16:25:38 [debug] 18695#18695: *18 malloc: 005A1050:4096
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: eof:0, avail:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: 1, last:4096
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe recv chain: 593
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf in s:1 t:1 f:0 005A0048, pos 005A02C4, size: 3460 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A1050, pos 005A1050, size: 593 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 5714
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write downstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write busy: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write buf ls:1 005A02C4 3460
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write: out:0059F7F0, f:0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http output filter "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http copy filter: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http postpone filter "/index.html?" 0059F7F0
2021/01/22 16:25:38 [debug] 18695#18695: *18 write old buf t:1 f:0 0059F5E0, pos 0059F5E0, size: 302 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 write new buf t:1 f:0 005A0048, pos 005A02C4, size: 3460 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter: l:0 f:1 s:3762
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter limit 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 writev: 3762 of 3762
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter 00000000
2021/01/22 16:25:38 [debug] 18695#18695: *18 http copy filter: 0 "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write busy: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write: out:00000000, f:0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A1050, pos 005A1050, size: 593 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A0048, pos 005A0048, size: 0 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 5714
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer: 22, old: 1061795073, new: 1061795105
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream dummy handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream process upstream
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: eof:0, avail:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: 2, last:4096
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe recv chain: 2896
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A1050, pos 005A1050, size: 3489 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A0048, pos 005A0048, size: 0 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 5714
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write downstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write busy: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write: out:00000000, f:0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A1050, pos 005A1050, size: 3489 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A0048, pos 005A0048, size: 0 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 5714
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer: 22, old: 1061795073, new: 1061795109
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream dummy handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream request: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream process upstream
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe read upstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: eof:1, avail:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: 2, last:4096
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe recv chain: 2225
2021/01/22 16:25:38 [debug] 18695#18695: *18 input buf #1
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: eof:1, avail:0
2021/01/22 16:25:38 [debug] 18695#18695: *18 readv: 1, last:2478
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe recv chain: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf in s:1 t:1 f:0 005A1050, pos 005A1050, size: 4096 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe buf free s:0 t:1 f:0 005A0048, pos 005A0048, size: 1618 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe length: 1618
2021/01/22 16:25:38 [debug] 18695#18695: *18 input buf #2
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write downstream: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write downstream flush in
2021/01/22 16:25:38 [debug] 18695#18695: *18 http output filter "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http copy filter: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http postpone filter "/index.html?" 0059F8E0
2021/01/22 16:25:38 [debug] 18695#18695: *18 write new buf t:1 f:0 005A1050, pos 005A1050, size: 4096 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 write new buf t:1 f:0 005A0048, pos 005A0048, size: 1618 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter: l:0 f:0 s:5714
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter limit 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 writev: 5714 of 5714
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter 00000000
2021/01/22 16:25:38 [debug] 18695#18695: *18 http copy filter: 0 "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 pipe write downstream done
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer: 22, old: 1061795073, new: 1061795113
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream exit: 00000000
2021/01/22 16:25:38 [debug] 18695#18695: *18 finalize http upstream request: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 finalize http proxy request
2021/01/22 16:25:38 [debug] 18695#18695: *18 free rr peer 1 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 close http upstream connection: 22
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 005634C0, unused: 88
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer del: 22: 1061795073
2021/01/22 16:25:38 [debug] 18695#18695: *18 reusable connection: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http upstream temp fd: -1
2021/01/22 16:25:38 [debug] 18695#18695: *18 http output filter "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http copy filter: "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http postpone filter "/index.html?" BE8617F4
2021/01/22 16:25:38 [debug] 18695#18695: *18 write new buf t:0 f:0 00000000, pos 00000000, size: 0 file: 0, size: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http write filter: l:1 f:0 s:0
2021/01/22 16:25:38 [debug] 18695#18695: *18 http copy filter: 0 "/index.html?"
2021/01/22 16:25:38 [debug] 18695#18695: *18 http finalize request: 0, "/index.html?" a:1, c:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 set http keepalive handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 http close request
2021/01/22 16:25:38 [debug] 18695#18695: *18 http log handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 run cleanup: 0056DA80
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 005A1050
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 005A0048
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 0056CB80, unused: 4
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 0059F040, unused: 1545
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 00563888
2021/01/22 16:25:38 [debug] 18695#18695: *18 hc free: 00000000
2021/01/22 16:25:38 [debug] 18695#18695: *18 hc busy: 00000000 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 tcp_nodelay
2021/01/22 16:25:38 [debug] 18695#18695: *18 reusable connection: 1
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer add: 4: 65000:1061800113
2021/01/22 16:25:38 [debug] 18695#18695: *18 http keepalive handler
2021/01/22 16:25:38 [debug] 18695#18695: *18 malloc: 00563888:1024
2021/01/22 16:25:38 [debug] 18695#18695: *18 recv: eof:1, avail:1
2021/01/22 16:25:38 [debug] 18695#18695: *18 recv: fd:4 0 of 1024
2021/01/22 16:25:38 [info] 18695#18695: *18 client fd31:4159:2600:0:21e:6ff:fe33:322b closed keepalive connection
2021/01/22 16:25:38 [debug] 18695#18695: *18 close http connection: 4
2021/01/22 16:25:38 [debug] 18695#18695: *18 event timer del: 4: 1061800113
2021/01/22 16:25:38 [debug] 18695#18695: *18 reusable connection: 0
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 00563888
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 00563640, unused: 24
2021/01/22 16:25:38 [debug] 18695#18695: *18 free: 00563770, unused: 184

From mdounin at mdounin.ru Sat Jan 23 15:14:47 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 23 Jan 2021 18:14:47 +0300
Subject: Can't get a simple proxy to cache anything
In-Reply-To: <1611334027966@dmwebmail.dmwebmail.chezphil.org>
References:
<1611334027966@dmwebmail.dmwebmail.chezphil.org> Message-ID: <20210123151447.GB1147@mdounin.ru> Hello! On Fri, Jan 22, 2021 at 04:47:07PM +0000, Phil Endecott wrote: [...] > So the proxy has successfully retrieved the file - but it has not been cached; > there is nothing saved in /var/cache/nginx. > > Looking in the nginx debug log, I see this: > > 2021/01/22 16:25:38 [debug] 18695#18695: *18 http cacheable: 0 > > So my guess is that something about the http request or response, or > in the nginx configuration, has caused nginx to decide that this response > is not cacheable. > > Can anyone see what the problem is? You haven't configured any proxy_cache_valid directives (see http://nginx.org/r/proxy_cache_valid for details), and the response doesn't have any cache validity headers, such as "Expires" or "Cache-Control: max-age=...". -- Maxim Dounin http://mdounin.ru/ From nginx_list at chezphil.org Sun Jan 24 12:56:35 2021 From: nginx_list at chezphil.org (Phil Endecott) Date: Sun, 24 Jan 2021 12:56:35 +0000 Subject: Can't get a simple proxy to cache anything In-Reply-To: <20210123151447.GB1147@mdounin.ru> References: <20210123151447.GB1147@mdounin.ru> Message-ID: <1611492995956@dmwebmail.dmwebmail.chezphil.org> Maxim Dounin wrote: > You haven't configured any proxy_cache_valid directives (see > http://nginx.org/r/proxy_cache_valid for details), and the > response doesn't have any cache validity headers, such as > "Expires" or "Cache-Control: max-age=...". Thanks Maxim! It now seems to work. I guess I was expecting the proxy_cache_path inactive=... time to determine how long things were cached for. I think I have a better understanding now. Let me check if I have this right: The upstream server doesn't specify any Cache-Control or Expires headers. It does supply an ETag and Last-Modified. I'd like to cache these files for a long time (say 18 months), but revalidate using the ETag (say after 5 minutes). So I have: proxy_cache_path ... 
inactive=18M;
proxy_cache_revalidate on;
proxy_cache_valid 200 5m;  # Beware M = Month, m = minute.

Is that right?

I have some other questions:

1. What proxy_cache_key should I be using? I am unsure of the difference between e.g. host and proxy_host, and uri and request_uri. I currently have

proxy_cache_convert_head off;
proxy_cache_key "$request_method$scheme$host$request_uri";

Do I need "$is_args$args" ?

2. Is there any way to indicate the cache status in the access log? I.e. hit/miss/revalidated/uncachable etc.

Many thanks, Phil.

From nginx-forum at forum.nginx.org Mon Jan 25 06:08:13 2021
From: nginx-forum at forum.nginx.org (dahekik)
Date: Mon, 25 Jan 2021 01:08:13 -0500
Subject: nginx-quic http3 reverse proxy problem
In-Reply-To: <9635A345-CD65-4FEA-B559-5B2FE9400927@nginx.com>
References: <9635A345-CD65-4FEA-B559-5B2FE9400927@nginx.com>
Message-ID: 

hey, maybe this "what is a reverse proxy" guide may help:
https://www.namecheap.com/guru-guides/what-is-reverse-proxy-server/

but you can try to build NGINX with HTTP/3 support enabled:

% ./configure \
    --prefix=$PWD \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_v3_module \
    --with-openssl=../quiche/deps/boringssl \
    --with-quiche=../quiche
% make

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289999,290557#msg-290557

From bob at transentia.com.au Mon Jan 25 08:35:49 2021
From: bob at transentia.com.au (Bob Brown)
Date: Mon, 25 Jan 2021 08:35:49 +0000
Subject: How to use NGINX as LDAP -> LDAPS forward proxy
Message-ID: 

I have a collection of smallish internal-facing apps sitting on a server.

I have been asked to 'secure' these apps.

The apps currently:
+ provide HTTP service to clients
+ make use of a number of internal SOAP services
+ use LDAP (Active Directory) for user authentication

The various apps are written in Java, Groovy and Python.
Rather than hack each app, I would like to take a more system-based approach and completely interpose nginx between them and the rest of the world: I would like to have the apps ONLY talk to nginx on localhost and have nginx stand in for the apps. All (certificate) management will then be centralised. I assume that nginx will be more efficient at handling SSL/TLS as well...

I believe that I can use nginx (...there seem to be lots of example materials) to handle:

* reverse proxy https(from world) -> http(to localhost) for client access
* forward proxy SOAP(over http, from localhost) -> SOAP(over https, to world) with mutual authentication

I am unsure of the LDAP->LDAPS aspect.

Is this possible? Are there any HOWTO documents/pages/blogs/... detailing this? I have seen very few examples of how this might happen.

I tried to replicate: https://jackiechen.blog/2019/01/24/nginx-sample-config-of-http-and-ldaps-reverse-proxy/

This gave me errors about ssl_certificate not being usable at the specific location in the config file. I assume newer versions of nginx use a slightly different config file format?

Suggestions/thoughts gratefully received.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From osa at freebsd.org.ru Mon Jan 25 14:18:26 2021
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Mon, 25 Jan 2021 17:18:26 +0300
Subject: How to use NGINX as LDAP -> LDAPS forward proxy
In-Reply-To: 
References: 
Message-ID: 

Hi Bob,

hope you're doing well these days.

On Mon, Jan 25, 2021 at 08:35:49AM +0000, Bob Brown wrote:
> I have a collection of smallish internal-facing apps sitting on a server.
>
> I have been asked to 'secure' these apps.
>
> The apps currently:
> + provide HTTP service to clients
> + make use of a number of internal SOAP services
> + use LDAP (Active Directory) for user authentication
>
> The various apps are written in Java, Groovy and Python.
>
> Rather than hack each app, I would like to take a more system-based approach
> and completely interpose nginx between them and the rest of the world: I
> would like to have the apps ONLY talk to nginx on localhost and have nginx
> stand in for the apps. All (certificate) management will then be centralised.
> I assume that nginx will be more efficient at handling SSL/TLS as well...
>
> I believe that I can use nginx (...there seem lots of example materials) to handle:
>
> * reverse proxy https(from world) -> http(to localhost) for client access
> * forward proxy SOAP(over http, from localhost) -> SOAP(over https, to world)
> with mutual authentication
>
> I am unsure of the LDAP->LDAPS aspect.
>
> Is this possible?

Yes, it's possible.

> Are there any HOWTO documents/pages/blogs/... detailing this?

nginx has the ngx_http_auth_request_module, and that's the recommended way to work with authentication requests.

I'd recommend taking a look at an OSS solution, developed inside NGINX, for integration with an LDAP service. Please take a look:
https://github.com/nginxinc/nginx-ldap-auth

> I have seen very few examples of how this might happen.
> I tried to replicate: https://jackiechen.blog/2019/01/24/nginx-sample-config-of-http-and-ldaps-reverse-proxy/
>
> This gave me errors about ssl_certificate not being usable at the specific
> location in the config file. I assume new versions of nginx use a slightly
> different config file format?

All versions of nginx use the same configuration file format.

> Suggestions/thoughts gratefully received.

--
Sergey Osokin

From mdounin at mdounin.ru Mon Jan 25 16:02:42 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 25 Jan 2021 19:02:42 +0300
Subject: Can't get a simple proxy to cache anything
In-Reply-To: <1611492995956@dmwebmail.dmwebmail.chezphil.org>
References: <20210123151447.GB1147@mdounin.ru> <1611492995956@dmwebmail.dmwebmail.chezphil.org>
Message-ID: <20210125160242.GF1147@mdounin.ru>

Hello!
On Sun, Jan 24, 2021 at 12:56:35PM +0000, Phil Endecott wrote: > Maxim Dounin wrote: > > You haven't configured any proxy_cache_valid directives (see > > http://nginx.org/r/proxy_cache_valid for details), and the > > response doesn't have any cache validity headers, such as > > "Expires" or "Cache-Control: max-age=...". > > Thanks Maxim! It now seems to work. > > I guess I was expecting the proxy_cache_path inactive=... time > to determine how long things were cached for. I think I have a > better understanding now. Let me check if I have this right: > > The upstream server doesn't specify any Cache-Control or Expires > headers. It does supply an ETag and Last-Modified. I'd like to > cache these files for a long time (say 18 months), but revalidate > using the ETag (say after 5 minutes). So I have: > > proxy_cache_path ... inactive=18M; > proxy_cache_revalidate on; > proxy_cache_valid 200 5m; # Beware M = Month, m = minute. > > Is that right? The "inactive=" parameter controls how long cache items are stored if they aren't accessed at all. It is basically a way to control the total size of your cache. Usually 18 months is way too long for this time. > I have some other questions: > > 1. What proxy_cache_key should I be using? I am unsure of the difference > between e.g. host and proxy_host, and uri and request_uri. I currently > have > > proxy_cache_convert_head off; > proxy_cache_key "$request_method$scheme$host$request_uri"; What cache key to use depends on your particular use case. The one you are using is good enough as long as you use simple proxying. > Do I need "$is_args$args" ? No, request arguments are already included in the $request_uri variable (see http://nginx.org/r/$request_uri). > 2. Is there any way to indicate the cache status in the access log? I.e. > hit/miss/revalidated/uncachable etc. The cache status is available in the $upstream_cache_status variable (http://nginx.org/r/$upstream_cache_status). 
If you want it in the logs, you can do so by configuring an appropriate log_format (see http://nginx.org/r/log_format).

--
Maxim Dounin
http://mdounin.ru/

From nginx_list at chezphil.org Mon Jan 25 16:26:49 2021
From: nginx_list at chezphil.org (Phil Endecott)
Date: Mon, 25 Jan 2021 16:26:49 +0000
Subject: proxy_cache_valid depending on $host
Message-ID: <1611592009842@dmwebmail.dmwebmail.chezphil.org>

Dear Experts,

I wanted to write this:

proxy_cache_valid 200 5m;
if ($host ~ foo) {
    proxy_cache_valid 200 30d;
}

but proxy_cache_valid is not allowed in "if" blocks. Is there some work-around to have different cache validity times for different hosts, in a caching proxy?

Thanks, Phil.

From mdounin at mdounin.ru Mon Jan 25 16:36:37 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 25 Jan 2021 19:36:37 +0300
Subject: Why does the nginx.org main site not supporting TLS v1.3?
In-Reply-To: 
References: <3sKUHtYAkB6vic9kPIn0KL3Lx95zSLH1ENcxBJPZ3NzuMHrOaysqXzZ51mea2dGSrcglw-KUXemjwJT3PVS4a4hUQNFleOWNG88Sk5nIZMc=@protonmail.ch> <43ba4b1f-66b1-c4e2-cfa8-b69f26590e73@thomas-ward.net> <243e8cae-91be-e286-1dd6-464faea6e0ef@thomas-ward.net>
Message-ID: <20210125163637.GG1147@mdounin.ru>

Hello!

On Fri, Jan 22, 2021 at 10:19:55AM +0000, David Hu wrote:

> OK. Thank you. But what about the HTTP/1.1 and HTTP/2 problem?
> As I mentioned before, I negotiated with the server for H2 in
> the early ALPN. However the server only accepts HTTP/1.1 and why
> is that? My cURL has explicitly specified
> --http2-prior-knowledge but it still does not work. It still
> connects via HTTP/1.1.

The answer is quite simple: the server only accepts HTTP/1.x. That's quite normal considering that HTTP/2 introduces quite a few additional attack vectors, while the nginx.org site contains only a few resources per page, so HTTP/2 has no benefits for the site.
(Further, since the site doesn't use SSL by default and rather has it available for those who want to use SSL for some reason, using HTTP/2 is essentially not possible by default.)

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Mon Jan 25 16:39:01 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 25 Jan 2021 19:39:01 +0300
Subject: proxy_cache_valid depending on $host
In-Reply-To: <1611592009842@dmwebmail.dmwebmail.chezphil.org>
References: <1611592009842@dmwebmail.dmwebmail.chezphil.org>
Message-ID: <20210125163901.GH1147@mdounin.ru>

Hello!

On Mon, Jan 25, 2021 at 04:26:49PM +0000, Phil Endecott wrote:

> Dear Experts,
>
> I wanted to write this:
>
> proxy_cache_valid 200 5m;
> if ($host ~ foo) {
>     proxy_cache_valid 200 30d;
> }
>
> but proxy_cache_valid is not allowed in "if" blocks.
> Is there some work-around to have different cache validity times
> for different hosts, in a caching proxy?

Using different server{} blocks with different server_name's is the way to go, see http://nginx.org/r/server_name.

--
Maxim Dounin
http://mdounin.ru/

From nginx_list at chezphil.org Tue Jan 26 17:20:42 2021
From: nginx_list at chezphil.org (Phil Endecott)
Date: Tue, 26 Jan 2021 17:20:42 +0000
Subject: proxy_cache_valid depending on $host
In-Reply-To: <20210125163901.GH1147@mdounin.ru>
References: <20210125163901.GH1147@mdounin.ru>
Message-ID: <1611681642282@dmwebmail.dmwebmail.chezphil.org>

Maxim Dounin wrote:
> On Mon, Jan 25, 2021 at 04:26:49PM +0000, Phil Endecott wrote:
>> Is there some work-around to have different cache validity times
>> for different hosts, in a caching proxy?
>
> Using different server{} blocks with different server_name's is the
> way to go, see http://nginx.org/r/server_name.

Thanks again, that is the obvious solution and it does seem to work.
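[For illustration, the server_name-based approach discussed in this thread can be sketched as two server{} blocks. The hostnames, cache zone name, and upstream address below are hypothetical placeholders, not taken from the thread:]

```nginx
# Hedged sketch: per-host cache validity via separate server{} blocks.
# foo.example.com, bar.example.com, the "cache" zone and the upstream
# address are invented for illustration.
proxy_cache_path /var/cache/nginx keys_zone=cache:10m inactive=30d;

server {
    listen 80;
    server_name foo.example.com;            # hosts matching "foo"
    location / {
        proxy_pass http://192.0.2.10:8080;
        proxy_cache cache;
        proxy_cache_valid 200 30d;          # long validity here
    }
}

server {
    listen 80;
    server_name bar.example.com;            # other hosts
    location / {
        proxy_pass http://192.0.2.10:8080;
        proxy_cache cache;
        proxy_cache_valid 200 5m;           # short validity here
    }
}
```

[nginx selects the server{} block by matching the requested host against server_name, so each block can carry its own proxy_cache_valid.]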
I was imagining that the server_name would be matched only against the Host: header in a regular HTTP request, not the hostname in the request line in a proxy request - but it seems to do what I want.

Regards, Phil.

From nginx-forum at forum.nginx.org Tue Jan 26 20:01:33 2021
From: nginx-forum at forum.nginx.org (sanflores)
Date: Tue, 26 Jan 2021 15:01:33 -0500
Subject: upstream sent no valid HTTP/1.0 header while reading response header from upstream,
Message-ID: <0565a26639de55d940ddc9fffea187b0.NginxMailingListEnglish@forum.nginx.org>

I'm using tomcat as a backend (using websocket), and it is hosted on AWS behind a balancer. After some time (according to the timeouts) I start to get the error that the upstream sent no valid HTTP/1.0 header, which is strange, because I'm only using HTTP/1.1. This is my configuration:

------------
# This file was overwritten during deployment by .ebextensions/customs.config
user nginx;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 133650;

events {
    worker_connections 10240;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    client_max_body_size 0; # disable any limits to avoid HTTP 413

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for" JSESSIONID="$cookie_JSESSIONID"';

    log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for" '
        '"$host" sn="$server_name" '
        'rt=$request_time '
        'ua="$upstream_addr" us="$upstream_status" '
        'ut="$upstream_response_time" ul="$upstream_response_length" '
        'cs=$upstream_cache_status JSESSIONID="$cookie_JSESSIONID"';

    include conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen 80 default_server;
        access_log /var/log/nginx/access.log main_ext;
        client_header_timeout 60;
        client_body_timeout 60;
        keepalive_timeout 60;
        gzip on;
        gzip_comp_level 4;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        # Include the Elastic Beanstalk generated locations (We are commenting this in order to add some CORS configuration)
        # include conf.d/elasticbeanstalk/*.conf;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            proxy_buffers 16 16k;
            proxy_buffer_size 16k;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~* \.(jpe?g|png|gif|ico|css|woff)$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_http_version 1.1;
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        }

        location /nginx_status {
            stub_status on;
            allow 127.0.0.1;
            deny all;
        }
    }
}
-----------

There is nothing in conf.d/*.conf

I do have a lot of connections in Writing, but I'm guessing that's because of the websockets.

# curl localhost/nginx_status; lsof -n -u nginx | awk '{print $5}' | sort | uniq -c | sort -nr
Active connections: 1029
server accepts handled requests
 62963 62963 342949
Reading: 0 Writing: 1016 Waiting: 13
   2053 IPv4
    338 REG
     81 unix
     28 0000
     24 CHR
     20 DIR
      2 FIFO
      1 TYPE

If I set the timeout on the balancer and proxy_read_timeout to 4000 seconds (the max available), I'm able to live without errors for an hour until I need to restart the tomcat server.
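[As a hedged aside on the timeout angle: nginx closes a proxied connection after proxy_read_timeout / proxy_send_timeout (60s by default) if no data flows, which bites long-lived websocket tunnels first. A minimal sketch of raising them on the websocket path only - the /ws/ path and the values are illustrative assumptions, not taken from the configuration above:]

```nginx
# Sketch only: long read/send timeouts scoped to a hypothetical
# websocket location, so ordinary requests keep the 60s defaults.
location /ws/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;  # from the map above
    proxy_read_timeout 3600s;   # idle tunnel kept open for up to 1h
    proxy_send_timeout 3600s;
}
```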
We haven't made any configuration changes for the last year, but I'm guessing I'm hitting some limit; I can't find what it is.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290573,290573#msg-290573

From grzegorz.czesnik at hotmail.com Wed Jan 27 10:27:26 2021
From: grzegorz.czesnik at hotmail.com (=?iso-8859-2?Q?Grzegorz_Cze=B6nik?=)
Date: Wed, 27 Jan 2021 10:27:26 +0000
Subject: 2 nginx servers
Message-ID: 

Hi,

I am thinking of a solution: I want to set up two nginx servers: the first as a reverse proxy that will direct - for starters - to the second nginx server, which will hold two simple static pages as a web server. Will such a solution be practical? What do you think?

Grzegorz
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From r at roze.lv Wed Jan 27 11:49:22 2021
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 27 Jan 2021 13:49:22 +0200
Subject: 2 nginx servers
In-Reply-To: 
References: 
Message-ID: <002701d6f4a2$74165bf0$5c4313d0$@roze.lv>

> I want to set up two nginx servers: the first as a reverse proxy that will direct - for starters - to the second nginx server, which will hold two simple static pages as a web server.

It's fully possible to have such a setup.

> Will such a solution be practical? What do you think?

Without knowing more details it's hard to tell. What is the purpose of the proxy server? Why does a static page need to be proxied? Etc.

For static content nginx is quite effective and highly performant, so usually there is no need for additional layers.
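[For reference, the kind of two-tier setup discussed in this thread can be sketched as below; the hostname, addresses, and paths are hypothetical placeholders:]

```nginx
# Front tier (first server): plain reverse proxy.
server {
    listen 80;
    server_name www.example.com;           # hypothetical
    location / {
        proxy_pass http://192.0.2.20:8080; # the second nginx server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# Back tier (second server, in its own nginx.conf): static file server.
server {
    listen 8080;
    root /var/www/html;                    # the static pages live here
    index index.html;
}
```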
rr

From grzegorz.czesnik at hotmail.com Wed Jan 27 12:03:51 2021
From: grzegorz.czesnik at hotmail.com (=?iso-8859-2?Q?Grzegorz_Cze=B6nik?=)
Date: Wed, 27 Jan 2021 12:03:51 +0000
Subject: 2 nginx servers
In-Reply-To: <002701d6f4a2$74165bf0$5c4313d0$@roze.lv>
References: <002701d6f4a2$74165bf0$5c4313d0$@roze.lv>
Message-ID: 

I forgot to write about it, but I want to install WordPress on the second server in the future - hence this solution came to mind :)

-----Original Message-----
From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Reinis Rozitis
Sent: Wednesday, January 27, 2021 12:49 PM
To: nginx at nginx.org
Subject: RE: 2 nginx servers

> I want to set up two nginx servers: the first as a reverse proxy that
> will direct - for starters - to the second nginx server, which will hold two simple static pages as a web server.

It's fully possible to have such a setup.

> Will such a solution be practical? What do you think?

Without knowing more details it's hard to tell. What is the purpose of the proxy server? Why a static page needs to be proxed? Etc

For static content nginx is quite effective and highly performant so usually there is no need for additional layers.

rr
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From ryanbgould at gmail.com Wed Jan 27 16:55:24 2021
From: ryanbgould at gmail.com (Ryan Gould)
Date: Wed, 27 Jan 2021 08:55:24 -0800
Subject: HTTP/3 with Firefox and forms
Message-ID: 

hello all you amazing developers,

i check https://hg.nginx.org/nginx-quic every day for new updates being posted.
on monday (Jan 25 2021) i noticed these five new updates: https://hg.nginx.org/nginx-quic/rev/6422455c92b4 https://hg.nginx.org/nginx-quic/rev/916a2e1d6617 https://hg.nginx.org/nginx-quic/rev/cb8185bd0507 https://hg.nginx.org/nginx-quic/rev/58acdba9b3b2 https://hg.nginx.org/nginx-quic/rev/e1eb7f4ca9f1 the latest build seems to have a problem with submitting forms and the latest production and developer versions of Firefox. i am not having the same problem with Edge or Chrome. my backend is PHP 7.3.26 on a Debian 10.7. it doesn't actually do any POSTing in Firefox. php is not getting any data at all. these forms are running code that's been untouched for five years or so. reverting to my Jan 11 2021 build of nginx resolves the problem for forms and Firefox. this is probably a problem with Mozilla, but if you have any fixes... thank you for your incredible work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Wed Jan 27 17:25:05 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 27 Jan 2021 20:25:05 +0300 Subject: HTTP/3 with Firefox and forms In-Reply-To: References: Message-ID: <64E274C9-A403-4F5C-AA02-30FBA2FEA49D@nginx.com> Hi Ryan, Thanks for reporting this. Do you observe any errors in the nginx error log? -- Roman Arutyunyan arut at nginx.com > On 27 Jan 2021, at 19:55, Ryan Gould wrote: > > hello all you amazing developers, > > i check https://hg.nginx.org/nginx-quic every day for new updates being > posted. on monday (Jan 25 2021) i noticed these five new updates: > > https://hg.nginx.org/nginx-quic/rev/6422455c92b4 > https://hg.nginx.org/nginx-quic/rev/916a2e1d6617 > https://hg.nginx.org/nginx-quic/rev/cb8185bd0507 > https://hg.nginx.org/nginx-quic/rev/58acdba9b3b2 > https://hg.nginx.org/nginx-quic/rev/e1eb7f4ca9f1 > > the latest build seems to have a problem with submitting forms and the > latest production and developer versions of Firefox. i am not having > the same problem with Edge or Chrome.
> > my backend is PHP 7.3.26 on a Debian 10.7. it doesn't actually do any > POSTing in Firefox. php is not getting any data at all. these forms > are running code that's been untouched for five years or so. > > reverting to my Jan 11 2021 build of nginx resolves the problem for > forms and Firefox. > > this is probably a problem with Mozilla, but if you have any fixes... > > thank you for your incredible work. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ffjr at hotmail.com Thu Jan 28 04:06:03 2021 From: ffjr at hotmail.com (Federico Felman) Date: Thu, 28 Jan 2021 04:06:03 +0000 Subject: Send request and response via POST Message-ID: Hello guys, I've been assigned to send the request and response to another server via POST. I was able to compile the information needed in log_by_lua_block, but I wasn't able to send it since resty-http says it is disabled in that context. It would be best if it was done without anything external, since I don't know much about the server on which it will be deployed. I will appreciate all the help, I ran out of ideas. Thank you!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From wshaya at osisoft.com Thu Jan 28 18:10:06 2021 From: wshaya at osisoft.com (William Shaya) Date: Thu, 28 Jan 2021 18:10:06 +0000 Subject: Nginx Ingress Controller with Prometheus Metrics Issue Message-ID: Hi, I have a Kubernetes cluster and am using the nginx ingress controller. I have followed the instructions at: https://docs.nginx.com/nginx-ingress-controller/logging-and-monitoring/prometheus/ I can see these metrics in Prometheus, however, I DO NOT see: Workqueue metrics. Note: the workqueue is a queue used by the Ingress Controller to process changes to the relevant resources in the cluster like Ingress resources.
The Ingress Controller uses only one queue. The metrics for that queue will have the label name="taskQueue" * workqueue_depth. Current depth of the workqueue. * workqueue_queue_duration_seconds. How long in seconds an item stays in the workqueue before being requested. * workqueue_work_duration_seconds. How long in seconds processing an item from the workqueue takes. Any help in determining what the issue is here would be greatly appreciated. I basically am interested in analyzing the time spent for a request in the ingress controller. Thanks Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 28 19:35:43 2021 From: nginx-forum at forum.nginx.org (fdjohnson77) Date: Thu, 28 Jan 2021 14:35:43 -0500 Subject: Need Help with NGINX and RTMP set up Message-ID: I am new to this....I am using Ubuntu (latest version, 20.04), I have installed Nginx (latest version), all appears to be working when I check the config and syntax. I am trying to stream live to church online platform using vMix and/or OBS. vMix and OBS both appear to be communicating with the rtmp server, but when I go to my church.online platform and input the video embed code nothing appears in the preview window. Is my nginx config correct?
Nginx.conf

worker_processes auto;
events {
    worker_connections 1024;
}

## HLS server streaming
rtmp {
    server {
        listen 1935; # Listen on standard RTMP port
        chunk_size 4000;

        application live {
            live on;
            deny play all;
            push rtmp://localhost/show;
            on_publish http://localhost:3001/auth;
            on_publish_done http://localhost:3001/done;
        }

        application show {
            live on;
            # Turn on HLS
            hls on;
            hls_nested on;
            hls_fragment_naming system;
            hls_path /Users/toan/Sites/mnt/hls/;
            hls_fragment 3;
            hls_playlist_length 60;
            # disable consuming the stream from nginx as rtmp
            deny play all;
        }
    }
}
# end hls server stream

http {
    sendfile off;
    tcp_nopush on;
    #aio on;
    directio 512;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name localhost;
        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root html;
            index index.html index.htm;
        }
    }

    server {
        listen 8080;

        location /hls {
            # Disable cache
            add_header Cache-Control no-cache;
            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length';
            # allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /Users/toan/Sites/mnt/;
        }
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290598,290598#msg-290598 From nginx-forum at forum.nginx.org Thu Jan 28 20:01:46 2021 From: nginx-forum at forum.nginx.org (dlouwers) Date: Thu, 28 Jan 2021 15:01:46 -0500 Subject: Auth_request and multiple cookies from the authentication server In-Reply-To: <20201019093113.GS30691@daoine.org> References: <20201019093113.GS30691@daoine.org> Message-ID: Hello Francis, Unfortunately combining the headers to one Set-Cookie header doesn't work either. It seems that nginx parses the values and just drops all after the first.
Therefore it looks like there is no workaround, and we will need to fix it or indeed switch to something else. Best, Dirk Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289487,290599#msg-290599 From o.garrett at f5.com Fri Jan 29 14:52:18 2021 From: o.garrett at f5.com (Owen Garrett) Date: Fri, 29 Jan 2021 14:52:18 +0000 Subject: Nginx Ingress Controller with Prometheus Metrics Issue In-Reply-To: References: Message-ID: Hi Bill, The workqueue_ metrics relate to the processing of configuration updates. Config updates can be triggered by an update to an Ingress Resource, Endpoints in a monitored Service, and other events. The purpose of these metrics is to observe the backlog to determine how quickly the IC can respond to changes. They probably do not provide the data you want ("I basically am interested in analyzing the time spent for a request in the ingress controller"). When you use the Ingress Controller with NGINX open source or NGINX Plus, you can get this data from the log files: see https://github.com/nginxinc/kubernetes-ingress/tree/v1.10.0/examples/custom-log-format and the NGINX variable $request_time http://nginx.org/en/docs/http/ngx_http_log_module.html#var_request_time When you use it with NGINX Plus, you additionally get aggregate request time data in the Prometheus data. Feel free to raise additional questions about the Ingress Controller on the GitHub page: https://github.com/nginxinc/kubernetes-ingress/issues Owen From: nginx on behalf of William Shaya Reply to: "nginx at nginx.org" Date: Thursday, 28 January 2021 at 18:10 To: "nginx at nginx.org" Subject: Nginx Ingress Controller with Prometheus Metrics Issue EXTERNAL MAIL: nginx-bounces at nginx.org Hi, I have a Kubernetes cluster and am using the nginx ingress controller. I have followed the instructions at: https://docs.nginx.com/nginx-ingress-controller/logging-and-monitoring/prometheus/ I can see these metrics in Prometheus, however, I DO NOT see: Workqueue metrics.
Note: the workqueue is a queue used by the Ingress Controller to process changes to the relevant resources in the cluster like Ingress resources. The Ingress Controller uses only one queue. The metrics for that queue will have the label name="taskQueue" * workqueue_depth. Current depth of the workqueue. * workqueue_queue_duration_seconds. How long in seconds an item stays in the workqueue before being requested. * workqueue_work_duration_seconds. How long in seconds processing an item from the workqueue takes. Any help in determining what the issue is here would be greatly appreciated. I basically am interested in analyzing the time spent for a request in the ingress controller. Thanks Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Jan 29 20:26:36 2021 From: nginx-forum at forum.nginx.org (loopback_proxy) Date: Fri, 29 Jan 2021 15:26:36 -0500 Subject: Stream response from upstream to all proxy_cache_lock'ed client Message-ID: <19f60755a5dc973d351e223cff6330c5.NginxMailingListEnglish@forum.nginx.org> Hi, I was looking into using the proxy_cache_lock mechanism to collapse upstream requests and reduce traffic. It works great right out of the box, but one issue I found was that, if there are n client requests proxy_cache_locked, only one of those clients gets the response as soon as the upstream sends the response to Nginx; the rest of the n-1 clients wait till the response is fully flushed to the cache file by Nginx, after which the locked requests serve the response as HITs from the cached file. Please correct me if this understanding of mine is incorrect. If it is correct, I have two questions. 1. Are there any efforts to support streaming the response back to all the proxy_cache_lock'ed clients simultaneously? If this is not prioritized, I would like to know the reasoning behind it, so I can make an informed decision on how I should proceed next. 2.
Why was 500ms chosen as the wait time value for the ngx_file_cache_lock_wait event? Making locked requests wait half a second would drive up the TTFB for live streaming customers. Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,290604,290604#msg-290604 From ryanbgould at gmail.com Sat Jan 30 15:00:37 2021 From: ryanbgould at gmail.com (Ryan Gould) Date: Sat, 30 Jan 2021 07:00:37 -0800 Subject: HTTP/3 with Firefox and forms Message-ID: no, i am not seeing a thing in the logs. i looked in the php error logs and the nginx error logs, and even the non-error logs. the page returns immediately after the POST is attempted. it is not spending any time debating or processing or round-tripping. i can also verify the most recently added patch did not fix the problem (not that it was supposed to): https://hg.nginx.org/nginx-quic/rev/dbe33ef9cd9a Date: Wed, 27 Jan 2021 20:25:05 +0300 From: Roman Arutyunyan To: nginx at nginx.org Subject: Re: HTTP/3 with Firefox and forms Message-ID: <64E274C9-A403-4F5C-AA02-30FBA2FEA49D at nginx.com> Content-Type: text/plain; charset="utf-8" Hi Ryan, Thanks for reporting this. Do you observe any errors in the nginx error log? -- Roman Arutyunyan arut at nginx.com > On 27 Jan 2021, at 19:55, Ryan Gould wrote: > > hello all you amazing developers, > > i check https://hg.nginx.org/nginx-quic every day for new updates being > posted.
on monday (Jan 25 2021) i noticed these five new updates: > > https://hg.nginx.org/nginx-quic/rev/6422455c92b4 > https://hg.nginx.org/nginx-quic/rev/916a2e1d6617 > https://hg.nginx.org/nginx-quic/rev/cb8185bd0507 > https://hg.nginx.org/nginx-quic/rev/58acdba9b3b2 > https://hg.nginx.org/nginx-quic/rev/e1eb7f4ca9f1 > > the latest build seems to have a problem with submitting forms and the > latest production and developer versions of Firefox. i am not having > the same problem with Edge or Chrome. > > my backend is PHP 7.3.26 on a Debian 10.7. it doesn't actually do any > POSTing in Firefox. php is not getting any data at all. these forms > are running code that's been untouched for five years or so. > > reverting to my Jan 11 2021 build of nginx resolves the problem for > forms and Firefox. > > this is probably a problem with Mozilla, but if you have any fixes... > > thank you for your incredible work. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Sat Jan 30 15:40:39 2021 From: arut at nginx.com (Roman Arutyunyan) Date: Sat, 30 Jan 2021 18:40:39 +0300 Subject: HTTP/3 with Firefox and forms In-Reply-To: References: Message-ID: Hi Ryan, We have found a problem with POSTing request body and already have a patch that changes body parsing and fixes the issue. It will be committed after internal review. Hopefully it's the same issue. Until then you can just check out older code. --
Roman Arutyunyan arut at nginx.com > On 30 Jan 2021, at 18:00, Ryan Gould wrote: > > no, i am not seeing a thing in the logs. i looked > in the php error logs and the nginx error logs, and > even the non-error logs. the page returns immediately > after the POST is attempted. it is not spending any > time debating or processing or round-tripping. > > i can also verify the most recently added patch > did not fix the problem (not that it was supposed to): > https://hg.nginx.org/nginx-quic/rev/dbe33ef9cd9a > >> Date: Wed, 27 Jan 2021 20:25:05 +0300 >> From: Roman Arutyunyan >> To: nginx at nginx.org >> Subject: Re: HTTP/3 with Firefox and forms >> Message-ID: <64E274C9-A403-4F5C-AA02-30FBA2FEA49D at nginx.com > >> Content-Type: text/plain; charset="utf-8" >> >> Hi Ryan, >> >> Thanks for reporting this. >> >> Do you observe any errors in nginx error log? >> >> ? >> Roman Arutyunyan >> arut at nginx.com >> >> > On 27 Jan 2021, at 19:55, Ryan Gould wrote: >> > >> > hello all you amazing developers, >> > >> > i check https://hg.nginx.org/nginx-quic every day for new updates being >> > posted. on monday (Jan 25 2021) i noticed these five new updates: >> > >> > https://hg.nginx.org/nginx-quic/rev/6422455c92b4 >> > https://hg.nginx.org/nginx-quic/rev/916a2e1d6617 >> > https://hg.nginx.org/nginx-quic/rev/cb8185bd0507 >> > https://hg.nginx.org/nginx-quic/rev/58acdba9b3b2 >> > https://hg.nginx.org/nginx-quic/rev/e1eb7f4ca9f1 >> > >> > the latest build seems to have a problem with submitting forms and the >> > latest production and developer versions of Firefox. i am not having >> > the same problem with Edge or Chrome. >> > >> > my backend is PHP 7.3.26 on a Debian 10.7. it doesnt actually do any >> > POSTing in Firefox. php is not getting any data at all. these forms >> > are running code thats been untouched for five years or so. >> > >> > reverting to my Jan 11 2021 build of nginx resolves the problem for >> > forms and Firefox. 
>> > >> > this is probably a problem with Mozilla, but if you have any fixes... >> > >> > thank you for your incredible work. >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: