From nginx-forum at forum.nginx.org Sun Mar 1 00:00:59 2020 From: nginx-forum at forum.nginx.org (bsmither) Date: Sat, 29 Feb 2020 19:00:59 -0500 Subject: No Release File In-Reply-To: References: Message-ID: In the file: /etc/apt/sources.list.d/nginx.list I changed: deb https://nginx.org/packages/mainline/ubuntu tricia nginx to: deb https://nginx.org/packages/mainline/ubuntu bionic nginx Then the "apt-get update" worked. However, I used the Mint Software Manager to install Nginx, but the progress indicator stalled at about 80%. I closed the Manager acknowledging the message that a task was still running. Of course, that probably caused another problem: ----- E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable) E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it? ----- I work on that later. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287202,287204#msg-287204 From r at roze.lv Sun Mar 1 00:08:29 2020 From: r at roze.lv (Reinis Rozitis) Date: Sun, 1 Mar 2020 02:08:29 +0200 Subject: No Release File In-Reply-To: References: Message-ID: <000001d5ef5d$89b8e310$9d2aa930$@roze.lv> > E: The repository 'http://nginx.org/packages/ubuntu tricia Release' does not > have a Release file. > N: Updating from such a repository can't be done securely, and is therefore > disabled by default. > ----- > Are there any other instructions available to get Nginx 1.17 downloaded? You should probably use the bionic repo since there is no direct repo for the Linux Mint (tricia). rr From nginx-forum at forum.nginx.org Sun Mar 1 00:29:40 2020 From: nginx-forum at forum.nginx.org (bsmither) Date: Sat, 29 Feb 2020 19:29:40 -0500 Subject: No Release File In-Reply-To: <000001d5ef5d$89b8e310$9d2aa930$@roze.lv> References: <000001d5ef5d$89b8e310$9d2aa930$@roze.lv> Message-ID: Where is the Bionic repo? If you are referring to the default repository for all things Linux Mint, there was only Nginx 1.14. Anyway, as is always the case, something went wrong. I don't have a 'modules-available' directory, and all the items in 'modules-enabled' end in '.removed' with link (broken). Curses. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287202,287206#msg-287206 From r at roze.lv Sun Mar 1 01:08:38 2020 From: r at roze.lv (Reinis Rozitis) Date: Sun, 1 Mar 2020 03:08:38 +0200 Subject: No Release File In-Reply-To: References: <000001d5ef5d$89b8e310$9d2aa930$@roze.lv> Message-ID: <000101d5ef65$f0dc4340$d294c9c0$@roze.lv> > Where is the Bionic repo? > > If you are referring to the default repository for all things Linux Mint, there > was only Nginx 1.14. I mean the nginx bionic repo (here you can see the available Ubuntu versions http://nginx.org/packages/mainline/ubuntu/dists/ ) But it seems you have already used that. rr From nginx-forum at forum.nginx.org Sun Mar 1 05:06:31 2020 From: nginx-forum at forum.nginx.org (bsmither) Date: Sun, 01 Mar 2020 00:06:31 -0500 Subject: Force Reinstall Message-ID: Does the package at: http://nginx.org/packages/mainline/ubuntu/dists/bionic/ contain the instruction to create all the needed config files and folders? Having managed to go from 1.14 to 1.17, I find anomalies: at least one missing folder and its files, undeleted files with the suffix '.deleted', etc. If needed, catch up here: https://forums.linuxmint.com/viewtopic.php?f=47&t=313323 The --reinstall did not actually do enough. 
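For anyone following this later: a minimal, untested sketch of the repository entry plus a reinstall that asks dpkg to recreate missing config files. The codename and package name come from the thread; the --force-confmiss option is a generic dpkg mechanism rather than anything specific to the nginx.org packages, so treat this as a suggestion, not a documented procedure.

    # /etc/apt/sources.list.d/nginx.list -- Mint 19.x (tricia) is based on Ubuntu bionic
    deb https://nginx.org/packages/mainline/ubuntu bionic nginx

    # refresh the index, then reinstall and let dpkg restore any missing conffiles
    sudo apt-get update
    sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" nginx

Note also that the modules-available/modules-enabled layout mentioned above comes from Ubuntu/Mint's own nginx packaging, not from the nginx.org package, so a reinstall from nginx.org would not be expected to recreate those directories.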
Note: this topic is a continuation of: https://forum.nginx.org/read.php?2,287202 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287208,287208#msg-287208 From stefano.serano at ngway.it Sun Mar 1 19:40:02 2020 From: stefano.serano at ngway.it (Stefano Serano) Date: Sun, 1 Mar 2020 19:40:02 +0000 Subject: R: problem with proxy pass In-Reply-To: <000901d5ed95$c669a520$533cef60$@roze.lv> References: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it> <000901d5ed95$c669a520$533cef60$@roze.lv> Message-ID: <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it> Hi. You're right, i think i've to better explain. Here my situation: 1. I've two HIDS nodes that use port 1515 TCP for agents authentication, and 1514 UDP to receive logs from agents. If I point agents from outside and inside my network directly to the nodes, no problem arises. 2. I've moved these nodes to another network: 10.0.0.0 and added a new centos 7 machine that I want to use as proxy to forward ports 1515 and 1514UDP t my two HIDS nodes. This machine is configured with two ethernet adapers: one configured to communicate with the nodes on network 10.0.0.0, and another configured to communicate with the agents outside my network(publicated throughout my firewall) ad with the agents in my local network 192.x.x.x. Now, on my proxy machine I've: disabled Firewalld, Disable Selinux and installed nginx with this configuration: -------------------------------------- user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 10000; } stream { upstream master { server 10.0.0.7:1515; } upstream mycluster { hash $remote_addr consistent; server 10.0.0.7:1514; server 10.0.0.6:1514; } server { listen 1515; proxy_pass master; } server { listen 1514 udp; proxy_pass mycluster; } #error_log /var/log/nginx/error.log debug; } -------------------------------------- All the agents from outside my network have no problem, the can authenticate themselves to my HIDS Nodes over port 1515 TCP and send logs over port 1514 UDP. The agents in my local network(192.x.x.x)) instead, are able to authenticate over port 1515 TCP, but not to send logs over 1514 UDP. The agents log said that they are unable to connect over that port. If I temporally change the port 1514 UDP to 1514 TCP in my HIDS nodes, and make the same change on Nginx configuration, they are able to send logs like nothing happen, but I can't use this solution because i would need to change the port in all agents configuration manually, so I need to make the port 1514 udp work. Hope i've make the situation more clear, have a nice day. Stefano Serano Tel: 0331-726090 Fax: 0331-728229 e-mail: stefano.serano at ngway.it http://www.ngway.it -----Messaggio originale----- Da: nginx Per conto di Reinis Rozitis Inviato: gioved? 27 febbraio 2020 18:46 A: nginx at nginx.org Oggetto: RE: problem with proxy pass > From the hosts outside i've no connection problem, but from inside they are unable to connect to the port. No firewall are enable on Nginx LB( Centos 7 machine by the way) and Selinux is disabled. By "from inside" you mean other hosts in LAN or the same centos machine? If first then it's most likely firewall (limited outbond udp on the clients) or routing related. 
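A side note on the stream block quoted above: for UDP upstreams the stream proxy module also has the proxy_responses and proxy_timeout directives, which control how many reply datagrams nginx waits for and how long an idle UDP session is kept. They are not a known fix for this particular setup, but they are worth checking whenever agents expect answers back on the same port; the values below are only illustrative.

    server {
        listen 1514 udp;
        proxy_pass mycluster;
        # assume the manager sends at most one reply datagram per agent datagram
        proxy_responses 1;
        # drop idle UDP sessions after 30s instead of the default 10m
        proxy_timeout 30s;
    }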
Without knowing the details/network topology there is not much to suggest - try to test if the clients can connect to any other (open) port, icmp ping the centos machine or inspect the network activity with tcpdump. rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Ai sensi dell'art. 13 del Regolamento UE 2016/679 (GDPR), si informa che gli eventuali dati personali indicati in questo documento sono trattati dallo Scrivente secondo i principi di correttezza liceit? e trasparenza. L?informativa completa ? disponibile a richiesta presso i ns uffici o all?indirizzo email: info at ngway.it. Si informa inoltre che le informazioni contenute nella presente comunicazione e i relativi allegati possono essere riservate e sono, comunque, destinate esclusivamente alle persone o alla Societ? destinatari. La diffusione, distribuzione e/o copiatura del documento trasmesso da parte di qualsiasi soggetto diverso dal destinatario ? proibita, ai sensi dell?art. 616 c.p. Se avete ricevuto questo messaggio per errore, vi preghiamo di distruggerlo. From r at roze.lv Sun Mar 1 23:09:53 2020 From: r at roze.lv (Reinis Rozitis) Date: Mon, 2 Mar 2020 01:09:53 +0200 Subject: problem with proxy pass In-Reply-To: <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it> References: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it> <000901d5ed95$c669a520$533cef60$@roze.lv> <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it> Message-ID: <000201d5f01e$8447a2b0$8cd6e810$@roze.lv> > The agents in my local network(192.x.x.x)) instead, are able to authenticate > over port 1515 TCP, but not to send logs over 1514 UDP. The agents log said > that they are unable to connect over that port. > > If I temporally change the port 1514 UDP to 1514 TCP in my HIDS nodes, and > make the same change on Nginx configuration, they are able to send logs > like nothing happen This gives more things to test: (I would also change the error_log level to notice and see if there is anything logged) 1. Can you test from any client in the lan (192.x.x.x) that you are able to connect to the nginx udp port Iand send some message /csee if it lands in the backends), for example with netcat: nc -u your.centos.ip 1514 2. See if you are able to actually connect from the centos box to the backends: nc -u 10.0.0.7 1514 With two network interfaces there might be also routing issues and depending on the configuration you could need to specify the outgoing 10.x interface with proxy_bind (https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html?#proxy_bind) Something like: server { listen 1514 udp; proxy_pass mycluster; proxy_bind 10.x.x.x; // the ip of the centos machine } rr From stefano.serano at ngway.it Mon Mar 2 08:25:10 2020 From: stefano.serano at ngway.it (Stefano Serano) Date: Mon, 2 Mar 2020 08:25:10 +0000 Subject: problem with proxy pass In-Reply-To: <000201d5f01e$8447a2b0$8cd6e810$@roze.lv> References: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it> <000901d5ed95$c669a520$533cef60$@roze.lv> <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it>, <000201d5f01e$8447a2b0$8cd6e810$@roze.lv> Message-ID: <8817abce27b74acf86aca32ee93d0d8b@ngway.it> Hi. 
i've changed the configuration: stream { upstream master { server 10.0.0.7:1515; } upstream mycluster { hash $remote_addr consistent; server 10.0.0.7:1514; server 10.0.0.6:1514; } server { listen 1515; proxy_pass master; } server { listen 1514 udp; proxy_pass mycluster; proxy_bind 10.0.0.8; } } Execute this command on agent: nc -vnzu -w 1 192.168.1.5 1514 to check if is abel to connect to my Nxinx LB port, the result is positive: Ncat: Version 7.50 ( https://nmap.org/ncat ) Ncat: Connected to 192.168.1.5:1514. Ncat: UDP packet sent successfully Ncat: 1 bytes sent, 0 bytes received in 2.01 seconds. Same from LB to my HIDS node: nc -vnzu -w 5 10.0.0.6 1514 Ncat: Version 7.50 ( https://nmap.org/ncat ) Ncat: Connected to 10.0.0.6:1514. Ncat: UDP packet sent successfully but my agents are still unable to send logs over port 1514 UDP ________________________________ Da: nginx per conto di Reinis Rozitis Inviato: luned? 2 marzo 2020 00:09:53 A: nginx at nginx.org Oggetto: RE: problem with proxy pass > The agents in my local network(192.x.x.x)) instead, are able to authenticate > over port 1515 TCP, but not to send logs over 1514 UDP. The agents log said > that they are unable to connect over that port. > > If I temporally change the port 1514 UDP to 1514 TCP in my HIDS nodes, and > make the same change on Nginx configuration, they are able to send logs > like nothing happen This gives more things to test: (I would also change the error_log level to notice and see if there is anything logged) 1. Can you test from any client in the lan (192.x.x.x) that you are able to connect to the nginx udp port Iand send some message /csee if it lands in the backends), for example with netcat: nc -u your.centos.ip 1514 2. See if you are able to actually connect from the centos box to the backends: nc -u 10.0.0.7 1514 With two network interfaces there might be also routing issues and depending on the configuration you could need to specify the outgoing 10.x interface with proxy_bind (https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html?#proxy_bind) Something like: server { listen 1514 udp; proxy_pass mycluster; proxy_bind 10.x.x.x; // the ip of the centos machine } rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Ai sensi dell'art. 13 del Regolamento UE 2016/679 (GDPR), si informa che gli eventuali dati personali indicati in questo documento sono trattati dallo Scrivente secondo i principi di correttezza liceit? e trasparenza. L'informativa completa ? disponibile a richiesta presso i ns uffici o all'indirizzo email: info at ngway.it. Si informa inoltre che le informazioni contenute nella presente comunicazione e i relativi allegati possono essere riservate e sono, comunque, destinate esclusivamente alle persone o alla Societ? destinatari. La diffusione, distribuzione e/o copiatura del documento trasmesso da parte di qualsiasi soggetto diverso dal destinatario ? proibita, ai sensi dell'art. 616 c.p. Se avete ricevuto questo messaggio per errore, vi preghiamo di distruggerlo. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phillip.odam at rosettahealth.com Mon Mar 2 15:32:39 2020 From: phillip.odam at rosettahealth.com (Phillip Odam) Date: Mon, 2 Mar 2020 10:32:39 -0500 Subject: Using NGINX to reverse proxy hundreds of ports Message-ID: <560bfe2c-7634-77b8-6d8a-f8c8c7f38e6c@rosettahealth.com> Hi So far from my reading and testing of NGINX I can't find a compact way of configuring NGINX as I've done here with HAProxy config. Disregard the bind on a port range, I get that the NGINX listen statement works on an individual port basis, so the equivalent of what's below in NGINX would at the very least require 300 listen statements. So far from what I've been able to tell, and what I'm wanting to confirm, there's also no way to avoid having 300 upstreams? One for each port between 2000 and 2299 so that 10.1.0.1:2000 is proxied to 127.0.0.1:2000 and 10.1.0.1:2001 is proxied to 127.0.0.1:2001 for example. Is this correct, is there no way to avoid having 300 upstreams if I'm needing proxy 10.1.0.1:X to 127.0.0.1:X. Based on a quick back of the napkin calculation, I'd be looking at around 2,000 to 2,500 lines of configuration in NGINX if a new upstream is required for each and every port I'm needing to handle. If this is correct does anyone know what the impact on memory use would be having so much configuration for NGINX? FYI I've tried referencing my own declared variables from within the upstream as well as referencing $server_port but of course these don't appear to be in scope. I think in this particular case HAProxy is a better fit but I'm interested in seeing what can be done with NGINX as it's typically my go to solution. frontend inbound bind 10.1.0.1:2000-2299 mode tcp acl use_local src 127.0.0.0/8 10.1.0.0/24 use_backend local-app if use_local default_backend balanced-app backend balanced-app balance roundrobin mode tcp option tcp-check server self 127.0.0.1 check port 2000 server srv2 10.1.0.2 check port 2000 backend local-app mode tcp option tcp-check server self 127.0.0.1 check port 2000 Thanks Phillip -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Mar 2 18:07:39 2020 From: r at roze.lv (Reinis Rozitis) Date: Mon, 2 Mar 2020 20:07:39 +0200 Subject: problem with proxy pass In-Reply-To: <8817abce27b74acf86aca32ee93d0d8b@ngway.it> References: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it> <000901d5ed95$c669a520$533cef60$@roze.lv> <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it>, <000201d5f01e$8447a2b0$8cd6e810$@roze.lv> <8817abce27b74acf86aca32ee93d0d8b@ngway.it> Message-ID: <000001d5f0bd$75f87500$61e95f00$@roze.lv> > but my agents are still unable to send logs over port 1514 UDP Well at least the nginx setup seems in working order. Now do you see any more detailed messages on the agents (like extended ip/port info / connection error)? Also you could inspect the network traffic to see if the centos box receives connections. 
For example with: tcpdump -n -v -i eth0 udp port 1514 (replace the eth0 with whatever your interface name for the 192.x network is on the centos box) rr From kaushalshriyan at gmail.com Mon Mar 2 18:14:07 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Mon, 2 Mar 2020 23:44:07 +0530 Subject: Suggest strong cipher suites Message-ID: Hi, We are using Nginx Web server on CentOS Linux release 7.7.1908 (Core) *OpenSSL Version* #openssl version OpenSSL 1.0.2k-fips 26 Jan 2017 # *Nginx Version* #rpm -qa | grep nginx nginx-1.16.1-1.el7.x86_64 # Can someone please suggest me to use strong cipher suites for SSL/TLS encryption. Thanks in advance and I look forward to hearing from you. Best Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Mar 2 19:02:16 2020 From: r at roze.lv (Reinis Rozitis) Date: Mon, 2 Mar 2020 21:02:16 +0200 Subject: Using NGINX to reverse proxy hundreds of ports In-Reply-To: <560bfe2c-7634-77b8-6d8a-f8c8c7f38e6c@rosettahealth.com> References: <560bfe2c-7634-77b8-6d8a-f8c8c7f38e6c@rosettahealth.com> Message-ID: <000901d5f0c5$1725e280$4571a780$@roze.lv> > I get that the NGINX listen statement works on an individual port basis, so the equivalent of what's below in NGINX would at the very least require 300 listen statements. You can listen on a port range (see below). > FYI I've tried referencing my own declared variables from within the upstream as well as referencing $server_port but of course these don't appear to be in scope. Depends if you want nginx to perform any active healthchecks and what kind of backends those are. If it is http and you just need to redirect traffic instead of defining upstreams a configuration with (way) less lines could be: server { listen 10.1.0.1:2000-2299; location / { proxy_pass http://127.0.0.1:$server_port; } } A fallback to another proxy could be configured via error_page (502). Something like: error_page 502 @fallback; location @fallback { proxy_pass http://10.1.0.2:$server_port; } But I don't think there is a way (at least in the base vanilla) nginx to configure upstreams in a dynamic way with port ranges as in upstream {} doesn't support variables for the server definitions. rr From phillip.odam at nitorgroup.com Mon Mar 2 21:20:03 2020 From: phillip.odam at nitorgroup.com (Phillip Odam) Date: Mon, 2 Mar 2020 16:20:03 -0500 Subject: Using NGINX to reverse proxy hundreds of ports In-Reply-To: <000901d5f0c5$1725e280$4571a780$@roze.lv> References: <560bfe2c-7634-77b8-6d8a-f8c8c7f38e6c@rosettahealth.com> <000901d5f0c5$1725e280$4571a780$@roze.lv> Message-ID: Nice, hadn?t noticed the port range capability. The proxy_pass and just directly referencing the ip would make it nice and concise. Unfortunately I am wanting to balance across multiple backends, but this is all good to know come different requirements. Cheers On Mon, Mar 2, 2020 at 2:02 PM Reinis Rozitis wrote: > > I get that the NGINX listen statement works on an individual port basis, > so the equivalent of what's below in NGINX would at the very least require > 300 listen statements. > > You can listen on a port range (see below). > > > > > FYI I've tried referencing my own declared variables from within the > upstream as well as referencing $server_port but of course these don't > appear to be in scope. > > Depends if you want nginx to perform any active healthchecks and what kind > of backends those are. 
> > If it is http and you just need to redirect traffic instead of defining > upstreams a configuration with (way) less lines could be: > > server { > listen 10.1.0.1:2000-2299; > location / { proxy_pass http://127.0.0.1:$server_port; } > } > > > A fallback to another proxy could be configured via error_page (502). > Something like: > > error_page 502 @fallback; > location @fallback { proxy_pass http://10.1.0.2:$server_port; } > > But I don't think there is a way (at least in the base vanilla) nginx to > configure upstreams in a dynamic way with port ranges as in upstream {} > doesn't support variables for the server definitions. > > rr > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 3 09:06:06 2020 From: nginx-forum at forum.nginx.org (Sevenshy) Date: Tue, 03 Mar 2020 04:06:06 -0500 Subject: [alert] epoll_ctl(1, 575) failed (17: File exists) In-Reply-To: References: Message-ID: <64356817ba865173c2fdeca304520d58.NginxMailingListEnglish@forum.nginx.org> Have U deal with it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276815,287235#msg-287235 From nginx-forum at forum.nginx.org Tue Mar 3 12:53:41 2020 From: nginx-forum at forum.nginx.org (atomino) Date: Tue, 03 Mar 2020 07:53:41 -0500 Subject: Webdav error accessing with Finder Message-ID: I am trying to acces the folder with Osx using Finder , after authentication with the right password it will respond with a message box : original text in Italian : Si ? verificato un errore durante la connessione al server "xxxxxxx.it". Contatta l'amministratore di sistema per maggiori informazioni. ...translation An error occurred during the connection to the server "xxxxxxx.it" Contact the system administrator for more info. Below the section of my configuration file. Can you help me to fix the issue. Thanks in advance. location /dav/ { alias /var/www/webdav/pub/; auth_basic "Restricted Content"; # The file containing authorized users auth_basic_user_file /etc/nginx/.htpasswd; # dav allowed method dav_methods PUT DELETE MKCOL COPY MOVE; # Allow current scope perform specified DAV method #dav_ext_methods PROPFIND OPTIONS; # In this folder, newly created folder or file is to have specified permission. # If none is given, default is user:rw. If all or group permission is specified, user could be skipped dav_access user:rw group:rw all:r; # Temporary folder client_body_temp_path /var/dav; # MAX size of uploaded file, 0 mean unlimited client_max_body_size 0; # Allow autocreate folder here if necessary create_full_put_path on; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287236,287236#msg-287236 From themadbeaker at gmail.com Tue Mar 3 14:23:58 2020 From: themadbeaker at gmail.com (J.R.) Date: Tue, 3 Mar 2020 08:23:58 -0600 Subject: Suggest strong cipher suites Message-ID: > Can someone please suggest me to use strong cipher suites for SSL/TLS > encryption. Thanks in advance and I look forward to hearing from you. Select your products / versions and what settings you want... It should give you a good jumpstart on configuration settings: https://ssl-config.mozilla.org/ Can't get much easier than that... 
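For the archives, this is roughly what the generator's "intermediate" profile produces for nginx with OpenSSL 1.0.2 (which cannot do TLS 1.3); the exact cipher string should be taken from ssl-config.mozilla.org itself rather than copied from this sketch:

    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers off;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

The ChaCha20-Poly1305 suites only become available once the server is linked against OpenSSL 1.1.0 or newer; with 1.0.2 they are simply ignored.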
From mdounin at mdounin.ru Tue Mar 3 15:15:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Mar 2020 18:15:59 +0300 Subject: nginx-1.17.9 Message-ID: <20200303151559.GP12894@mdounin.ru> Changes with nginx 1.17.9 03 Mar 2020 *) Change: now nginx does not allow several "Host" request header lines. *) Bugfix: nginx ignored additional "Transfer-Encoding" request header lines. *) Bugfix: socket leak when using HTTP/2. *) Bugfix: a segmentation fault might occur in a worker process if OCSP stapling was used. *) Bugfix: in the ngx_http_mp4_module. *) Bugfix: nginx used status code 494 instead of 400 if errors with code 494 were redirected with the "error_page" directive. *) Bugfix: socket leak when using subrequests in the njs module and the "aio" directive. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Mar 3 16:16:06 2020 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 3 Mar 2020 11:16:06 -0500 Subject: [nginx-announce] nginx-1.17.9 In-Reply-To: <20200303151613.GQ12894@mdounin.ru> References: <20200303151613.GQ12894@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.17.9 for Windows https:// kevinworthington.com/nginxwin1179 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, Mar 3, 2020 at 10:16 AM Maxim Dounin wrote: > Changes with nginx 1.17.9 03 Mar > 2020 > > *) Change: now nginx does not allow several "Host" request header > lines. > > *) Bugfix: nginx ignored additional "Transfer-Encoding" request header > lines. > > *) Bugfix: socket leak when using HTTP/2. > > *) Bugfix: a segmentation fault might occur in a worker process if OCSP > stapling was used. > > *) Bugfix: in the ngx_http_mp4_module. > > *) Bugfix: nginx used status code 494 instead of 400 if errors with > code > 494 were redirected with the "error_page" directive. > > *) Bugfix: socket leak when using subrequests in the njs module and the > "aio" directive. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Mar 3 17:40:08 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 3 Mar 2020 20:40:08 +0300 Subject: njs-0.3.9 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of ECMAScript specifications. Notable new features: - Promises API for "fs" module. : var fs = require('fs').promises; : fs.readFile('/file/path').then(data => r.return(200, data)); - detached r.subrequest(): Running a subrequest in the log phase : nginx.conf: : ... : js_set $js_log js_log; : ... 
: log_format subrequest_log "...$js_log"; : access_log /log/path.log subrequest_log; : : nginx.js: : function js_log(r) { : r.subrequest('/_log', {detached:true}); : return ''; : } You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Presentation: https://youtu.be/Jc_L6UffFOs - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.3.9 03 Mar 2020 nginx modules: *) Feature: added detached mode for r.subrequest(). Responses to detached subrequests are ignored. Unlike ordinary subrequests, a detached subrequest can be created inside a variable handler. Core: *) Feature: added promises API for "fs" module. Thanks to Artem S. Povalyukhin. *) Feature: extended "fs" module. Added access(), symlink(), unlink(), realpath() and friends. Thanks to Artem S. Povalyukhin. *) Improvement: introduced memory-efficient ordinary arrays. *) Improvement: lexer refactoring. *) Bugfix: fixed matching of native functions in backtraces. *) Bugfix: fixed callback invocations in "fs" module. Thanks to Artem S. Povalyukhin. *) Bugfix: fixed Object.getOwnPropertySymbols(). *) Bugfix: fixed heap-buffer-overflow in njs_json_append_string(). *) Bugfix: fixed encodeURI() and decodeURI() according to the specification. *) Bugfix: fixed Number.prototype.toPrecision(). *) Bugfix: fixed handling of space argument in JSON.stringify(). *) Bugfix: fixed JSON.stringify() with Number() and String() objects. *) Bugfix: fixed Unicode Escaping in JSON.stringify() according to specification. *) Bugfix: fixed non-native module importing. Thanks to ??? (Hong Zhi Dao). *) Bugfix: fixed njs.dump() with the Date() instance in a container. From nginx-forum at forum.nginx.org Wed Mar 4 07:41:15 2020 From: nginx-forum at forum.nginx.org (galew) Date: Wed, 04 Mar 2020 02:41:15 -0500 Subject: Elasticsearch Native Binary Protocol through NGiNX Stream Message-ID: <728fb1871fb7ca2bb5e7f4368b2b5455.NginxMailingListEnglish@forum.nginx.org> Hi, I have tried to ask from Elasticsearch forums and googled everywhere, but with no help so I registered here. I am using NGiNX to cover my Elasticsearch clusters and all the clients connect through them. Everything else works fine to both the http and non-http traffic. The problem is the Liferay client using Elasticsearch Native Binary Protocol. Without NGiNX everything works right so NGiNX somehow does not understand this. Using Elasticsearch 6.8.6 Nginx 1.15.9 Red Hat 7.7 nginx.conf ---clip---- stream { include /etc/nginx/conf.d/elasticsearch_tcp.conf; } elasticsearch_tcp_conf server { proxy_buffer_size 16k; listen 10.100.5.10:8090; proxy_pass 10.20.1.10:9300; Any ideas for what I could try please? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287254,287254#msg-287254 From stefano.serano at ngway.it Wed Mar 4 08:15:43 2020 From: stefano.serano at ngway.it (Stefano Serano) Date: Wed, 4 Mar 2020 08:15:43 +0000 Subject: problem with proxy pass In-Reply-To: <000001d5f0bd$75f87500$61e95f00$@roze.lv> References: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it> <000901d5ed95$c669a520$533cef60$@roze.lv> <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it>, <000201d5f01e$8447a2b0$8cd6e810$@roze.lv> <8817abce27b74acf86aca32ee93d0d8b@ngway.it>, <000001d5f0bd$75f87500$61e95f00$@roze.lv> Message-ID: Hi. 
Here the result from tcpdump: from inside my network 192.168.1.10.60221 > 192.168.1.3.fujitsu-dtcns: UDP, length 107 192.168.1.3.fujitsu-dtcns > 192.168.1.10.60221: UDP, length 85 >From all agenst fro outside my network: any.public.ip.address.56916 > 151.1.210.45.fujitsu-dtcns: UDP, length 298 There the info form client inside my network: 2020/03/04 09:12:52 ossec-agentd: WARNING: (4101): Waiting for server reply (not started). Tried: '192.168.1.3. 2020/03/04 09:13:03 ossec-agentd: INFO: Trying to connect to server (192.168.1.3:1514/udp). Hope this help ________________________________ Da: nginx per conto di Reinis Rozitis Inviato: luned? 2 marzo 2020 19:07:39 A: nginx at nginx.org Oggetto: RE: problem with proxy pass > but my agents are still unable to send logs over port 1514 UDP Well at least the nginx setup seems in working order. Now do you see any more detailed messages on the agents (like extended ip/port info / connection error)? Also you could inspect the network traffic to see if the centos box receives connections. For example with: tcpdump -n -v -i eth0 udp port 1514 (replace the eth0 with whatever your interface name for the 192.x network is on the centos box) rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx Ai sensi dell'art. 13 del Regolamento UE 2016/679 (GDPR), si informa che gli eventuali dati personali indicati in questo documento sono trattati dallo Scrivente secondo i principi di correttezza liceit? e trasparenza. L'informativa completa ? disponibile a richiesta presso i ns uffici o all'indirizzo email: info at ngway.it. Si informa inoltre che le informazioni contenute nella presente comunicazione e i relativi allegati possono essere riservate e sono, comunque, destinate esclusivamente alle persone o alla Societ? destinatari. La diffusione, distribuzione e/o copiatura del documento trasmesso da parte di qualsiasi soggetto diverso dal destinatario ? proibita, ai sensi dell'art. 616 c.p. Se avete ricevuto questo messaggio per errore, vi preghiamo di distruggerlo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed Mar 4 11:16:33 2020 From: r at roze.lv (Reinis Rozitis) Date: Wed, 4 Mar 2020 13:16:33 +0200 Subject: problem with proxy pass In-Reply-To: References: <67eae834efef46f9a31705b4a8c65edce1c25fc2b1494e6489f91b07bc8fe2a0@ngway.it> <000901d5ed95$c669a520$533cef60$@roze.lv> <3ac56e6b8c5444b7acf49c878ec6c35b@ngway.it>, <000201d5f01e$8447a2b0$8cd6e810$@roze.lv> <8817abce27b74acf86aca32ee93d0d8b@ngway.it>, <000001d5f0bd$75f87500$61e95f00$@roze.lv> Message-ID: <000001d5f216$5d0a3510$171e9f30$@roze.lv> > Hi. > Here the result from tcpdump: > from inside my network > 192.168.1.10.60221 > 192.168.1.3.fujitsu-dtcns: UDP, length 107 > 192.168.1.3.fujitsu-dtcns > 192.168.1.10.60221: UDP, length 85 > > From all agenst fro outside my network: > any.public.ip.address.56916 > 151.1.210.45.fujitsu-dtcns: UDP, length 298 > > > There the info form client inside my network: > 2020/03/04 09:12:52 ossec-agentd: WARNING: (4101): Waiting for server reply (not started). Tried: '192.168.1.3. > 2020/03/04 09:13:03 ossec-agentd: INFO: Trying to connect to server (192.168.1.3:1514/udp). Is there a third interface on the centos box with 151.1.210.45 ip? Is it being NATed to the 192.168.x address? >From the provided tcpdump it's a bit unclear which case does work and which doesn't? 
As with the 192.168.x the server seems to send the response while the "outside" doesn't or you just left it out? Anyways this doesn't seem to be nginx related and you might want to look up on the product support/docs: https://www.ossec.net/docs/faq/unexpected.html#agent-won-t-connect-to-the-manager-or-the-agent-always-shows-never-connected Or maybe this helps: https://unix.stackexchange.com/a/154928 rr From roland-brieden at web.de Wed Mar 4 19:54:49 2020 From: roland-brieden at web.de (roland-brieden at web.de) Date: Wed, 4 Mar 2020 20:54:49 +0100 Subject: nginx 1.17.9-1~bionic - 400 error Message-ID: An HTML attachment was scrubbed... URL: From thresh at nginx.com Thu Mar 5 07:20:50 2020 From: thresh at nginx.com (Konstantin Pavlov) Date: Thu, 5 Mar 2020 10:20:50 +0300 Subject: nginx 1.17.9-1~bionic - 400 error In-Reply-To: References: Message-ID: <9ac01f96-e899-3e06-3cd4-61846b150fd1@nginx.com> Hi Roland, 04.03.2020 22:54, roland-brieden at web.de wrote: > Hey Guys. > After todays update to nginx 1.17.9-1~bionic all my websites crashes > into 400 error. > Going back to nginx 1.17.8-1~bionic and all websites works ok. > What can i do? I would like to try and reproduce the issue you're having since I'm responsible for the nginx packages we build and ship. Would it be possible for you to have a dump of configuration (via nginx -T) sent here or privately? If it contains private information or cannot be stripped of sensitive things, can you provide something minimal that you can reproduce the problem with? Thank you, -- Konstantin Pavlov https://www.nginx.com/ From mdounin at mdounin.ru Thu Mar 5 13:21:11 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 5 Mar 2020 16:21:11 +0300 Subject: nginx 1.17.9-1~bionic - 400 error In-Reply-To: References: Message-ID: <20200305132111.GZ12894@mdounin.ru> Hello! On Wed, Mar 04, 2020 at 08:54:49PM +0100, roland-brieden at web.de wrote: > Hey Guys. > After todays update to nginx 1.17.9-1~bionic all my websites crashes > into 400 error. > Going back to nginx 1.17.8-1~bionic and all websites works ok. > What can i do? First of all, check your error logs. The reason for any 400 error returned by nginx is logged into error log at the "info" level. Note though that the "info" level is below the default logging level, so you may need to adjust your configuration to see relevant errors. See http://nginx.org/r/error_log for details. The most likely reason is that 400 errors you see are somehow related to duplicate Host or Transfer-Encoding, as outlined in CHANGES: *) Change: now nginx does not allow several "Host" request header lines. *) Bugfix: nginx ignored additional "Transfer-Encoding" request header lines. Probably in your configuration requests with duplicate headers are generated somehow, leading to 400. Just in case, relevant commits are: http://hg.nginx.org/nginx/rev/aca005d232ff http://hg.nginx.org/nginx/rev/fe5976aae0e3 http://hg.nginx.org/nginx/rev/4f18393a1d51 -- Maxim Dounin http://mdounin.ru/ From willy at gardiol.org Fri Mar 6 08:29:20 2020 From: willy at gardiol.org (Willy Gardiol) Date: Fri, 06 Mar 2020 09:29:20 +0100 Subject: Issue with NGINX and proxy: HTTP/1.1 505 HTTP Version not supported Message-ID: <1909b8d65924d88954f6711f52e16fff@gardiol.org> Hi all! first time poster here so please excuse my manners and correct me where i am wrong. I use NGINX 1.16.1 on Gentoo as reverse proxy server to expose some services to an external web site. I am exposing many things but one is giving me headaches. 
I am trying to expose the web UI of an HP network printer. I have this in my nginx.conf (trimming lines to the ones relevant): server { listen 80; access_log /var/log/nginx/localhost.access_log main; error_log /var/log/nginx/localhost.error_log debug; location /printer/ { proxy_pass http://192.168.1.XX/; } } I can access the printer from the proxy server, no problems, with: curl http://192.168.1.XX But if i try, on the same proxy server this: curl http://127.0.0.1/printer/ I get the error: 2020/03/06 08:42:02 [debug] 12870#0: *2 connect to 192.168.1.XX:80, fd:11 #3 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream connect: -2 2020/03/06 08:42:02 [debug] 12870#0: *2 http finalize request: -4, "/printer/?" a:1, c:2 [snip] 2020/03/06 08:42:02 [debug] 12870#0: *2 http run request: "/printer/?" 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream check client, write event:1, "/printer/" 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream request: "/printer/?" 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream send request handler 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream send request 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream send request body [snip] 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream request: "/printer/?" 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream process header [snip] 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy status 505 "505 HTTP Version not supported" 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "X-Content-Type-Options: no-sniff" 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "Cache-Control: no-cache, no-store, must-revalidate" 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "Server: gSOAP/2.7" 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "Content-Length: 0" 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "Connection: close" 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header done 2020/03/06 08:42:02 [debug] 12870#0: *2 HTTP/1.1 505 HTTP Version not supported^M So my guess NGINX is doing something which the web printer does not like... What could i try or do? thank you for your time and response. -- Willy Gardiol willy at gardiol.org www.gardiol.org www.trackaway.org -> Track YOUR way the way you want! From nginx-forum at forum.nginx.org Fri Mar 6 16:13:06 2020 From: nginx-forum at forum.nginx.org (v_label) Date: Fri, 06 Mar 2020 11:13:06 -0500 Subject: subrequests - huge body, mirror response from proxy_pass Message-ID: Hi all. I want to mirror a response from 'proxy_pass` backend. I need to send it not only back to a client but also to one more service. I was thinking to make it by subrequest in a filter module. There is a problem with sending subrequest body. 1) I can get a response body by 'ngx_http_read_client_request_body' or 2) I can wait for the last chain in a filter body and just save all body chunks. and then send it as a subrequest body. Both approaches work. But the problem is that some response bodies are big(several Gb), and keep whole body in RAM is not efficient. Is there a way to send subrequest body gradually by chunks in the process of receiving them from 'proxy_pass' backed ? I need smthg like ngx_http_proxy_module does when it is sending a response to an upstream but for subrequest. Any ideas? Thanks. 
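If copying the client request (rather than the upstream response) to a second service would satisfy the requirement, the stock ngx_http_mirror_module already does that without buffering anything in a custom filter; mirroring the response body itself, as asked here, is not covered by it. A rough sketch, where "backend" and "audit" are placeholder upstream names:

    location /api/ {
        mirror /mirror;            # fire-and-forget copy of each request
        mirror_request_body on;
        proxy_pass http://backend;
    }

    location = /mirror {
        internal;
        # the mirrored request goes to the second service; its response is discarded
        proxy_pass http://audit$request_uri;
    }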
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287272,287272#msg-287272 From kohenkatz at gmail.com Fri Mar 6 18:15:11 2020 From: kohenkatz at gmail.com (Moshe Katz) Date: Fri, 6 Mar 2020 13:15:11 -0500 Subject: Issue with NGINX and proxy: HTTP/1.1 505 HTTP Version not supported In-Reply-To: <1909b8d65924d88954f6711f52e16fff@gardiol.org> References: <1909b8d65924d88954f6711f52e16fff@gardiol.org> Message-ID: It looks like HP has decided to only support HTTP 1.0 or HTTP 1.1 on this printer, though it is not clear which one they are using since you didn't show the headers of your request direct to the printer. If you do `curl -v http://192.168.1.XX` it will show you the headers, which include the HTTP version that was used. It will be either 1.0 or 1.1. You can then use the `proxy_http_version` directive like this (substitute 1.1 for 1.0 if needed) to force the version used: server { listen 80; access_log /var/log/nginx/localhost.access_log main; error_log /var/log/nginx/localhost.error_log debug; location /printer/ { proxy_pass http://192.168.1.XX/; proxy_http_version 1.1; } } Moshe On Fri, Mar 6, 2020 at 3:29 AM Willy Gardiol wrote: > > Hi all! > first time poster here so please excuse my manners and correct me where > i am wrong. > > I use NGINX 1.16.1 on Gentoo as reverse proxy server to expose some > services to an external web site. > > I am exposing many things but one is giving me headaches. > > I am trying to expose the web UI of an HP network printer. > > I have this in my nginx.conf (trimming lines to the ones relevant): > server { > listen 80; > access_log /var/log/nginx/localhost.access_log main; > error_log /var/log/nginx/localhost.error_log debug; > > location /printer/ { > proxy_pass http://192.168.1.XX/; > } > } > > I can access the printer from the proxy server, no problems, with: > curl http://192.168.1.XX > > But if i try, on the same proxy server this: > curl http://127.0.0.1/printer/ > > I get the error: > 2020/03/06 08:42:02 [debug] 12870#0: *2 connect to 192.168.1.XX:80, > fd:11 #3 > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream connect: -2 > 2020/03/06 08:42:02 [debug] 12870#0: *2 http finalize request: -4, > "/printer/?" a:1, c:2 > [snip] > 2020/03/06 08:42:02 [debug] 12870#0: *2 http run request: "/printer/?" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream check client, > write event:1, "/printer/" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream request: > "/printer/?" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream send request > handler > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream send request > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream send request body > [snip] > 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream request: > "/printer/?" 
> 2020/03/06 08:42:02 [debug] 12870#0: *2 http upstream process header > [snip] > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy status 505 "505 HTTP > Version not supported" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: > "X-Content-Type-Options: no-sniff" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: > "Cache-Control: no-cache, no-store, must-revalidate" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "Server: > gSOAP/2.7" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: > "Content-Length: 0" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header: "Connection: > close" > 2020/03/06 08:42:02 [debug] 12870#0: *2 http proxy header done > 2020/03/06 08:42:02 [debug] 12870#0: *2 HTTP/1.1 505 HTTP Version not > supported^M > > So my guess NGINX is doing something which the web printer does not > like... > > What could i try or do? > > thank you for your time and response. > > > > > > -- > Willy Gardiol > willy at gardiol.org > www.gardiol.org > www.trackaway.org -> Track YOUR way the way you want! > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilio.fernandes70 at gmail.com Mon Mar 9 08:14:51 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Mon, 9 Mar 2020 10:14:51 +0200 Subject: aarch64 packages for other Linux flavors Message-ID: Hello Nginx team! At https://nginx.org/en/linux_packages.html I see that only Ubuntu LTS versions support and provide packages for aarch64/arm64 architecture. Is there a chance to provide such for the other OSes too ? I am particularly interested in the latest versions of CentOS & Alpine. I know that I could use the packages provided by the OS but they update the version much later than the official release. Gracias! Emilio -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.schabbach at fluent-software.de Mon Mar 9 11:56:33 2020 From: s.schabbach at fluent-software.de (s.schabbach at fluent-software.de) Date: Mon, 9 Mar 2020 12:56:33 +0100 Subject: Sub-Filter auf SignalR ASP.NET Core Hub Message-ID: <01d801d5f609$d465af30$7d310d90$@fluent-software.de> Hey, I need your help to solve an issue with the SignalR library. For those of you how do not know: It is a library to establish an bidirectional connection between server and browser in an webapp using javascript. On my server I have a multi-application environment, were the web application is resolved using the ?webapp? location and then using a reverse proxy to the application running on localhost. The same procedure I use for several services, where the location is ?xxx-service?. For fixing the URL?s in the server response, I use several sub_filters to align urls from ?/resource? to ?/webui/resource? to make links, formulars and so on going to the /webui/ location again. The same thing I do with the SignalR hub the client connects to. These hub is going to be changed from /hub to /webui/hub. But these results in the following error: [2020-03-09T10:38:26.393Z] Information: Normalizing '/webui/hub' to 'https://helitest.fluent-software.de:9003/webui/hub'. Utils.js:204:39 Firefox kann keine Verbindung zu dem Server unter wss://helitest.fluent-software.de:9003/webui/hub?id=5usy9hS6jVcGhNuk5ig5cA aufbauen. 
WebSocketTransport.js:88:32 [2020-03-09T10:38:27.071Z] Error: Failed to start the transport 'WebSockets': Error: There was an error with the transport. Utils.js:198:39 [2020-03-09T10:38:42.158Z] Information: SSE connected to https://helitest.fluent-software.de:9003/webui/hub?id=a6T9oscwfbe-l0CRzOvCtw Utils.js:204:39 [2020-03-09T10:38:42.175Z] Error: Connection disconnected with error 'Error: Server returned handshake error: Handshake was canceled.'. Utils.js:198:39 Error: Server returned handshake error: Handshake was canceled. In the Access log I see an 404 Error, but don?t know how to handle them: 192.168.7.242 - - [09/Mar/2020:11:38:42 +0100] "GET /webui/hub?id=a6T9oscwfbe-l0CRzOvCtw HTTP/1.1" 200 80 "https://helitest.fluent-software.de:9003/webui/Orders" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0" 192.168.7.242 - - [09/Mar/2020:11:38:42 +0100] "POST /webui/hub?id=a6T9oscwfbe-l0CRzOvCtw HTTP/1.1" 404 37 "https://helitest.fluent-software.de:9003/webui/Orders" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0" Any ideas how to solve that? Kind regards, Sebastian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.grigorov at gmail.com Tue Mar 10 09:23:25 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Tue, 10 Mar 2020 11:23:25 +0200 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: Message-ID: On Mon, Mar 9, 2020 at 10:15 AM Emilio Fernandes < emilio.fernandes70 at gmail.com> wrote: > Hello Nginx team! > > At https://nginx.org/en/linux_packages.html I see that only Ubuntu LTS > versions support and provide packages for aarch64/arm64 architecture. Is > there a chance to provide such for the other OSes too ? I am particularly > interested in the latest versions of CentOS & Alpine. I know that I could > use the packages provided by the OS but they update the version much later > than the official release. > +1 for this suggestion from me! > > Gracias! > Emilio > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From thresh at nginx.com Tue Mar 10 11:04:14 2020 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 10 Mar 2020 14:04:14 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: Message-ID: Hi Emilio, Martin, 10.03.2020 12:23, Martin Grigorov wrote: > > > On Mon, Mar 9, 2020 at 10:15 AM Emilio Fernandes > > wrote: > > Hello Nginx team! > > At https://nginx.org/en/linux_packages.html I see that only Ubuntu > LTS versions support and provide packages for aarch64/arm64 > architecture. Is there a chance to provide such for the other OSes > too ? I am particularly interested in the latest versions of CentOS > & Alpine. I know that I could use the packages provided by the OS > but they update the version much later than the official release. > > > +1 for this suggestion from me! Thanks for your interest in our packages! By CentOS, do you want/need packages built for 8? Asking because I believe 7 is not officially released for Aarch64 - it's rather a community build which doesnt fall into something we can support. 
Thanks again, -- Konstantin Pavlov https://www.nginx.com/ From emilio.fernandes70 at gmail.com Tue Mar 10 12:50:53 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Tue, 10 Mar 2020 14:50:53 +0200 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: Message-ID: Hi Konstantin, El mar., 10 mar. 2020 a las 13:04, Konstantin Pavlov () escribi?: > Hi Emilio, Martin, > > 10.03.2020 12:23, Martin Grigorov wrote: > > > > > > On Mon, Mar 9, 2020 at 10:15 AM Emilio Fernandes > > > > wrote: > > > > Hello Nginx team! > > > > At https://nginx.org/en/linux_packages.html I see that only Ubuntu > > LTS versions support and provide packages for aarch64/arm64 > > architecture. Is there a chance to provide such for the other OSes > > too ? I am particularly interested in the latest versions of CentOS > > & Alpine. I know that I could use the packages provided by the OS > > but they update the version much later than the official release. > > > > > > +1 for this suggestion from me! > > Thanks for your interest in our packages! > > By CentOS, do you want/need packages built for 8? Asking because I > believe 7 is not officially released for Aarch64 - it's rather a > community build which doesnt fall into something we can support. > Yes, CentOS 8 is fine for us! At http://isoredirect.centos.org/centos/7/isos/ there is "for CentOS 7 AltArch AArch64" [1]. Is this the one you prefer not to support ? 1. https://wiki.centos.org/SpecialInterestGroup/AltArch Thank you! Emilio > > Thanks again, > > -- > Konstantin Pavlov > https://www.nginx.com/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thresh at nginx.com Tue Mar 10 13:31:14 2020 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 10 Mar 2020 16:31:14 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: Message-ID: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Hello, 10.03.2020 15:50, Emilio Fernandes wrote: > Hi Konstantin, > Thanks for your interest in our packages! > > By CentOS, do you want/need packages built for 8?? Asking because I > believe 7 is not officially released for Aarch64 - it's rather a > community build which doesnt fall into something we can support. > > > Yes, CentOS 8 is fine for us! > At?http://isoredirect.centos.org/centos/7/isos/?there is?"for CentOS 7 > AltArch AArch64" [1]. Is this the one you prefer not to support ? > > 1.?https://wiki.centos.org/SpecialInterestGroup/AltArch Our policy is to provide packages for officially upstream-supported distributions. https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F states that they only support x86_64, and aarch64 is unofficial. -- Konstantin Pavlov https://www.nginx.com/ From emilio.fernandes70 at gmail.com Tue Mar 10 14:30:15 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Tue, 10 Mar 2020 16:30:15 +0200 Subject: aarch64 packages for other Linux flavors In-Reply-To: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hi Konstantin, El mar., 10 mar. 2020 a las 15:31, Konstantin Pavlov () escribi?: > Hello, > > 10.03.2020 15:50, Emilio Fernandes wrote: > > Hi Konstantin, > > Thanks for your interest in our packages! > > > > By CentOS, do you want/need packages built for 8? 
Asking because I > > believe 7 is not officially released for Aarch64 - it's rather a > > community build which doesnt fall into something we can support. > > > > > > Yes, CentOS 8 is fine for us! > > At http://isoredirect.centos.org/centos/7/isos/ there is "for CentOS 7 > > AltArch AArch64" [1]. Is this the one you prefer not to support ? > > > > 1. https://wiki.centos.org/SpecialInterestGroup/AltArch > > Our policy is to provide packages for officially upstream-supported > distributions. > > https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F > states that they only support x86_64, and aarch64 is unofficial. > I understand! CentOS 8 and Alpine 3 are just fine! Thank you! Emilio > -- > Konstantin Pavlov > https://www.nginx.com/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 11 22:36:10 2020 From: nginx-forum at forum.nginx.org (MAXMAXarena) Date: Wed, 11 Mar 2020 18:36:10 -0400 Subject: Prevent direct access to files but allow download from site Message-ID: Good evening, I would like to block direct access to files in a folder on my site, but allow downloading from the site. Specifically, I want to be able to download a file from the site's html tag: Download TXT But do not allow direct access and download, using the browser or other tools such as curl or wget. So block access to the link: https://domain.com/assets/file/test.xt is it possible to obtain such a result? Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287297#msg-287297 From abbot at monksofcool.net Wed Mar 11 22:57:03 2020 From: abbot at monksofcool.net (Ralph Seichter) Date: Wed, 11 Mar 2020 23:57:03 +0100 Subject: Prevent direct access to files but allow download from site In-Reply-To: References: Message-ID: <871rpywlds.fsf@wedjat.horus-it.com> * MAXMAXarena: > I want to be able to download a file from the site's html tag [...] > But do not allow direct access and download, using the browser or > other tools such as curl or wget. Public access and restricted access are mutually exclusive. It also makes nearly no difference what utility is used to access a URL pointing to the text file you gave as an example, because they'll all send a HTTP GET request, which is what the web server expects. Attempting to limit access based on a User-Agent header or similar, to identify the client type, is easily circumvented. -Ralph From francis at daoine.org Wed Mar 11 23:49:29 2020 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Mar 2020 23:49:29 +0000 Subject: Webdav error accessing with Finder In-Reply-To: References: Message-ID: <20200311234929.GD26683@daoine.org> On Tue, Mar 03, 2020 at 07:53:41AM -0500, atomino wrote: Hi there, > I am trying to acces the folder with Osx using Finder , > after authentication with the right password it will respond > with a message box : > ...translation > An error occurred during the connection to the server "xxxxxxx.it" > Contact the system administrator for more info. What do the nginx access log and the nginx error log show when this happens? Perhaps that will help show where the first problem is. (Perhaps the dav_ext_methods directive should be uncommented?) 
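For what it's worth, Finder (like most WebDAV clients) starts with OPTIONS and PROPFIND requests, which the core dav module does not implement, so uncommenting that line is a good first step, assuming nginx was built with the third-party ngx_http_dav_ext_module. Roughly, inside the existing "location /dav/" block:

    dav_methods PUT DELETE MKCOL COPY MOVE;
    # PROPFIND/OPTIONS come from ngx_http_dav_ext_module, not from core nginx
    dav_ext_methods PROPFIND OPTIONS;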
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Mar 12 00:00:05 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Mar 2020 00:00:05 +0000 Subject: Sub-Filter auf SignalR ASP.NET Core Hub In-Reply-To: <01d801d5f609$d465af30$7d310d90$@fluent-software.de> References: <01d801d5f609$d465af30$7d310d90$@fluent-software.de> Message-ID: <20200312000005.GE26683@daoine.org> On Mon, Mar 09, 2020 at 12:56:33PM +0100, s.schabbach at fluent-software.de wrote: Hi there, > In the Access log I see an 404 Error, but don?t know how to handle them: > > 192.168.7.242 - - [09/Mar/2020:11:38:42 +0100] "GET /webui/hub?id=a6T9oscwfbe-l0CRzOvCtw HTTP/1.1" 200 80 "https://helitest.fluent-software.de:9003/webui/Orders" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0" > > 192.168.7.242 - - [09/Mar/2020:11:38:42 +0100] "POST /webui/hub?id=a6T9oscwfbe-l0CRzOvCtw HTTP/1.1" 404 37 "https://helitest.fluent-software.de:9003/webui/Orders" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0" > > Any ideas how to solve that? What is your nginx config that handles the requests for /webui/hub ? If it does proxy_pass to something else, does *that* do something different with a GET and a POST? Cheers, f -- Francis Daly francis at daoine.org From lists at lazygranch.com Thu Mar 12 00:45:02 2020 From: lists at lazygranch.com (lists) Date: Wed, 11 Mar 2020 17:45:02 -0700 Subject: Prevent direct access to files but allow download from site In-Reply-To: <871rpywlds.fsf@wedjat.horus-it.com> Message-ID: You could make it harder to pass around the URL if it is dynamic. That is make the url session related. You can do a search on "uncrawlable" and then exactly the opposite of what they suggest. That is most people want to be crawled, so their advice is backwards. One thing to watch out for is Google dorking. Even if Google doesn't crawl, the html/javascript/etc leave clues to file locations. When I am on a server I will copy what is needed to block wget, curl, nutch, screamingfrog, etc. I have a number of Nginx maps I use to block and flag troublemakers. ? Original Message ? From: abbot at monksofcool.net Sent: March 11, 2020 3:57 PM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: Prevent direct access to files but allow download from site * MAXMAXarena: > I want to be able to download a file from the site's html tag [...] > But do not allow direct access and download, using the browser or > other tools such as curl or wget. Public access and restricted access are mutually exclusive. It also makes nearly no difference what utility is used to access a URL pointing to the text file you gave as an example, because they'll all send a HTTP GET request, which is what the web server expects. Attempting to limit access based on a User-Agent header or similar, to identify the client type, is easily circumvented. -Ralph _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Mar 12 01:23:15 2020 From: nginx-forum at forum.nginx.org (MAXMAXarena) Date: Wed, 11 Mar 2020 21:23:15 -0400 Subject: Prevent direct access to files but allow download from site In-Reply-To: References: Message-ID: <7cc7be7f372d7795102de17aa406fa64.NginxMailingListEnglish@forum.nginx.org> Hello @Ralph Seichter, what do you mean by "mutually exclusive"? As for the tools I mentioned, it was just an example. 
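On the "dynamic, session related URL" idea mentioned earlier in this thread: one stock way to build expiring, signed download links in nginx is the ngx_http_secure_link_module, so that a URL copied out of the download history stops working once it expires. A rough sketch, with the secret, paths and lifetime purely illustrative:

    location /assets/file/ {
        # expected URL form: /assets/file/test.txt?md5=<hash>&expires=<unix timestamp>
        secure_link     $arg_md5,$arg_expires;
        secure_link_md5 "$secure_link_expires$uri my_secret";

        if ($secure_link = "")  { return 403; }  # missing or wrong hash
        if ($secure_link = "0") { return 410; }  # link has expired
    }

The site then computes the matching md5/expires pair whenever it renders the download button, so only links handed out by the page itself work, and only for a limited time.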
Are you telling me I can't solve this problem? Hello @garic, thanks for this answer, it made me understand some things. But I don't think I understand everything you suggest to me. Are you suggesting me how to make the link uncrawlable, but how to block direct access? For example, if the user downloads the file, then goes to the download history, sees the url, copies it and re-downloads the file. How can I prevent this from happening? Maybe you've already given me the solution, but not being an expert, i need more details if it's not a problem, thanks. I found this stackoverflow topic that is interesting: https://stackoverflow.com/questions/9756837/prevent-html5-video-from-being-downloaded-right-click-saved Read the @Tzshand answer modified by @Timo Schwarzer with 28 positive votes, basically it's what I would like to do, but in my case they are pdf files and I use Nginx not Apache. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287302#msg-287302 From lists at lazygranch.com Thu Mar 12 02:18:51 2020 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 11 Mar 2020 19:18:51 -0700 Subject: Prevent direct access to files but allow download from site In-Reply-To: <7cc7be7f372d7795102de17aa406fa64.NginxMailingListEnglish@forum.nginx.org> References: <7cc7be7f372d7795102de17aa406fa64.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200311191851.4a5a583e.lists@lazygranch.com> Answers intermixed below. On Wed, 11 Mar 2020 21:23:15 -0400 "MAXMAXarena" wrote: > Hello @Ralph Seichter, > what do you mean by "mutually exclusive"? > As for the tools I mentioned, it was just an example. > Are you telling me I can't solve this problem? > > > Hello @garic, > thanks for this answer, it made me understand some things. But I > don't think I understand everything you suggest to me. > > Are you suggesting me how to make the link uncrawlable, but how to > block direct access? If you are going to block access to a file, you should protect it from the shall we say the less sophisticated user right down to the unrelenting robots.txt ignoring crawler. Most of the webmasters want to be crawled, so they have posted what stops the crawlers from reaching your files. Things like ajax used to be a problem. Over the years the crawlers have become smarter. But you can search blogs for crawling problems and then do just the opposite of their suggestion. That is make your file hard to reach. In other words, this is a research project. > > For example, if the user downloads the file, then goes to the download > history, sees the url, copies it and re-downloads the file. How can I > prevent this from happening? > > Maybe you've already given me the solution, but not being an expert, > i need more details if it's not a problem, thanks. > In the http block, "include" these files which contain maps. You will have to create these files. --------------------- include /etc/nginx/mapbadagentlarge ; include /etc/nginx/mapbadrefer ; include /etc/nginx/mapbaduri; ------------------------------------------------- Here is how I use maps. First in the location at the webroot: --- location / { index index.html index.htm; if ($bad_uri) { return 444; } if ($badagent) { return 403; } if ($bad_referer) { return 403; } ------- 403 is forbidden, but really that is found and forbidden. 444 is no reply. Technically every internet request deserves an answer but if they are wget-ing or whatever I hearby grant permission to 444 them (no answer) if you want. A sample mapbaduri file follows. 
Basically place any word you find inappropriate to be in the URL in this file. You need to use caution that the words you put in here are not containined in a URL that is legitimate. Incidentally these samples are real life. --------------------------------- map $request_uri $bad_uri { default 0; ~*simpleboot 1; ~*bitcoin 1; ~*wallet 1; } --------------------------------------------- Next up and more relevant to your question is my mapbadagentlarge file. This is where you trap curl and wget. There are many lists of bad agents online. The "" seems to trap those lines with no agent. ------------------------------------- map $http_user_agent $badagent { default 0; "" 1; ~*SpiderLing 1; ~*apitool 1; ~*pagefreezer 1; ~*curl 1; ~*360Spider 1; } ---------------------- The mapbadrefer is up to you. If you find a website you don't want linking to your website, you make a file as follows: ---------------------------------------- map $http_referer $bad_referer { default 0; "~articlevault.info" 1; "~picooly.pw" 1; "~pictylox.pw" 1; "~imageri.pw" 1; "~mimgolo.pw" 1; "~rightimage.co" 1; "~pagefreezer.com" 1; } ----------------------------------------------- Note that as you block these website you will probably loose google rank. > I found this stackoverflow topic that is interesting: > https://stackoverflow.com/questions/9756837/prevent-html5-video-from-being-downloaded-right-click-saved > > Read the @Tzshand answer modified by @Timo Schwarzer with 28 positive > votes, basically it's what I would like to do, but in my case they > are pdf files and I use Nginx not Apache. > Right click trapping is pretty old school. If you do trap right clicks, you should provide a means to save desired links using a left click. I don't know how to do that but I'm sure the code exists. Don't forget the dynamic url. That prevents the url from being reused outside of the session. I've been on the receiving end of those dynamic URLs but never found the need to write one. So that will be a research project for you. I'm in the camp that you can probably never perfectly make a file a secret and yet serve it, but you can block many users. It is like you can block the script kiddie, but a nation state will get you. > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287297,287302#msg-287302 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From abbot at monksofcool.net Thu Mar 12 02:20:53 2020 From: abbot at monksofcool.net (Ralph Seichter) Date: Thu, 12 Mar 2020 03:20:53 +0100 Subject: Prevent direct access to files but allow download from site In-Reply-To: <7cc7be7f372d7795102de17aa406fa64.NginxMailingListEnglish@forum.nginx.org> References: <7cc7be7f372d7795102de17aa406fa64.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87o8t2wby2.fsf@wedjat.horus-it.com> * MAXMAXarena: > what do you mean by "mutually exclusive"? I am assuming you have looked up the definition, so I'm not sure in what way the term could be be misunderstood? -Ralph From nginx-forum at forum.nginx.org Thu Mar 12 02:36:06 2020 From: nginx-forum at forum.nginx.org (j94305) Date: Wed, 11 Mar 2020 22:36:06 -0400 Subject: Prevent direct access to files but allow download from site In-Reply-To: References: Message-ID: <1dbba4ce438d08435c3fd0712095ea20.NginxMailingListEnglish@forum.nginx.org> I would generally say this is not possible in the way you describe it. There are two ways, however, this could be implemented: 1. 
You use one-time links to content files: all content retrieval URLs will get a parameter expires=X (how long the link should be valid) and a signature (e.g., an HMAC with a secret only known to the NGINX server). Retrieval won't go through mere file access, but through a handler that verifies the additional parameters. If they check out, you redirect to an internal location serving the file.

2. You use a session context: whenever a page validly serving a link to a certain content is delivered, you set a cookie. Retrievals of files require the cookie to be present. No cookie, no access.

Cheers,
--j.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287305#msg-287305

From kaushalshriyan at gmail.com Thu Mar 12 03:48:57 2020
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Thu, 12 Mar 2020 09:18:57 +0530
Subject: TLS 1.3 not offered and downgraded to a weaker protocol
Message-ID:

Hi,

I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.7.1908 (Core). I have configured "ssl_protocols TLSv1.2 TLSv1.3;" in /etc/nginx/nginx.conf.

#nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Now, when I run testssl.sh (https://testssl.sh/), a TLS/SSL encryption testing tool, I see the output below:

Testing protocols via sockets except NPN+ALPN
> SSLv2       not offered (OK)
> SSLv3       not offered (OK)
> TLS 1       not offered
> TLS 1.1     not offered
> TLS 1.2     offered (OK)
> TLS 1.3     not offered and downgraded to a weaker protocol
> NPN/SPDY    h2, http/1.1 (advertised)
> ALPN/HTTP2  h2, http/1.1 (offered)

Any clue regarding "TLS 1.3 not offered and downgraded to a weaker protocol"? Please let me know if you need any additional information. Thanks in advance, and I look forward to hearing from you.

Best Regards,

Kaushal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at lazygranch.com Thu Mar 12 05:23:14 2020
From: lists at lazygranch.com (lists)
Date: Wed, 11 Mar 2020 22:23:14 -0700
Subject: TLS 1.3 not offered and downgraded to a weaker protocol
In-Reply-To:
Message-ID: <0bdphaja8u3hb5svme4l768j.1583990594002@lazygranch.com>

An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Thu Mar 12 09:47:21 2020
From: nginx-forum at forum.nginx.org (MAXMAXarena)
Date: Thu, 12 Mar 2020 05:47:21 -0400
Subject: Prevent direct access to files but allow download from site
In-Reply-To: <87o8t2wby2.fsf@wedjat.horus-it.com>
References: <87o8t2wby2.fsf@wedjat.horus-it.com>
Message-ID: <6b065f0d6d463b674a2bb3cadc424c69.NginxMailingListEnglish@forum.nginx.org>

Hi, thank you for your help, but as I said, not being an expert, I have difficulty understanding certain things. If you know how to solve my problem, a small example would help me.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287312#msg-287312

From nginx-forum at forum.nginx.org Thu Mar 12 09:49:05 2020
From: nginx-forum at forum.nginx.org (MAXMAXarena)
Date: Thu, 12 Mar 2020 05:49:05 -0400
Subject: Prevent direct access to files but allow download from site
In-Reply-To: <20200311191851.4a5a583e.lists@lazygranch.com>
References: <20200311191851.4a5a583e.lists@lazygranch.com>
Message-ID:

Thanks for all this information; I will try to study and apply what you told me.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287313#msg-287313

From nginx-forum at forum.nginx.org Thu Mar 12 11:42:33 2020
From: nginx-forum at forum.nginx.org (MAXMAXarena)
Date: Thu, 12 Mar 2020 07:42:33 -0400
Subject: Prevent direct access to files but allow download from site
In-Reply-To: <1dbba4ce438d08435c3fd0712095ea20.NginxMailingListEnglish@forum.nginx.org>
References: <1dbba4ce438d08435c3fd0712095ea20.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <65a6791023ed7c002df75b6fb22dd5c6.NginxMailingListEnglish@forum.nginx.org>

j94305 Wrote:
-------------------------------------------------------
> 2. You use a session context: whenever a page validly serving a link
> to a certain content is delivered, you set a cookie. Retrievals to
> files require the cookie to be present. No cookie, no access.
>
> Cheers,
> --j.

Hi, the second option seems interesting and relatively "simple", but I am having some problems.

I put a pdf file at domain.com/assets/file/test.pdf, and I created a cookie when a user logs in:

document.cookie = "user_logged = 1";

On Nginx I created this rule:

location ~ ^/assets/file/ {
    if ($http_cookie ~* "user_logged") {
        allow all;
    }
    root /path/to/root;
}

I also tried this:

location ~ ^/assets/file/ {
    if ($cookie_user_logged = "1") {
        allow all;
    }
    root /path/to/root;
}

But it does not work correctly: either the user can download the file both via the direct link https://domain.com/assets/file/test.pdf in the browser and via the site's "a href" tag, or the download fails from both.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287315#msg-287315

From themadbeaker at gmail.com Thu Mar 12 16:00:48 2020
From: themadbeaker at gmail.com (J.R.)
Date: Thu, 12 Mar 2020 11:00:48 -0500
Subject: Prevent direct access to files but allow download from site
Message-ID:

Without you being more specific on HOW you want to block direct downloads and how extreme you want the prevention to be, it's all just a wild guess what kind of solution you want.

From the example link you gave for stackoverflow, it sounds like you just want to prevent hotlinking (i.e. downloading without the client sending a proper referral URL)...

The nginx equivalent of the apache blocking via referral can be found here: http://nginx.org/en/docs/http/ngx_http_referer_module.html

You just set the 'valid_referers' you want, then create a simple 'if' statement in a location to return a '403 forbidden'...

if ($invalid_referer) {
    return 403;
}

From nginx-forum at forum.nginx.org Thu Mar 12 17:12:12 2020
From: nginx-forum at forum.nginx.org (MAXMAXarena)
Date: Thu, 12 Mar 2020 13:12:12 -0400
Subject: Prevent direct access to files but allow download from site
In-Reply-To:
References:
Message-ID: <3a38478f59c7d3307e00e2c22316f693.NginxMailingListEnglish@forum.nginx.org>

Hi, thanks again for the reply.

HOW I want to block it, I don't know; that is why I am on this forum. I thought I was clear, and I don't know how to explain it in different words.

I want to prevent the user from downloading the file without being logged in on my site. The user MUST BE ABLE to download the file from the article pages when LOGGED in. If the user is NOT LOGGED in, he cannot download the file, so even if he recovers the url, he must receive an error or any other kind of block.

I have already tried using the valid_referers parameter; it seems that I have to specify the domain and also the address of the article pages, but they are dynamic, the article is not always the same. Maybe I'm doing something wrong.
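To show concretely what I tried with the referer check (I am not sure the syntax is exactly right, I adapted it from an example, and the names are only placeholders for my real setup):

location ~ ^/assets/file/ {
    # adapted from an example I found; "domain.com" stands for my real domain
    valid_referers none blocked server_names domain.com;
    if ($invalid_referer) {
        return 403;
    }
    root /path/to/root;
}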
If I enter my domain as valid_referer, the user can still download the file by retrieving the link of the site, from the download history (if he has previously downloaded the file) Example of an article address domain.com/my-first-article On this page there is a link to download a file located in this path: domain.com/assets/file/test.pdf The user does not need to be able to copy and paste this link into the brower to access the file directly. I'm continuing to try, but maybe I'm missing some operating logic. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287319#msg-287319 From vbart at nginx.com Thu Mar 12 19:24:51 2020 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 12 Mar 2020 22:24:51 +0300 Subject: Unit 1.16.0 release Message-ID: <6239865.uYKsFurtZ4@vbart-workstation> Hi, I'm glad to announce a new release of NGINX Unit. ------------------------------------------------------------------- To all Unit package maintainers: please don't miss the new '--tmp' configure option. It specifies the directory where the Unit daemon stores temporary files (i.e. large request bodies) at runtime. ------------------------------------------------------------------- In this release, we continue improving the functionality related to proxying and static media asset handling. Now, the new 'upstreams' object enables creating server groups for weighted round-robin load balancing: { "listeners": { "*:80": { "pass": "upstreams/rr-lb" } }, "upstreams": { "rr-lb": { "servers": { "192.168.0.100:8080": { }, "192.168.0.101:8080": { "weight": 2 } } } } } See the docs for details: - https://unit.nginx.org/configuration/#configuration-upstreams So far, it's rather basic, but many more proxying and load-balancing features are planned for future releases. By its design, the new 'fallback' option is somewhat similar to the 'try_files' directive in nginx. It allows proceeding to another action if a file isn't available: { "share": "/data/www/", "fallback": { "pass": "applications/php" } } In the example above, an attempt is made first to serve a request with a file from the "/data/www/" directory. If there's no such file, the request is passed to the "php" application. Also, you can chain such fallback actions: { "share": "/data/www/", "fallback": { "share": "/data/cache/", "fallback": { "proxy": "http://127.0.0.1:9000" } } } More info: - https://unit.nginx.org/configuration/#configuration-fallback Finally, configurations you upload can use line (//) and block (/* */) comments. Now, Unit doesn't complain; instead, it strips them from the JSON payload. This comes in handy if you store your configuration in a file and edit it manually. Changes with Unit 1.16.0 12 Mar 2020 *) Feature: basic load-balancing support with round-robin. *) Feature: a "fallback" option that performs an alternative action if a request can't be served from the "share" directory. *) Feature: reduced memory consumption by dumping large request bodies to disk. *) Feature: stripping UTF-8 BOM and JavaScript-style comments from uploaded JSON. *) Bugfix: negative address matching in router might work improperly in combination with non-negative patterns. *) Bugfix: Java Spring applications failed to run; the bug had appeared in 1.10.0. *) Bugfix: PHP 7.4 was broken if it was built with thread safety enabled. *) Bugfix: compatibility issues with some Python applications. To keep the finger on the pulse, see our further plans in the roadmap here: - https://github.com/orgs/nginx/projects/1 Also, good news for macOS users! 
Now, there's a Homebrew tap for Unit: - https://unit.nginx.org/installation/#homebrew Stay healthy! wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Thu Mar 12 22:03:31 2020 From: nginx-forum at forum.nginx.org (j94305) Date: Thu, 12 Mar 2020 18:03:31 -0400 Subject: Prevent direct access to files but allow download from site In-Reply-To: <65a6791023ed7c002df75b6fb22dd5c6.NginxMailingListEnglish@forum.nginx.org> References: <1dbba4ce438d08435c3fd0712095ea20.NginxMailingListEnglish@forum.nginx.org> <65a6791023ed7c002df75b6fb22dd5c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: The key requirement you mentioned now: the user needs to be logged in. So, the next question is: how do we know the user is logged in. It can't be just a simple cookie because that could be faked (I could add "LOGGED_IN=1" without the site authorizing this), and therefore there is no security at all. Maybe added obscurity :-) What you need to do is issue a cookie that can only have been created by NGINX, e.g., something like a string "{user}/{checksum}", where {user} is the id of the logged-in user, and {checksum} is an HMAC of a secret only known to NGINX and that user id. In consequence, without knowing the secret, nobody can fake this cookie. If NGINX gets the cookie, it can determine the HMAC of the user id and compare against the checksum provided, in order to check the validity of the request. The HMAC computation and validation can quickly be done with a few lines of Javascript, and would even allow for some sort of single sign-on across all services capable of using these functions (also in other NGINX instances knowing this secret). Maybe you want to add some form of expiration to such cookies. The classical way, however, is to use https and something like a basic authentication (auth_basic directive), and require valid authentication for those URIs referring to protected files. If your authentication scheme is a bit more complex, you may want to use auth_request or some form of initial login plus a secured session cookie as described above. Essentially, this principle (with JWT) is also used by more novel schemes such as OpenID Connect. Let's see if this helps you further. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287324#msg-287324 From nginx-forum at forum.nginx.org Thu Mar 12 22:23:34 2020 From: nginx-forum at forum.nginx.org (j94305) Date: Thu, 12 Mar 2020 18:23:34 -0400 Subject: Elasticsearch Native Binary Protocol through NGiNX Stream In-Reply-To: <728fb1871fb7ca2bb5e7f4368b2b5455.NginxMailingListEnglish@forum.nginx.org> References: <728fb1871fb7ca2bb5e7f4368b2b5455.NginxMailingListEnglish@forum.nginx.org> Message-ID: <728246e1a6a0f854c96f8e146f9a6baf.NginxMailingListEnglish@forum.nginx.org> I assume Liferay is throwing exceptions. Are these timeouts or indications of broken connections? A typical problem with the Elasticsearch Native Protocol is that it does not like third-party tear-downs of connections it uses (e.g., by NGINX or some load balancer). 
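If NGINX sits in between as a stream proxy, the first thing I would look at is the idle timeout: the stream module tears down a session after "proxy_timeout" (10 minutes by default), and the native transport client tends to treat such a teardown as a failed node. As a rough sketch of what I mean (the addresses, port and timeout values are made up, not taken from your setup):

stream {
    upstream es_transport {
        # placeholder addresses for the Elasticsearch transport port
        server 10.0.0.11:9300;
        server 10.0.0.12:9300;
    }

    server {
        listen 9300;
        proxy_connect_timeout 5s;
        # considerably longer than the 10m default, so idle transport
        # connections are not torn down by the proxy
        proxy_timeout 12h;
        proxy_pass es_transport;
    }
}

Whether that is enough depends on how Liferay pools and re-uses its transport connections, of course.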
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287254,287325#msg-287325 From r at roze.lv Thu Mar 12 22:30:18 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 13 Mar 2020 00:30:18 +0200 Subject: Prevent direct access to files but allow download from site In-Reply-To: <3a38478f59c7d3307e00e2c22316f693.NginxMailingListEnglish@forum.nginx.org> References: <3a38478f59c7d3307e00e2c22316f693.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000d01d5f8bd$cf2abe50$6d803af0$@roze.lv> > The user MUST BE ABLE to download the file from the article pages when > LOGGED. > If the user is NOT LOGGED, he cannot download the file, therefore even > recovering the url, he must receive an error or any other type of block. It's rather difficult to achieve that only with a webserver (as typically a webserver itself has no idea about users being logged in or out and just to rely on a cookie is possible but rather weak check). While you can use the secure link module (https://nginx.org/en/docs/http/ngx_http_secure_link_module.html ) with expiration a more common way would be to implement the download check in the application itself and use the nginx X-Accel-Redirect feature - https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/ Without knowing what kind of app (php/python/js/perl etc) are you running it's hard to give an exact example but the gist of the idea is to: - place the files outside webroot - configure the path as an internal nginx location - the application then checks if the user has an active session, then sends the X-Accel-Redirect header with the particular file to nginx which sends the file to user. There should be plenty of samples on internet. rr From abbot at monksofcool.net Fri Mar 13 00:15:58 2020 From: abbot at monksofcool.net (Ralph Seichter) Date: Fri, 13 Mar 2020 01:15:58 +0100 Subject: Prevent direct access to files but allow download from site In-Reply-To: <3a38478f59c7d3307e00e2c22316f693.NginxMailingListEnglish@forum.nginx.org> References: <3a38478f59c7d3307e00e2c22316f693.NginxMailingListEnglish@forum.nginx.org> Message-ID: <87o8t1un29.fsf@wedjat.horus-it.com> * MAXMAXarena: > The user MUST BE ABLE to download the file from the article pages when > LOGGED. If the user is NOT LOGGED, he cannot download the file, > therefore even recovering the url, he must receive an error or any > other type of block. You describe restricted access, not public access. That differs from your OP where you wanted to have it both ways. See NGINX docs, section "Restricting Access with HTTP Basic Authentication". The latter can be replaced with LDAP or whatever auth mechanism you prefer should basic authentication not be to your taste. -Ralph From sathish.create at gmail.com Fri Mar 13 05:47:40 2020 From: sathish.create at gmail.com (satscreate) Date: Thu, 12 Mar 2020 22:47:40 -0700 (MST) Subject: How to establish secure connection between NGINX <-> https upstream API Message-ID: <1584078460778-0.post@n2.nabble.com> Using below config, According to this, https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/# server { listen 80; server_name nginx_server_name; #... 
upstream dev { zone dev 64k; server backend.example.com:443; } location /upstream { proxy_pass https://$upstream$request_uri; proxy_ssl_certificate /etc/nginx/client.pem; proxy_ssl_certificate_key /etc/nginx/client.key; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; } } What is below client.pem & client.key? is this the nginx client files which needs to be created and signed with CA? or is that a backend.example.com ssl certs? What is trusted_ca_cert.crt;? Is this related to backend.example.com? how can i obtain this? Steps i did: Created csr & key using openssl with CN as nginx_server_name signed & Got the cert (client.crt) -> client.pem configured both client.pem & .key in config But getting below exception when i hit the API. upstream SSL certificate verify error: (19:self signed certificate in certificate chain) while SSL handshaking to upstream, client: , server: , request: "POST /getsomething HTTP/1.1", upstream: "https://backend.example.com:443/getsomething", host: "nginx_server_ip" -- Sent from: http://nginx.2469901.n2.nabble.com/ From sathish.create at gmail.com Fri Mar 13 07:10:15 2020 From: sathish.create at gmail.com (satscreate) Date: Fri, 13 Mar 2020 00:10:15 -0700 (MST) Subject: upstream SSL certificate does not match "dev_server" while SSL handshaking to upstream Message-ID: <1584083415541-0.post@n2.nabble.com> Hi Team, Am trying to establish encrypted communication between NGINX <-> API's (POST, GET) with below configuration. But am facing some ssl handshake issue. *Config:* upstream dev_server { zone dev_server 64k; server dev1.sysmac.com:443; server dev2.sysmac.com:443; server dev3.sysmac.com:443; } server { ssl_certificate /etc/nginx/ssl/nginx-bundle.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; resolver 8.8.8.8 valid=300s; resolver_timeout 5s; ssl_session_timeout 5m; add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options nosniff; ssl_dhparam /etc/ssl/certs/dhparam.pem; # Policy section # location = /_dosomething { internal; proxy_pass https://$upstream$request_uri; proxy_ssl_protocols TLSv1.2 TLSv1.3; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_ssl_trusted_certificate /etc/ssl/certs/ca-bundle.trust.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; proxy_ssl_server_name on; } } *Error:* upstream SSL certificate does not match "dev_server" while SSL handshaking to upstream, client: , server: , request: "POST /dosomething HTTP/1.1", upstream: "https://:443/dosomething", host: "" *Verified with openssl:* openssl s_client -servername NAME -connect dev1.sysmac.com:443 -showcerts -CApath /etc/ssl/certs/ca-bundle.trust.crt CONNECTED(00000003) depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA verify return:1 depth=1 C = US, O = DigiCert Inc, CN = DigiCert SHA2 Secure Server CA verify return:1 depth=0 C = US, ST = , L = , O = , OU = , CN = dev5.sysmac.com verify return:1 --- Certificate chain 0 s:/C=US/ST=/L=/O=/OU=/CN=g4t7453.houston.hpe.com i:/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA -----BEGIN CERTIFICATE----- 
MIIHdzCCBl+gAwIBAgIQAblIEjggyGk4cIxk4xfU6TANBgkqhkiG9w0BAQsFADBN MQswCQYDVQQGEw............... -----END CERTIFICATE----- 1 s:/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA -----BEGIN CERTIFICATE----- MIIElDCCA3ygAwIBAgIQAf2j627KdciIQ4tyS8+8kTANBgkqhkiG9w0BAQsFADBh MQswCQYDVQQGEwJVUzEVM...... -----END CERTIFICATE----- 2 s:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA -----BEGIN CERTIFICATE----- MIIDrzCCApegAwIBAgIQCD..... -----END CERTIFICATE----- --- Server certificate subject=/C=US/ST=/L=/O=/OU=servers/CN=dev5.sysmac.com issuer=/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA --- No client certificate CA names sent Peer signing digest: SHA512 Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 4746 bytes and written 428 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: Session-ID-ctx: Master-Key: Key-Arg : None Krb5 Principal: None PSK identity: None PSK identity hint: None TLS session ticket lifetime hint: 300 (seconds) TLS session ticket: 0000 - 83 b1 99 75 73 6e 7c 05-33 1b 02 70 67 68 1f b4 ...usn|.3..pgh.. 00a0 - 18 2b b0 1f 18 20 24 a4-ac ab e4 62 57 f6 1b 53 .+... $....bW..S 00b0 - c3 d8 db 4b 15 cb 82 de-78 52 21 03 c6 25 24 06 ...K....xR!..%$. Start Time: 1584081168 Timeout : 300 (sec) Verify return code: 0 (ok) --- *Questions:* 1. All of my upstream servers has ssl certificate configured with same ssl contains CN=dev5.sysmac.com which i can see from openssl. In such case is this the reason am getting not found error from upstream block? 2. If not how to deal with such cases? 3. Also looking for debugging the same for ssl certificate does not match. Do i need to especially specify ssl cert for each /dosomething block? Please help!!! -- Sent from: http://nginx.2469901.n2.nabble.com/ From nginx-forum at forum.nginx.org Fri Mar 13 07:50:59 2020 From: nginx-forum at forum.nginx.org (galew) Date: Fri, 13 Mar 2020 03:50:59 -0400 Subject: Elasticsearch Native Binary Protocol through NGiNX Stream In-Reply-To: <728246e1a6a0f854c96f8e146f9a6baf.NginxMailingListEnglish@forum.nginx.org> References: <728fb1871fb7ca2bb5e7f4368b2b5455.NginxMailingListEnglish@forum.nginx.org> <728246e1a6a0f854c96f8e146f9a6baf.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, thanks for the answer The problem was the syntax in the Elasticsearch Native Binary Protocol Client, which tried to sniff the configuration behind the proxy. Setting the clientTransportSniff="false" and transport addresses with the right syntax was enough. So this case solved, quilty was Liferay Client configuration. Cheers Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287254,287331#msg-287331 From sathish.create at gmail.com Fri Mar 13 08:58:59 2020 From: sathish.create at gmail.com (satscreate) Date: Fri, 13 Mar 2020 01:58:59 -0700 (MST) Subject: upstream SSL certificate does not match "dev_server" while SSL handshaking to upstream In-Reply-To: <1584083415541-0.post@n2.nabble.com> References: <1584083415541-0.post@n2.nabble.com> Message-ID: <1584089939729-0.post@n2.nabble.com> Any Update on this issue? 
-- Sent from: http://nginx.2469901.n2.nabble.com/ From martin.grigorov at gmail.com Fri Mar 13 13:12:35 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Fri, 13 Mar 2020 15:12:35 +0200 Subject: Problem installing in custom folder when Perl is enabled Message-ID: Hello Nginx team, I'm facing the following problem when I try to install Nginx in a custom folder: ... objs/ngx_modules.o \ -ldl -lpthread -lcrypt -lpcre -lssl -lcrypto -ldl -lpthread -lz -lxml2 -lxslt -lexslt -lgd -lGeoIP \ -Wl,-E -fstack-protector-strong -L/usr/local/lib -L/usr/lib/aarch64-linux-gnu/perl/5.26/CORE -lperl -ldl -lm -lpthread -lc -lcrypt \ -Wl,-E sed -e "s|%%PREFIX%%|/home/ubuntu/hg/nginx/nginx-build|" \ -e "s|%%PID_PATH%%|/home/ubuntu/hg/nginx/nginx-build/logs/nginx.pid|" \ -e "s|%%CONF_PATH%%|/home/ubuntu/hg/nginx/nginx-build/conf/nginx.conf|" \ -e "s|%%ERROR_LOG_PATH%%|/home/ubuntu/hg/nginx/nginx-build/logs/error.log|" \ < docs/man/nginx.8 > objs/nginx.8 make[1]: Leaving directory '/home/ubuntu/hg/nginx/nginx' make -f objs/Makefile install make[1]: Entering directory '/home/ubuntu/hg/nginx/nginx' cd objs/src/http/modules/perl && make install make[2]: Entering directory '/home/ubuntu/hg/nginx/nginx/objs/src/http/modules/perl' "/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- nginx.bs blib/arch/auto/nginx/nginx.bs 644 Manifying 1 pod document Files found in blib/arch: installing files in blib/lib into architecture dependent library tree !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ERROR: Can't create '/usr/local/lib/aarch64-linux-gnu/perl/5.26.1' Do not have write permissions on '/usr/local/lib/aarch64-linux-gnu/perl/5.26.1' !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! at -e line 1. Makefile:802: recipe for target 'pure_site_install' failed make[2]: *** [pure_site_install] Error 13 make[2]: Leaving directory '/home/ubuntu/hg/nginx/nginx/objs/src/http/modules/perl' objs/Makefile:1795: recipe for target 'install_perl_modules' failed make[1]: *** [install_perl_modules] Error 2 make[1]: Leaving directory '/home/ubuntu/hg/nginx/nginx' Makefile:11: recipe for target 'install' failed make: *** [install] Error 2 chown: cannot access '/home/ubuntu/hg/nginx/nginx-build': No such file or directory I do the following: $ cd /home/ubuntu/hg/nginx $ hg clone https://hg.nginx.org/nginx $ cd nginx $ ./auto/configure --prefix=/home/ubuntu/hg/nginx/nginx-build --with-http_perl_module $ make $ make install If I remove " --with-http_perl_module" then the installation is successful. But with Perl it still tries to install at /usr/local/lib and fails with permissions denied. Is this a problem in Nginx or in Perl itself ? P.S. I have some more --with-xyz modules in the configure parameters but there are no problems with them and I didn't list them above. Regards, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 13 13:20:01 2020 From: nginx-forum at forum.nginx.org (MAXMAXarena) Date: Fri, 13 Mar 2020 09:20:01 -0400 Subject: Prevent direct access to files but allow download from site In-Reply-To: References: <1dbba4ce438d08435c3fd0712095ea20.NginxMailingListEnglish@forum.nginx.org> <65a6791023ed7c002df75b6fb22dd5c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: I managed to solve using cookies, but as you said, it is not secure. Although I have no experience, I managed to bypass the control. 
Maybe it's not the safest way like I did, in any case it is not recommended to proceed in this way. I have experience with auth_basic, but using the terminal to create user and password and to grant access. Too many different information in this topic that I have opened, my fault, I want to simplify it. I know I previously said I wanted to avoid using Curl, but I would like to understand the mechanism. Imagine that the user logs in and i provide him an url, for example: curl -u {{user.id}}:{{unique_value}} https://domain.com/assets/file/test.txt Or curl -O https://domain.com/assets/file/test.txt?param={{unique_value}} How can I find out with Nginx if the username and password are real or that the user/unique_value is still active? Should I somehow access the database or am I wrong? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287297,287335#msg-287335 From francis at daoine.org Fri Mar 13 13:30:54 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Mar 2020 13:30:54 +0000 Subject: How to establish secure connection between NGINX <-> https upstream API In-Reply-To: <1584078460778-0.post@n2.nabble.com> References: <1584078460778-0.post@n2.nabble.com> Message-ID: <20200313133054.GF26683@daoine.org> On Thu, Mar 12, 2020 at 10:47:40PM -0700, satscreate wrote: Hi there, > https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/# > location /upstream { > proxy_pass https://$upstream$request_uri; > proxy_ssl_certificate /etc/nginx/client.pem; > proxy_ssl_certificate_key /etc/nginx/client.key; > proxy_ssl_trusted_certificate /etc/nginx/trusted_ca_cert.crt; > What is below client.pem & client.key? > > is this the nginx client files which needs to be created and signed with CA? The page you link to says """ Add the client certificate and the key that will be used to authenticate NGINX on each upstream server with proxy_ssl_certificate and proxy_ssl_certificate_key directives: """ and the documentation for those directives is at http://nginx.org/r/proxy_ssl_certificate Those files relate to the client certificate that nginx will offer to the upstream server in order to identify itself. > What is trusted_ca_cert.crt;? http://nginx.org/r/proxy_ssl_trusted_certificate That file allows nginx to verify that the certificate presented by the upstream server, is one that nginx is willing to consider acceptable. > Is this related to backend.example.com? how can i obtain this? Yes; the Certificate Authority that signed the backend.example.com certificate should make this available to anyone they want to trust them. > But getting below exception when i hit the API. > > upstream SSL certificate verify error: (19:self signed certificate in > certificate chain) while SSL handshaking to upstream, client: , > server: , request: "POST /getsomething HTTP/1.1", upstream: > "https://backend.example.com:443/getsomething", host: "nginx_server_ip" I believe that that says that nginx (as the client) does not accept the certificate provided by the server at backend.example.com; probably due to nginx's proxy_ssl_trusted_certificate configuration not being what it expects. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Mar 13 14:22:49 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Mar 2020 17:22:49 +0300 Subject: Problem installing in custom folder when Perl is enabled In-Reply-To: References: Message-ID: <20200313142249.GL12894@mdounin.ru> Hello! 
On Fri, Mar 13, 2020 at 03:12:35PM +0200, Martin Grigorov wrote: > I'm facing the following problem when I try to install Nginx in a custom > folder: [...] > make[1]: Entering directory '/home/ubuntu/hg/nginx/nginx' > cd objs/src/http/modules/perl && make install > make[2]: Entering directory > '/home/ubuntu/hg/nginx/nginx/objs/src/http/modules/perl' > "/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- nginx.bs > blib/arch/auto/nginx/nginx.bs 644 > Manifying 1 pod document > Files found in blib/arch: installing files in blib/lib into architecture > dependent library tree > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > ERROR: Can't create '/usr/local/lib/aarch64-linux-gnu/perl/5.26.1' > Do not have write permissions on > '/usr/local/lib/aarch64-linux-gnu/perl/5.26.1' > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > at -e line 1. > Makefile:802: recipe for target 'pure_site_install' failed [...] > I do the following: > > $ cd /home/ubuntu/hg/nginx > $ hg clone https://hg.nginx.org/nginx > $ cd nginx > $ > ./auto/configure --prefix=/home/ubuntu/hg/nginx/nginx-build > --with-http_perl_module > $ make > $ make install > > > If I remove " --with-http_perl_module" then the installation is > successful. > But with Perl it still tries to install at /usr/local/lib and fails with > permissions denied. > Is this a problem in Nginx or in Perl itself ? By default the nginx perl module is installed by into system's default path for perl modules. That is, the path is not set by the "--prefix" parameter, but rather comes from the Perl itself. You can specify a different path using the "--with-perl_modules_path" configure parameter. -- Maxim Dounin http://mdounin.ru/ From chris at cretaforce.gr Fri Mar 13 14:24:10 2020 From: chris at cretaforce.gr (Christos Chatzaras) Date: Fri, 13 Mar 2020 16:24:10 +0200 Subject: Double RAM usage after Nginx reload Message-ID: <0DB64BE8-7610-4A0E-9EAE-08061CC75956@cretaforce.gr> Any idea why the "cache manager process" uses double RAM after the reload? 
System: nginx version: nginx/1.16.1 built with OpenSSL 1.1.1d-freebsd 10 Sep 2019 TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --user=www --group=www --modules-path=/usr/local/libexec/nginx --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx/access.log --with-http_v2_module --with-http_realip_module --with-pcre --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-cc-opt='-DNGX_HAVE_INET6=0 -I /usr/local/include' --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-stream=dynamic --add-dynamic-module=/usr/ports/www/nginx/work/nginx-http-auth-digest-cd86418 How to reproduce it: 1) service nginx start 2) top output: PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 99240 www 1 20 0 926M 519M kqread 5 0:02 0.65% nginx: worker process (nginx) 4433 www 1 20 0 934M 525M kqread 4 0:00 0.00% nginx: cache manager process (nginx) 3) service nginx reload 4) top output: PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 49456 www 1 20 0 930M 523M kqread 6 0:03 0.11% nginx: worker process (nginx) 50868 www 1 20 0 1759M 1028M kqread 2 0:00 0.00% nginx: cache manager process (nginx) From mahmood.nt at gmail.com Sat Mar 14 08:09:45 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Sat, 14 Mar 2020 11:39:45 +0330 Subject: nginx and php settings Message-ID: Hi, I have install nginx 1.0.15 and php 5.3 on a VM running Ubuntu 14.04. 
The configuration file looks like below $ cat /usr/local/nginx/conf/nginx.conf #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/local/nginx/html/public_html/$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I also have put a phpinfo.php file like this root at fr13:/usr/local/nginx/html# cat phpinfo.php When I open the browser and enter localhost/phpinfo.php, I get this message The page you are looking for is temporarily unavailable. Please try again later. How can I resolve that? Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Sat Mar 14 10:56:51 2020 From: hobson42 at gmail.com (Ian Hobson) Date: Sat, 14 Mar 2020 10:56:51 +0000 Subject: nginx and php settings In-Reply-To: References: Message-ID: <7604f303-da24-cec4-26e7-b460d2ee27fe@gmail.com> Hi Mamood, On 14/03/2020 08:09, Mahmood Naderan wrote: > Hi, > I have install nginx 1.0.15 and php 5.3 on a VM running Ubuntu 14.04. > The configuration file looks like below > > $ cat /usr/local/nginx/conf/nginx.conf > #user ?nobody; > worker_processes ?1; > #error_log ?logs/error.log; > #error_log ?logs/error.log ?notice; > #error_log ?logs/error.log ?info; > #pid ? ? ? ?logs/nginx.pid; > events { > ? ? worker_connections ?1024; > } > http { > ? ? include ? ? ? mime.types; > ? ? default_type ?application/octet-stream; > ? ? #log_format ?main ?'$remote_addr - $remote_user [$time_local] > "$request" ' > ? ? # ? ? ? ? ? ? ? ? ?'$status $body_bytes_sent "$http_referer" ' > ? ? # ? ? ? ? ? ? ? ? ?'"$http_user_agent" "$http_x_forwarded_for"'; > > ? ? #access_log ?logs/access.log ?main; > ? ? sendfile ? ? ? ?on; > ? ? #tcp_nopush ? ? on; > ? ? #keepalive_timeout ?0; > ? ? keepalive_timeout ?65; > ? ? #gzip ?on; > ? ? 
server { > ? ? ? ? listen ? ? ? 80; > ? ? ? ? server_name ?localhost; > ? ? ? ? #charset koi8-r; > ? ? ? ? #access_log ?logs/host.access.log ?main; > ? ? ? ? location / { > ? ? ? ? ? ? root ? html; This is not the same as below. > ??? ? ? ? ? index ?index.html index.htm; > ? ? ? ? } > ? ? ? ? #error_page ?404 ? ? ? ? ? ? ?/404.html; > ? ? ? ? # redirect server error pages to the static page /50x.html > ? ? ? ? # > ? ? ? ? error_page ? 500 502 503 504 ?/50x.html; > ? ? ? ? location = /50x.html { > ? ? ? ? ? ? root ? html; > ? ? ? ? } > ? ? ? ? # proxy the PHP scripts to Apache listening on 127.0.0.1:80 > > ? ? ? ? # > ? ? ? ? #location ~ \.php$ { > ? ? ? ? # ? ?proxy_pass http://127.0.0.1; > ? ? ? ? #} > ? ? ? ? # pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 > ? ? ? ? # > ? ? ? ? location ~ \.php$ { > ? ? ? ? ? ? root ? ? ? ? ? html; > ? ? ? ? ? ? fastcgi_pass 127.0.0.1:9000 ; > ? ? ? ? ? ? fastcgi_index ?index.php; > ? ? ? ? ? ? fastcgi_param ?SCRIPT_FILENAME > ?/usr/local/nginx/html/public_html/$fastcgi_script_name; I think you need to remove the public_html/ part of this. I would expect the line to be fastcgi_param SCRIPT_FILENAME /usr/local/nginx/html/$fastcgi_script_name; Other things to check are: 1) Does the fastcgi process (probably user www-data) have permission to read your phpinfo.php file? 2) Uncomment the error-log lines near the top, and choose a suitable level of logging, and then check the error-log after the problem. Hope this helps. Ian > > I also have put a phpinfo.php file like this > > root at fr13:/usr/local/nginx/html# cat phpinfo.php > > > > When I open the browser and enter localhost/phpinfo.php, I get this message > > The page you are looking for is temporarily unavailable. > Please try again later. > -- This email has been checked for viruses by AVG. https://www.avg.com From mahmood.nt at gmail.com Sat Mar 14 12:24:19 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Sat, 14 Mar 2020 15:54:19 +0330 Subject: nginx and php settings In-Reply-To: <7604f303-da24-cec4-26e7-b460d2ee27fe@gmail.com> References: <7604f303-da24-cec4-26e7-b460d2ee27fe@gmail.com> Message-ID: >fastcgi_param SCRIPT_FILENAME /usr/local/nginx/html/$fastcgi_script_name; Done. >1) Does the fastcgi process (probably user www-data) have permission to >read your phpinfo.php file? I see this root at fr13:/usr/local# ls -lh nginx/ total 36K drwx------ 2 nobody root 4.0K ???? 13 16:53 client_body_temp drwxr-xr-x 2 root root 4.0K ???? 14 15:51 conf drwx------ 2 nobody root 4.0K ???? 13 16:53 fastcgi_temp drwxr-xr-x 9 root root 4.0K ???? 14 11:32 html drwxr-xr-x 2 root root 4.0K ???? 14 15:48 logs drwx------ 2 nobody root 4.0K ???? 13 16:53 proxy_temp drwxr-xr-x 2 root root 4.0K ???? 13 16:45 sbin drwx------ 2 nobody root 4.0K ???? 13 16:53 scgi_temp drwx------ 2 nobody root 4.0K ???? 13 16:53 uwsgi_temp root at fr13:/usr/local# ls -lh lib/ total 12K drwxr-xr-x 15 root root 4.0K ???? 14 11:28 php drwxrwsr-x 4 root staff 4.0K ???? 5 2019 python2.7 drwxrwsr-x 3 root staff 4.0K ???? 5 2019 python3.4 root at fr13:/usr/local# ls -lh lib/php/ total 132K drwxr-xr-x 2 root root 4.0K ???? 13 17:34 Archive drwxr-xr-x 2 root root 4.0K ???? 14 11:19 build drwxr-xr-x 2 root root 4.0K ???? 13 17:34 Console drwxr-xr-x 4 root root 4.0K ???? 13 17:34 data drwxr-xr-x 6 root root 4.0K ???? 13 17:34 doc drwxr-xr-x 3 root root 4.0K ???? 13 17:41 extensions drwxr-xr-x 2 root root 4.0K ???? 13 17:34 OS drwxr-xr-x 11 root root 4.0K ???? 13 17:34 PEAR -rw-r--r-- 1 root root 1.1K ???? 
13 17:34 PEAR5.php -rw-r--r-- 1 root root 15K ???? 13 17:34 pearcmd.php -rw-r--r-- 1 root root 34K ???? 13 17:34 PEAR.php -rw-r--r-- 1 root root 1012 ???? 13 17:34 peclcmd.php -rw-r--r-- 1 root root 1.5K ???? 14 00:08 php.ini drwxr-xr-x 3 root root 4.0K ???? 13 17:34 Structures -rw-r--r-- 1 root root 21K ???? 13 17:34 System.php drwxr-xr-x 4 root root 4.0K ???? 13 17:34 test drwxr-xr-x 2 root root 4.0K ???? 13 17:34 XML Do you mean chown www-data:www-data /usr/lib/local/php/php.ini ? but the upper directory, php/ is for root. Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From sathish.create at gmail.com Sat Mar 14 13:36:27 2020 From: sathish.create at gmail.com (satscreate) Date: Sat, 14 Mar 2020 06:36:27 -0700 (MST) Subject: How to establish secure connection between NGINX <-> https upstream API In-Reply-To: <20200313133054.GF26683@daoine.org> References: <1584078460778-0.post@n2.nabble.com> <20200313133054.GF26683@daoine.org> Message-ID: <1584192987318-0.post@n2.nabble.com> Thanks Buddy, It helps, I set up a proper CA cert in pem format and it does make connections. Thanks a lot. -- Sent from: http://nginx.2469901.n2.nabble.com/ From sathish.create at gmail.com Sat Mar 14 13:37:42 2020 From: sathish.create at gmail.com (satscreate) Date: Sat, 14 Mar 2020 06:37:42 -0700 (MST) Subject: =?UTF-8?B?TkdJTlggLSB74oCcc3RhdHVz4oCdOjQwMCzigJxtZXNzYWdl4oCdOuKAnEJhZCBy?= =?UTF-8?B?ZXF1ZXN04oCdfSBFdmVyeSBhbHRlcm5hdGUgcmVxdWVzdCBnZXR0aW5nIDQw?= =?UTF-8?B?MA==?= Message-ID: <1584193062422-0.post@n2.nabble.com> I Have below nginx config and the api hit works fine every first time and alternate hit getting 400. nginx.conf: http { lua_package_path '~/lua/?.lua;;'; # Allow larger than normal headers large_client_header_buffers 4 64k; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; include /etc/nginx/gateway.conf; } gateway.conf server { listen 443 ssl; location = /_dosomething { internal; # Validate oauth token and add custom nginx access_token into the request header access_by_lua_file /etc/nginx/api_conf.d/oauth/oauth_introspec.lua; mirror /_NULL; # Create a copy of the request to capture request body client_body_in_single_buffer on; # Minimize memory copy operations on request body client_body_buffer_size 16k; # Largest body to keep in memory (before writing to file) client_max_body_size 16k; # Policy configuration here (authentication, rate limiting, logging, more...) proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass https://$upstream$request_uri; } } Am not seeing any specific errors in nginx error log too, i can see the log before proxy pass. added below line too error_log logs/error.log debug; Every alternate requests are getting 400 bad request. But constant interval requests are getting success response and if i test with 2 requests per second then it fails with this error. am out of options. Please help. -- Sent from: http://nginx.2469901.n2.nabble.com/ From mahmood.nt at gmail.com Sun Mar 15 12:21:43 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Sun, 15 Mar 2020 15:51:43 +0330 Subject: nginx and php settings In-Reply-To: References: <7604f303-da24-cec4-26e7-b460d2ee27fe@gmail.com> Message-ID: I don't know what is going on for this problem. All things look normal. I will post in another thread which I did all necessary modifications. 
Regards, Mahmood On Sat, Mar 14, 2020 at 3:54 PM Mahmood Naderan wrote: > >fastcgi_param SCRIPT_FILENAME > /usr/local/nginx/html/$fastcgi_script_name; > Done. > > >1) Does the fastcgi process (probably user www-data) have permission to > >read your phpinfo.php file? > > I see this > > root at fr13:/usr/local# ls -lh nginx/ > total 36K > drwx------ 2 nobody root 4.0K ???? 13 16:53 client_body_temp > drwxr-xr-x 2 root root 4.0K ???? 14 15:51 conf > drwx------ 2 nobody root 4.0K ???? 13 16:53 fastcgi_temp > drwxr-xr-x 9 root root 4.0K ???? 14 11:32 html > drwxr-xr-x 2 root root 4.0K ???? 14 15:48 logs > drwx------ 2 nobody root 4.0K ???? 13 16:53 proxy_temp > drwxr-xr-x 2 root root 4.0K ???? 13 16:45 sbin > drwx------ 2 nobody root 4.0K ???? 13 16:53 scgi_temp > drwx------ 2 nobody root 4.0K ???? 13 16:53 uwsgi_temp > root at fr13:/usr/local# ls -lh lib/ > total 12K > drwxr-xr-x 15 root root 4.0K ???? 14 11:28 php > drwxrwsr-x 4 root staff 4.0K ???? 5 2019 python2.7 > drwxrwsr-x 3 root staff 4.0K ???? 5 2019 python3.4 > root at fr13:/usr/local# ls -lh lib/php/ > total 132K > drwxr-xr-x 2 root root 4.0K ???? 13 17:34 Archive > drwxr-xr-x 2 root root 4.0K ???? 14 11:19 build > drwxr-xr-x 2 root root 4.0K ???? 13 17:34 Console > drwxr-xr-x 4 root root 4.0K ???? 13 17:34 data > drwxr-xr-x 6 root root 4.0K ???? 13 17:34 doc > drwxr-xr-x 3 root root 4.0K ???? 13 17:41 extensions > drwxr-xr-x 2 root root 4.0K ???? 13 17:34 OS > drwxr-xr-x 11 root root 4.0K ???? 13 17:34 PEAR > -rw-r--r-- 1 root root 1.1K ???? 13 17:34 PEAR5.php > -rw-r--r-- 1 root root 15K ???? 13 17:34 pearcmd.php > -rw-r--r-- 1 root root 34K ???? 13 17:34 PEAR.php > -rw-r--r-- 1 root root 1012 ???? 13 17:34 peclcmd.php > -rw-r--r-- 1 root root 1.5K ???? 14 00:08 php.ini > drwxr-xr-x 3 root root 4.0K ???? 13 17:34 Structures > -rw-r--r-- 1 root root 21K ???? 13 17:34 System.php > drwxr-xr-x 4 root root 4.0K ???? 13 17:34 test > drwxr-xr-x 2 root root 4.0K ???? 13 17:34 XML > > > > Do you mean chown www-data:www-data /usr/lib/local/php/php.ini > ? > > but the upper directory, php/ is for root. > > Regards, > Mahmood > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahmood.nt at gmail.com Sun Mar 15 12:28:30 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Sun, 15 Mar 2020 15:58:30 +0330 Subject: Unable to see a php page Message-ID: Hi, For a test, I have installed nginx 1.0.15 with php 5.3 on an Ubuntu 14.04. The settings related to php in nginx.conf are as below where I removed the comments for simplicity. server { listen 80; server_name localhost; location / { root /home/ubuntu/htdocs/; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ \.php$ { root /home/ubuntu/htdocs/public_html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/ubuntu/htdocs/public_html/$fastcgi_script_name; include fastcgi_params; } } The document root is here. ubuntu at fr13:~$ ls -l htdocs/ total 36 -rw-r--r-- 1 www-data www-data 1422 ???? 15 15:28 build.xml drwxr-xr-x 2 www-data www-data 4096 ???? 15 15:28 classes drwxr-xr-x 2 www-data www-data 4096 ???? 15 15:28 controllers drwxr-xr-x 2 www-data www-data 4096 ???? 15 15:35 etc drwxr-xr-x 2 www-data www-data 4096 ???? 15 15:28 includes -rw-r--r-- 1 www-data www-data 152 ???? 15 15:29 index.html drwxr-xr-x 2 www-data www-data 4096 ???? 15 15:28 lib drwxr-xr-x 6 www-data www-data 4096 ???? 15 15:28 public_html drwxr-xr-x 2 www-data www-data 4096 ???? 
15 15:28 views The index.html is a simple welcome message. Also ubuntu at fr13:~$ ls -l htdocs/public_html/index.php -rw-r--r-- 1 www-data www-data 7556 ???? 15 15:28 htdocs/public_html/index.php When I open the browser and enter localhost, I can see the welcome message. That means the basic functionality is fine. However, when I enter localhost/public_html/index.php I get this message in the browser: The page you are looking for is temporarily unavailable. Please try again later. At the same time, I see this entry in /usr/local/nginx/logs/error.log 2020/03/15 15:50:20 [error] 4808#0: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /public_html/index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost" Why I get connection refused? What else should I check for more debugging? Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Mar 15 13:02:28 2020 From: nginx-forum at forum.nginx.org (bubunia2000ster) Date: Sun, 15 Mar 2020 09:02:28 -0400 Subject: How to fix ERR_RESPONSE_HEADERS_TRUNCATED error Message-ID: Hi all, I am implementing a small application to test the path based routing functionality using nginx. My set up is something as below: User ->AWS Route 53(DNS resolution) -> AWS NLB (443)-> nginx(implements path based routing to different backend EC2 instances)------->http(backend ec2 instances) SSL is terminated at the nginx. When I run directly this URL with https://:6667 it works fine and page loads properly. But through nginx it doesnt work and I get the below error(https:///rest/). might be temporarily down or it may have moved permanently to a new web address. ERR_RESPONSE_HEADERS_TRUNCATED upstream app1 { keepalive 16; server test1.example.com:6666 max_fails=2 fail_timeout=300s;server test2.example.com:6666 max_fails=2 fail_timeout=300s; } upstream app1_external { sticky name=srv_id expires=1h domain= httponly secure path=/; keepalive 16; server test1.example.com:6667 max_fails=2 fail_timeout=300s;server test2.example.com:6667 max_fails=2 fail_timeout=300s; } server { listen 443 ssl; access_log /var/log/nginx/access.log main; # listen [::]:443 ssl proxy_protocol; server_name ssl_certificate "/etc/ssl/nginx/server.crt"; ssl_certificate_key "/etc/ssl/nginx/server.key"; ssl_session_cache shared:SSL:10m; ssl_session_timeout 30m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers EECDH+AESGCM:EDH+AESGCM; ssl_prefer_server_ciphers on; location / { proxy_pass https://app1/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_protocol_addr; real_ip_header X-Real-IP; proxy_http_version 1.1; proxy_set_header Connection ""; } location /rest/ { proxy_pass http://app1_external/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $scheme; real_ip_header X-Real-IP; proxy_http_version 1.1; proxy_set_header Connection ""; } } nginx version: nginx/1.13.5 Can anyone help me in this regard how to fix this issue? 
Regards Pradeep Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287347,287347#msg-287347 From francis at daoine.org Sun Mar 15 13:23:10 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 15 Mar 2020 13:23:10 +0000 Subject: Unable to see a php page In-Reply-To: References: Message-ID: <20200315132310.GG26683@daoine.org> On Sun, Mar 15, 2020 at 03:58:30PM +0330, Mahmood Naderan wrote: Hi there, > For a test, I have installed nginx 1.0.15 with php 5.3 on an Ubuntu 14.04. > At the same time, I see this entry in /usr/local/nginx/logs/error.log > > 2020/03/15 15:50:20 [error] 4808#0: *5 connect() failed (111: Connection > refused) while connecting to upstream, client: 127.0.0.1, server: > localhost, request: "GET /public_html/index.php HTTP/1.1", upstream: > "fastcgi://127.0.0.1:9000", host: "localhost" > > > Why I get connection refused? > What else should I check for more debugging? nginx does not "do" php. nginx expects that you have a separate thing -- in this case, a fastcgi server listening on tcp port 9000 -- that handles php. "Connection refused" suggests that you are not running a fastcgi server there that nginx can access. You will want to find the Ubuntu method of running the PHP fastcgi service, possibly called "php-fpm"; and making sure that it is listening for requests in the place where your nginx does "fasctgi_pass" to. (As an aside -- the version numbers you mention are not the newest. It is possible that there are some bugs in those versions that mean that they do not work well together. If that happens, then you may have more debugging to do, or choose to use newer versions of things.) And also -- based on your config, you probably want to make a request for http://localhost/index.php, not http://localhost/public_html/index.php. But that will only matter after the current error is resolved. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun Mar 15 13:28:29 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 15 Mar 2020 13:28:29 +0000 Subject: How to fix ERR_RESPONSE_HEADERS_TRUNCATED error In-Reply-To: References: Message-ID: <20200315132829.GH26683@daoine.org> On Sun, Mar 15, 2020 at 09:02:28AM -0400, bubunia2000ster wrote: Hi there, > User ->AWS Route 53(DNS resolution) -> AWS NLB > (443)-> nginx(implements path based routing to different backend EC2 > instances)------->http(backend ec2 instances) That says "the plan is http from nginx to backend". > SSL is terminated at the nginx. When I run directly this URL with > https://:6667 it works fine and page loads properly. But That says "it works using https to backend". > upstream app1_external { > sticky name=srv_id expires=1h domain= httponly secure path=/; > keepalive 16; > server test1.example.com:6667 max_fails=2 fail_timeout=300s;server > test2.example.com:6667 max_fails=2 fail_timeout=300s; > } The backend is port 6667, as above. > location /rest/ { > proxy_pass http://app1_external/; And you try to talk http to it. > nginx version: nginx/1.13.5 > Can anyone help me in this regard how to fix this issue? Either - make your port 6667 listener use http, not https; or tell nginx to talk https to it. f -- Francis Daly francis at daoine.org From mahmood.nt at gmail.com Sun Mar 15 13:49:57 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Sun, 15 Mar 2020 17:19:57 +0330 Subject: Unable to see a php page In-Reply-To: <20200315132310.GG26683@daoine.org> References: <20200315132310.GG26683@daoine.org> Message-ID: Thank you for the feedback. 
I want to do some tests on a VM albeit the versions are old. I started php-fpm and opened localhost/index.php This time I see the content of index.php as a plain text. I have to do more debugging. Thanks. Regards, Mahmood On Sun, Mar 15, 2020 at 4:53 PM Francis Daly wrote: > On Sun, Mar 15, 2020 at 03:58:30PM +0330, Mahmood Naderan wrote: > > Hi there, > > > For a test, I have installed nginx 1.0.15 with php 5.3 on an Ubuntu > 14.04. > > > At the same time, I see this entry in /usr/local/nginx/logs/error.log > > > > 2020/03/15 15:50:20 [error] 4808#0: *5 connect() failed (111: Connection > > refused) while connecting to upstream, client: 127.0.0.1, server: > > localhost, request: "GET /public_html/index.php HTTP/1.1", upstream: > > "fastcgi://127.0.0.1:9000", host: "localhost" > > > > > > Why I get connection refused? > > What else should I check for more debugging? > > nginx does not "do" php. nginx expects that you have a separate thing > -- in this case, a fastcgi server listening on tcp port 9000 -- that > handles php. > > "Connection refused" suggests that you are not running a fastcgi server > there that nginx can access. > > You will want to find the Ubuntu method of running the PHP fastcgi > service, possibly called "php-fpm"; and making sure that it is listening > for requests in the place where your nginx does "fasctgi_pass" to. > > (As an aside -- the version numbers you mention are not the newest. It > is possible that there are some bugs in those versions that mean that > they do not work well together. If that happens, then you may have more > debugging to do, or choose to use newer versions of things.) > > And also -- based on your config, you probably want to make a request for > http://localhost/index.php, not http://localhost/public_html/index.php. > But that will only matter after the current error is resolved. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Mar 15 14:00:31 2020 From: francis at daoine.org (Francis Daly) Date: Sun, 15 Mar 2020 14:00:31 +0000 Subject: Unable to see a php page In-Reply-To: References: <20200315132310.GG26683@daoine.org> Message-ID: <20200315140031.GK26683@daoine.org> On Sun, Mar 15, 2020 at 05:19:57PM +0330, Mahmood Naderan wrote: Hi there, > I started php-fpm and opened localhost/index.php > This time I see the content of index.php as a plain text. > I have to do more debugging. Thanks. That's useful. I suggest that the first step is to look at the http headers returned from your request -- if they include something like "X-Powered-By: php", then you can be confident that nginx asked php to process the file (and further investigation should be on the php side); if they do not, then you should probably check the nginx side (and the error log, possibly in debug mode for the test), to see what nginx did with the request that involved it *not* making the fastcgi_pass request. Good luck with it, f -- Francis Daly francis at daoine.org From a.vidican92 at gmail.com Sun Mar 15 18:58:38 2020 From: a.vidican92 at gmail.com (Adrian Vidican) Date: Sun, 15 Mar 2020 20:58:38 +0200 Subject: Error 512 after nginx setup Message-ID: Hi Everyone, First time I post here, hopefully I'm not gonna broke any rule. 
I setup Nginx on my Ubuntu 16.04 server to point my domain (using cloudflare) to my server where discourse.org is installed. Here's the default file in sites-available server { listen 80; listen [::]:80; server_name stumblr.in; return 301 https://$host$request_uri ; } server { listen 443 ssl http2; server_name stumblr.in; ssl_certificate /etc/letsencrypt/live/stumblr.in/fullchain.pem ; ssl_certificate_key /etc/letsencrypt/live/stumblr.in/privkey.pem ; include /etc/nginx/snippets/ssl.conf; location / { proxy_pass http://stumblr.in:2045/ ; proxy_set_header Host $http_host; proxy_http_version 1.1; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_redirect http://stumblr.in:2045/ https://stumblr.in ; } } There's no error of Nginx but I've get 512 in browser. Any idea what could be wrong? Thanks in advance. Adrian Vidican -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.grigorov at gmail.com Sun Mar 15 22:01:43 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Mon, 16 Mar 2020 00:01:43 +0200 Subject: Problem installing in custom folder when Perl is enabled In-Reply-To: <20200313142249.GL12894@mdounin.ru> References: <20200313142249.GL12894@mdounin.ru> Message-ID: Thank you for the answer, Maxim! Regards, Martin On Fri, Mar 13, 2020 at 4:22 PM Maxim Dounin wrote: > Hello! > > On Fri, Mar 13, 2020 at 03:12:35PM +0200, Martin Grigorov wrote: > > > I'm facing the following problem when I try to install Nginx in a custom > > folder: > > [...] > > > make[1]: Entering directory '/home/ubuntu/hg/nginx/nginx' > > cd objs/src/http/modules/perl && make install > > make[2]: Entering directory > > '/home/ubuntu/hg/nginx/nginx/objs/src/http/modules/perl' > > "/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- nginx.bs > > blib/arch/auto/nginx/nginx.bs 644 > > Manifying 1 pod document > > Files found in blib/arch: installing files in blib/lib into architecture > > dependent library tree > > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > > ERROR: Can't create '/usr/local/lib/aarch64-linux-gnu/perl/5.26.1' > > Do not have write permissions on > > '/usr/local/lib/aarch64-linux-gnu/perl/5.26.1' > > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! > > at -e line 1. > > Makefile:802: recipe for target 'pure_site_install' failed > > [...] > > > I do the following: > > > > $ cd /home/ubuntu/hg/nginx > > $ hg clone https://hg.nginx.org/nginx > > $ cd nginx > > $ > > ./auto/configure --prefix=/home/ubuntu/hg/nginx/nginx-build > > --with-http_perl_module > > $ make > > $ make install > > > > > > If I remove " --with-http_perl_module" then the installation is > > successful. > > But with Perl it still tries to install at /usr/local/lib and fails with > > permissions denied. > > Is this a problem in Nginx or in Perl itself ? > > By default the nginx perl module is installed by into system's > default path for perl modules. That is, the path is not set by > the "--prefix" parameter, but rather comes from the Perl itself. > You can specify a different path using the > "--with-perl_modules_path" configure parameter. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gfrankliu at gmail.com Mon Mar 16 07:36:21 2020 From: gfrankliu at gmail.com (Frank Liu) Date: Mon, 16 Mar 2020 00:36:21 -0700 Subject: SSL session cache full Message-ID: Hi, I have a question after reading https://trac.nginx.org/nginx/ticket/621 . When that alert is logged in error log, what will happen to the connection? Will the client get an error (such as HTTP 4XX), or will it work as if the server doesn't support session resumption? As mentioned in comment3 in that ticket, for 2-way SSL clients, this could happen more frequently, will nginx fail the 2-way SSL handshake and give 4xx error? Thanks! Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Mar 16 12:33:21 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Mar 2020 15:33:21 +0300 Subject: SSL session cache full In-Reply-To: References: Message-ID: <20200316123321.GN12894@mdounin.ru> Hello! On Mon, Mar 16, 2020 at 12:36:21AM -0700, Frank Liu wrote: > I have a question after reading https://trac.nginx.org/nginx/ticket/621 . > When that alert is logged in error log, what will happen to the connection? > Will the client get an error (such as HTTP 4XX), or will it work as if the > server doesn't support session resumption? > As mentioned in comment3 in that ticket, for 2-way SSL clients, this could > happen more frequently, will nginx fail the 2-way SSL handshake and give > 4xx error? The error in question simply means the session won't be cached, so it cannot be resumed later. No SSL handshake or HTTP level errors will happen. -- Maxim Dounin http://mdounin.ru/ From eric.cox at kroger.com Mon Mar 16 13:58:29 2020 From: eric.cox at kroger.com (Cox, Eric S) Date: Mon, 16 Mar 2020 13:58:29 +0000 Subject: SSL session cache full In-Reply-To: <20200316123321.GN12894@mdounin.ru> References: <20200316123321.GN12894@mdounin.ru> Message-ID: How can this be monitored however? -----Original Message----- From: nginx On Behalf Of Maxim Dounin Sent: Monday, March 16, 2020 8:33 AM To: nginx at nginx.org Subject: Re: SSL session cache full ** [EXTERNAL EMAIL]: Do not click links or open attachments unless you recognize the sender and know the content is safe. ** Hello! On Mon, Mar 16, 2020 at 12:36:21AM -0700, Frank Liu wrote: > I have a question after reading https://nam05.safelinks.protection.outlook.com/?url=https%3A%2F%2Ftrac.nginx.org%2Fnginx%2Fticket%2F621&data=02%7C01%7Ceric.cox%40kroger.com%7C55d4953f99d1463d5b0408d7c9a63bc9%7C8331e14a91344288bf5a5e2c8412f074%7C0%7C0%7C637199588132799634&sdata=1oXIyqckAq1MsnmVYoskBJH8ixRGoWqkVcOiajUtkW8%3D&reserved=0 . > When that alert is logged in error log, what will happen to the connection? > Will the client get an error (such as HTTP 4XX), or will it work as if > the server doesn't support session resumption? > As mentioned in comment3 in that ticket, for 2-way SSL clients, this > could happen more frequently, will nginx fail the 2-way SSL handshake > and give 4xx error? The error in question simply means the session won't be cached, so it cannot be resumed later. No SSL handshake or HTTP level errors will happen. 
-- Maxim Dounin http://mdounin.ru/
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
From hobson42 at gmail.com Mon Mar 16 15:12:46 2020 From: hobson42 at gmail.com (Ian Hobson) Date: Mon, 16 Mar 2020 15:12:46 +0000 Subject: Unable to see a php page In-Reply-To: <20200315140031.GK26683@daoine.org> References: <20200315132310.GG26683@daoine.org> <20200315140031.GK26683@daoine.org> Message-ID: <82c14b10-1461-b8ee-1293-c85996e52116@gmail.com> On 15/03/2020 14:00, Francis Daly wrote:
> On Sun, Mar 15, 2020 at 05:19:57PM +0330, Mahmood Naderan wrote:
>
> Hi there,
>
>> I started php-fpm and opened localhost/index.php
>> This time I see the content of index.php as a plain text.
>> I have to do more debugging. Thanks.
>
> That's useful.
>
> I suggest that the first step is to look at the http headers returned
> from your request -- if they include something like "X-Powered-By: php",
> then you can be confident that nginx asked php to process the file (and
> further investigation should be on the php side); if they do not, then
> you should probably check the nginx side (and the error log, possibly
> in debug mode for the test), to see what nginx did with the request that
> involved it *not* making the fastcgi_pass request.
>
> Good luck with it,
>
The thing I notice is that you have two root statements. You only need the one at the server level. Then, if you use the line fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; (that is all on one line) in the location clause, things should work.
-- Ian Hobson
-- This email has been checked for viruses by AVG.
https://www.avg.com From mahmood.nt at gmail.com Mon Mar 16 18:10:40 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Mon, 16 Mar 2020 21:40:40 +0330 Subject: Unable to see a php page In-Reply-To: <82c14b10-1461-b8ee-1293-c85996e52116@gmail.com> References: <20200315132310.GG26683@daoine.org> <20200315140031.GK26683@daoine.org> <82c14b10-1461-b8ee-1293-c85996e52116@gmail.com> Message-ID: OK with this configuration: location ~ \.php$ { root /home/ubuntu/htdocs/public_html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } and the following folder structure $ ls ~/htdocs/ build.xml classes controllers etc includes index.html lib public_html views $ ls ~/htdocs/public_html/ addAttendee.php checkUser.php fileService.php logout.php upcomingEvents.php addDeleteFriend.php css findUsers.php phpinfo.php users.php addEvent.php deleteAttendee.php friends.php postedEvents.php xsl addEventResult.php deleteCommentsRating.php getAppConfig.php rateit.php yourUpcomingEvents.php addPerson.php deleteEvent.php images rejectInvite.php addPersonResult.php events.php index.php requestCityState.php approveFriendship.php favicon.ico js revokeInvite.php calendar.php feedFromDB.php login.php taggedEvents.php $ ls ~/htdocs/public_html/js/ controls.js dragdrop.js effects.js httpobject.js prototype.js starrating.js validateform.js when I open "localhost/index.php", I can see the php page, however, in the logs/error.log I see these 2020/03/16 21:34:22 [error] 5821#0: *2 FastCGI sent in stderr: "PHP message: PHP Warning: session_start() [function.session-start]: open(/tmp/http_sessions/sess_dd0e14b5b3f5aebcb53015b6bebc3bfa, O_RDWR) failed: No such file or directory (2) in /home/ubuntu/htdocs/public_html/index.php on line 26 Also I see somethings like 2020/03/16 21:34:22 [error] 5821#0: *2 open() "/home/ubuntu/htdocs/js/dragdrop.js" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /js/dragdrop.js HTTP/1.1", host: "localhost", referrer: "http://localhost/index.php" I don't know why it is using "/home/ubuntu/htdocs/js/dragdrop.js"? According to the setting, it should look for it at "/home/ubuntu/htdocs/public_html/js/dragdrop.js" where the file actually exists. I know that is not directly related to nginx, but I appreciate any feedback. Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 17 15:49:25 2020 From: nginx-forum at forum.nginx.org (chrisguk) Date: Tue, 17 Mar 2020 11:49:25 -0400 Subject: Strange log output from access.log Message-ID: <650ec1c86c7268f843097f877f4a458c.NginxMailingListEnglish@forum.nginx.org> Has anyone seen this kind of output before, and why it is happening? 
10.8.0.1 - - [17/Mar/2020:16:37:07 +0100] "GET /admin.php?content=8204;menu=040044;product_id=236431 HTTP/2.0" 200 18324 "https://app.tdom.net/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/ad
min.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php/admin.php?text_string=7613052442458&lang=1&active=1&search.x=25&search.y=7&manufacturer=69&year=0&season=0&collection=0&price=0&stocktype=1&stocksum=&id_string=&erp_id_string=&supplier_item_no_string=&_qf__productSearchForm=&_qfe__submit=1&content=8210&menu=040044" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0" "-" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287371,287371#msg-287371 From nginx-forum at forum.nginx.org Tue Mar 17 15:51:55 2020 From: nginx-forum at forum.nginx.org (chrisguk) Date: Tue, 17 Mar 2020 11:51:55 -0400 Subject: Strange log output from access.log In-Reply-To: <650ec1c86c7268f843097f877f4a458c.NginxMailingListEnglish@forum.nginx.org> References: <650ec1c86c7268f843097f877f4a458c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0d6a840adb5b3890665b449b9bec9d8d.NginxMailingListEnglish@forum.nginx.org> How can I put the log message in code format so that it wraps? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287371,287372#msg-287372 From fiorellazampetti at gmail.com Tue Mar 17 16:24:22 2020 From: fiorellazampetti at gmail.com (Fiorella Zampetti) Date: Tue, 17 Mar 2020 17:24:22 +0100 Subject: Study on annotating implementation and design choices, and technical debt Message-ID: Dear all, As software engineering research teams at the University of Sannio (Italy) and Eindhoven University of Technology (The Netherlands) we are interested in investigating the protocol used by developers while they have to annotate implementation and design choices during their normal development activities. 
More specifically, we are looking at whether, where and what kind of annotations developers usually use trying to be focused more on those annotations mainly aimed at highlighting that the code is not in the right shape (e.g., comments for annotating delayed or intended work activities such as TODO, FIXME, hack, workaround, etc). In the latter case, we are looking at what is the content of the above annotations, as well as how they usually behave while evolving the code that has been previously annotated. When answering the survey, in case your annotation practices are different in different open source projects you may contribute, please refer to how you behave for the projects where you have been contacted. Filling out the survey will take about 5 minutes. Please note that your identity and personal data will not be disclosed, while we plan to use the aggregated results and anonymized responses as part of a scientific publication. If you have any questions about the questionnaire or our research, please do not hesitate to contact us. You can find the survey link here: https://forms.gle/NQULdWRVvXYeMc1r6 Thanks and regards, Fiorella Zampetti (fzampetti at unisannio.it) Gianmarco Fucci (gianmarcofucci94 at gmail.com) Alexander Serebrenik (a.serebrenik at tue.nl) Massimiliano Di Penta (dipenta at unisannio.it) From nginx-forum at forum.nginx.org Wed Mar 18 11:17:01 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 18 Mar 2020 07:17:01 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n Message-ID: Logging getting swamped with: [crit] 1808#2740: *20747 SSL_read() failed (SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading) while keepalive Related to: https://github.com/openssl/openssl/issues/10880 and this commit: https://github.com/openssl/openssl/commit/db943f43a60d1b5b1277e4b5317e8f288e7a0a3a Question: does this need to resolved in openssl or nginx ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287377#msg-287377 From themadbeaker at gmail.com Wed Mar 18 13:30:06 2020 From: themadbeaker at gmail.com (J.R.) Date: Wed, 18 Mar 2020 08:30:06 -0500 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n Message-ID: > [crit] 1808#2740: *20747 SSL_read() failed (SSL: error:14095126:SSL > routines:ssl3_read_n:unexpected eof while reading) while keepalive Just curious, but were you getting these errors while running 1.1.1d or they just started after upgrade to 1.1.1e ? From nginx-forum at forum.nginx.org Wed Mar 18 13:52:16 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 18 Mar 2020 09:52:16 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: Message-ID: <4796d4fd57b196d90514ea3be312d536.NginxMailingListEnglish@forum.nginx.org> After using 1.1.1e, see also the commit where an explicit entry has been added. nginx just reports back what openssl passes, if this was unexpected (none critical) nginx needs to be patched, if not this openssl workaround (10880) needs to be changed. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287380#msg-287380 From nginx-forum at forum.nginx.org Thu Mar 19 09:42:28 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Thu, 19 Mar 2020 05:42:28 -0400 Subject: ssl_dhparam with Wildcard SSL Message-ID: Hello, I want to use a Wildcard SSL on several servers. "ssl_certificate" and "ssl_certificate_key" are same CRT file and KEY file, but for "ssl_dhparam", each server have its private dhparam file? or use the same dhparam file? 
please help, thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287383,287383#msg-287383 From mdounin at mdounin.ru Thu Mar 19 13:25:17 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Mar 2020 16:25:17 +0300 Subject: ssl_dhparam with Wildcard SSL In-Reply-To: References: Message-ID: <20200319132517.GQ12894@mdounin.ru> Hello! On Thu, Mar 19, 2020 at 05:42:28AM -0400, q1548 wrote: > I want to use a Wildcard SSL on several servers. > > "ssl_certificate" and "ssl_certificate_key" are same CRT file and KEY file, > but for "ssl_dhparam", each server have its private dhparam file? or use the > same dhparam file? please help, thanks. You don't need to configure more than one dhparam file, one for all servers as set on the http level is enough. Moreover, you probably don't want to configure dhparam file at all, keeping all DHE ciphers disabled, as it is by default. DHE ciphers are very slow compared to ECDH ones, and most browsers support ECDH nowadays. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Mar 19 14:42:03 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Thu, 19 Mar 2020 10:42:03 -0400 Subject: ssl_dhparam with Wildcard SSL In-Reply-To: <20200319132517.GQ12894@mdounin.ru> References: <20200319132517.GQ12894@mdounin.ru> Message-ID: <39248f0c47c9c378274b05fa6bc1cbf2.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, Thanks for your helps. "http level...", Oh, not just a hardware server, several different dedicated servers. When wildcard SSL is used, the CRT file and the KEY file should be the same in each hardware server, I just want to know, can each server use its private dhparam file or must I use the same dhparam file? thank you very much. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287383,287385#msg-287385 From robertocarna36 at gmail.com Thu Mar 19 15:23:53 2020 From: robertocarna36 at gmail.com (Roberto Carna) Date: Thu, 19 Mar 2020 12:23:53 -0300 Subject: Nginx SSL reverse proxy with independent authentication for each backend web server Message-ID: Hi people, I wanna use NGINX as a SSL reverse proxy for several backends Apache web servers which listens on port TCP/8080 and TCP/9090. The NGINX reverse proxy must have one independent authentication for each backend web server: NGINX -- Auth 1 --- Web server 1 ports 8080/9090 -- Auth 2 --- Web server 2 ports 8080/9090 -- Auth 3 --- Web server 3 ports 8080/9090 etc. Is it possible to do this??? Can you give me some info o link in this way ??? Thanks a lot and regards !!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahmood.nt at gmail.com Thu Mar 19 17:44:15 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Thu, 19 Mar 2020 21:14:15 +0330 Subject: Question about root path for php-fpm Message-ID: Hi I am following a document but something seems to be a typo and I want to be sure about that. 1) It says: In the webserver root directory, we will install the Olio PHP application, we will call this directory $APP_DIR: o cd /webserver/root/dir (e.g. /home/username/htdocs/ created when we installed Nginx). So, I set location / { root /home/ub/htdocs; index index.html index.htm; } 2) It says: The nginx.conf configuration file must be set with the correct port number to access PHP-FPM. 
Open the file nginx.conf and make sure the following lines exist: location ~ \.php$ { root /path/to/root (e.g /home/username/htdocs/public_html ); fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $APP_DIR/$fastcgi_script_name; (e.g., /home/username/htdocs/public_html/$fastcgi_script_name) include fastcgi_params; } Currently, I have this folder structure $ ls ~/htdocs/ build.xml classes controllers etc includes index.html lib public_html views $ ls ~/htdocs/public_html/index* /home/ub/htdocs/public_html/index.php If I open browser and enter localhost, I can see the content of ~/htdocs/index.html. So, the first step is fine. The fastcgi_param says $APP_DIR. So, I should write /home/ub/htdocs but the "e.g." part says /home/ub/htdocs/public_html I am not sure if the root in the second step is /home/ub/htdocs or /home/ub/htdocs/public_html ? Can someone help. Thanks. Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Mar 19 21:52:44 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 19 Mar 2020 23:52:44 +0200 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: <4796d4fd57b196d90514ea3be312d536.NginxMailingListEnglish@forum.nginx.org> References: <4796d4fd57b196d90514ea3be312d536.NginxMailingListEnglish@forum.nginx.org> Message-ID: <002d01d5fe38$b8eb7500$2ac25f00$@roze.lv> > After using 1.1.1e, see also the commit where an explicit entry has been > added. > nginx just reports back what openssl passes, if this was unexpected (none > critical) nginx needs to be patched, if not this openssl workaround (10880) > needs to be changed. Any comment on this from any nginx devs? Been running 1.1.1c for some time and out of curiosity upgraded to 1.1.1e and indeed there are a lot of "(SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading)". Is it "safe" to temporary revert the patch to reduce the noise (as per the github thread - the EOF (other than the "data loss") most likely has been there previously just not being returned as error) or are there more deeper problems with openssl/tls 1.3 etc? Also since there are no plans to implement quic even in openssl 3.0 does it maybe make sense to compile nginx with BoringSSL? rr From nginx-forum at forum.nginx.org Thu Mar 19 22:35:16 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Thu, 19 Mar 2020 18:35:16 -0400 Subject: ssl_dhparam with Wildcard SSL In-Reply-To: <20200319132517.GQ12894@mdounin.ru> References: <20200319132517.GQ12894@mdounin.ru> Message-ID: <843e4d5094839ebc9186755cf07ae002.NginxMailingListEnglish@forum.nginx.org> need helps for this, thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287383,287390#msg-287390 From teward at thomas-ward.net Fri Mar 20 00:29:03 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 19 Mar 2020 20:29:03 -0400 Subject: ssl_dhparam with Wildcard SSL In-Reply-To: <39248f0c47c9c378274b05fa6bc1cbf2.NginxMailingListEnglish@forum.nginx.org> References: <20200319132517.GQ12894@mdounin.ru> <39248f0c47c9c378274b05fa6bc1cbf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: The dhparam file cam be whichever you want it to be **provided that** you configure it per server block. Refer to the config documentation - http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam - and the 'context' being 'http' or 'server' - you can define different dhparam files for each server block.? 
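As a minimal sketch of that point (the certificate, key and dhparam paths below are made-up placeholders, not taken from this thread), each server{} block can point at its own DH parameter file while sharing the one wildcard certificate:

server {
    listen 443 ssl;
    server_name a.example.com;
    ssl_certificate     /etc/nginx/ssl/wildcard.example.com.crt;   # same wildcard cert on every host
    ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;
    ssl_dhparam         /etc/nginx/ssl/dhparam-a.pem;               # this host's own DH parameters
}

server {
    listen 443 ssl;
    server_name b.example.com;
    ssl_certificate     /etc/nginx/ssl/wildcard.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;
    ssl_dhparam         /etc/nginx/ssl/dhparam-b.pem;               # a different file is equally valid
}

Whether the files are generated separately (e.g. openssl dhparam -out dhparam-a.pem 2048) or copied from one host to another only changes which DH group is offered; either choice works.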
This said, if you don't define this in each server, it'll disable DHE ciphers (but not ECDHE ciphers).

Thomas

On 3/19/20 10:42 AM, q1548 wrote:
> Hello Maxim,
>
> Thanks for your helps. "http level...", Oh, not just a hardware server,
> several different dedicated servers.
>
> When wildcard SSL is used, the CRT file and the KEY file should be the same
> in each hardware server, I just want to know, can each server use its
> private dhparam file or must I use the same dhparam file? thank you very
> much.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287383,287385#msg-287385
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From teward at thomas-ward.net Fri Mar 20 00:34:18 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 19 Mar 2020 20:34:18 -0400 Subject: Nginx SSL reverse proxy with independent authentication for each backend web server In-Reply-To: References: Message-ID: You can specify different auth_basic configurations per server or per location match. Refer to the documentation - http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html - which shows that the auth_basic config options can be at the http, server, location, or limit_except levels of the configuration. You can configure different server hostnames or locations within the same server block.

Example configuration - different auth sets per location path within the foo.example.com site AND a site config with a completely separate third auth that covers an entire site (private.example.com) (note that both of these server definitions are implied to be inside an http block, I just am not typing it out here for brevity):

server {
    listen 80;
    server_name foo.example.com;

    location /foo/ {
        try_files $uri $uri/ =403;
        auth_basic on;
        auth_basic_user_file /etc/nginx/fooauths;
    }

    location / {
        try_files $uri $uri/ =403;
        auth_basic on;
        auth_basic_user_file /etc/nginx/rootauths;
    }
}

server {
    listen 80;
    server_name private.example.com;
    auth_basic on;
    auth_basic_user_file /etc/nginx/privateauths;

    location / {
        try_files $uri $uri/ =403;
    }
}

Thomas

On 3/19/20 11:23 AM, Roberto Carna wrote:
> Hi people,
>
> I wanna use NGINX as a SSL reverse proxy for several backends Apache
> web servers which listens on port TCP/8080 and TCP/9090.
>
> The NGINX reverse proxy must have one independent authentication for
> each backend web server:
>
> NGINX -- Auth 1 --- Web server 1 ports 8080/9090
>             -- Auth 2 --- Web server 2 ports 8080/9090
>             -- Auth 3 --- Web server 3 ports 8080/9090
>             etc.
>
> Is it possible to do this???
>
> Can you give me some info o link in this way ???
>
> Thanks a lot and regards !!!
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From nginx-forum at forum.nginx.org Fri Mar 20 01:08:29 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Thu, 19 Mar 2020 21:08:29 -0400 Subject: ssl_dhparam with Wildcard SSL In-Reply-To: References: Message-ID: Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287383,287393#msg-287393 From pluknet at nginx.com Fri Mar 20 07:41:32 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 20 Mar 2020 10:41:32 +0300 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: Message-ID: > On 18 Mar 2020, at 14:17, itpp2012 wrote: > > Logging getting swamped with: > > [crit] 1808#2740: *20747 SSL_read() failed (SSL: error:14095126:SSL > routines:ssl3_read_n:unexpected eof while reading) while keepalive > > Related to: https://github.com/openssl/openssl/issues/10880 > and this commit: > https://github.com/openssl/openssl/commit/db943f43a60d1b5b1277e4b5317e8f288e7a0a3a > > Question: does this need to resolved in openssl or nginx ? So, they deliberately changed existing behaviour, known since at least OpenSSL 0.9.7, in the stable branch which should not be targeted (per their words) for introducing behaviour changes. That is unfortunate and beyond explanation. To simply shut up the crit, this would require such an ugly hack. diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2301,7 +2301,13 @@ ngx_ssl_handle_recv(ngx_connection_t *c, c->ssl->no_wait_shutdown = 1; c->ssl->no_send_shutdown = 1; - if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { + if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0 +#ifdef SSL_R_UNEXPECTED_EOF_WHILE_READING + || (sslerr == SSL_ERROR_SSL && ERR_GET_REASON(ERR_peek_error()) + == SSL_R_UNEXPECTED_EOF_WHILE_READING) +#endif + ) + { ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "peer shutdown SSL cleanly"); return NGX_DONE; -- Sergey Kandaurov From mdounin at mdounin.ru Fri Mar 20 12:59:37 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Mar 2020 15:59:37 +0300 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: Message-ID: <20200320125937.GR12894@mdounin.ru> Hello! On Fri, Mar 20, 2020 at 10:41:32AM +0300, Sergey Kandaurov wrote: > > > On 18 Mar 2020, at 14:17, itpp2012 wrote: > > > > Logging getting swamped with: > > > > [crit] 1808#2740: *20747 SSL_read() failed (SSL: error:14095126:SSL > > routines:ssl3_read_n:unexpected eof while reading) while keepalive > > > > Related to: https://github.com/openssl/openssl/issues/10880 > > and this commit: > > https://github.com/openssl/openssl/commit/db943f43a60d1b5b1277e4b5317e8f288e7a0a3a > > > > Question: does this need to resolved in openssl or nginx ? > > So, they deliberately changed existing behaviour, known since > at least OpenSSL 0.9.7, in the stable branch which should not > be targeted (per their words) for introducing behaviour changes. > That is unfortunate and beyond explanation. > > To simply shut up the crit, this would require such an ugly hack. 
> > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -2301,7 +2301,13 @@ ngx_ssl_handle_recv(ngx_connection_t *c, > c->ssl->no_wait_shutdown = 1; > c->ssl->no_send_shutdown = 1; > > - if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { > + if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0 > +#ifdef SSL_R_UNEXPECTED_EOF_WHILE_READING > + || (sslerr == SSL_ERROR_SSL && ERR_GET_REASON(ERR_peek_error()) > + == SSL_R_UNEXPECTED_EOF_WHILE_READING) > +#endif > + ) > + { > ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > "peer shutdown SSL cleanly"); > return NGX_DONE; I think a separate condition in an #ifdef might be preferred here, probably with better debug logging as well. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Fri Mar 20 13:54:08 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 20 Mar 2020 09:54:08 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: <20200320125937.GR12894@mdounin.ru> References: <20200320125937.GR12894@mdounin.ru> Message-ID: <905cc0e6e96ca52cef06434a1b6b8ebf.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > On Fri, Mar 20, 2020 at 10:41:32AM +0300, Sergey Kandaurov wrote: > > > On 18 Mar 2020, at 14:17, itpp2012 wrote: > > > Question: does this need to resolved in openssl or nginx ? > > So, they deliberately changed existing behaviour, known since > > at least OpenSSL 0.9.7, in the stable branch which should not > > be targeted (per their words) for introducing behaviour changes. > > That is unfortunate and beyond explanation. > I think a separate condition in an #ifdef might be preferred here, > probably with better debug logging as well. I'd prefer an openssl fix but can we now assume nginx prefers a nginx fix ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287404#msg-287404 From phillip.odam at nitorgroup.com Fri Mar 20 13:57:40 2020 From: phillip.odam at nitorgroup.com (Phillip Odam) Date: Fri, 20 Mar 2020 09:57:40 -0400 Subject: Establish TCP connection to upstream when client connection made to listener Message-ID: Hi I'm looking for when a client establishes a TCP connection to an IP and port, that NGINX is listening on, that NGINX, without waiting on data being transmitted from the client to NGINX, would establish a TCP connection to the upstream. If such a capability were to exist I'd have thought it'd be documented either at http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html or http://nginx.org/en/docs/stream/ngx_stream_core_module.html. So from what I gather the capability does not exist in NGINX and it's quite likely considered a good thing, NGINX won't establish a backend connection (tying up resources) simply based on an in bound connection. Trouble with this though, NGINX then can't fully support reverse proxying protocols where the server provides a response upon TCP connection eg. SSH2, MySQL. You're instead dependent on the client handling the lack of initial server response and that after the client sends its first lot of data it'll then receive the server's initial response. I've checked the way HAProxy works and it either by default establishes the backend TCP connection upon connection to the frontend or there's some switch I unknowingly flipped. 
Presumably this isn't anything new, so please feel free to point me towards whatever I've failed to find myself and I'm interested in hearing others thoughts and experience with this aspect of NGINX if you have time to share. Cheers Phillip From francis at daoine.org Fri Mar 20 14:26:27 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 20 Mar 2020 14:26:27 +0000 Subject: Question about root path for php-fpm In-Reply-To: References: Message-ID: <20200320142627.GA20939@daoine.org> On Thu, Mar 19, 2020 at 09:14:15PM +0330, Mahmood Naderan wrote: Hi there, > I am following a document but something seems to be a typo and I want to be > sure about that. The fragment of the document you have provided seems to duplicate things unnecessarily; but I do not know what else it tries to do, so maybe it is correct if you follow it exactly. >From an nginx/php perspective, the thing that matters is: * for this particular request, what file on the filesystem do you want nginx to invite the php-service to process? If you can make a consistent pattern between url and matching file, then you can use a configuration that can handle more than one request. > location ~ \.php$ { > root /path/to/root (e.g /home/username/htdocs/public_html ); > fastcgi_pass 127.0.0.1:9000; "root", by itself, is irrelevant to nginx when fastcgi_pass is used. It only matters if you use a variable (such as $document_root or $request_filename) that depends on it. > fastcgi_param SCRIPT_FILENAME $APP_DIR/$fastcgi_script_name; (e.g., > /home/username/htdocs/public_html/$fastcgi_script_name) Usually, "fastcgi_param SCRIPT_FILENAME" names the file that nginx invites the php service (fastcgi server, php-fpm) to process. In this example, the only variable it uses is $fastcgi_script_name, which depends on the url and (maybe) "fastcgi_index". It does not depend on "root". > include fastcgi_params; } That file probably includes other variables that might depend on the "root" setting; whether they are used by your php script being processed by your php service is a separate question. > Currently, I have this folder structure > > $ ls ~/htdocs/ > build.xml classes controllers etc includes index.html lib > public_html views > $ ls ~/htdocs/public_html/index* > /home/ub/htdocs/public_html/index.php > > > If I open browser and enter localhost, I can see the content of > ~/htdocs/index.html. So, the first step is fine. > > > The fastcgi_param says $APP_DIR. So, I should write /home/ub/htdocs but the > "e.g." part says /home/ub/htdocs/public_html > > I am not sure if the root in the second step is /home/ub/htdocs or > /home/ub/htdocs/public_html > ? If the request "/one.php" should be handled by the file "/home/ub/htdocs/public_html/one.php", then you want the "fastcgi_param SCRIPT_FILENAME" to end up having that filename value. Often, the directive would be "fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;" or "fastcgi_param SCRIPT_FILENAME $request_filename;" -- if you use one of those, set "root /home/ub/htdocs/public_html;". But if you use your own value, this part of the nginx config does not care about "root". There is more than one convention for how to match urls to files. The description you give above suggests that you are mixing different conventions. That is perfectly fine; nginx does not care what you do, so long as you can tell it what you want it to do. 
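To make that last point concrete, here is a sketch of the php location using the directory layout from this thread (an illustration rather than a drop-in config):

    location ~ \.php$ {
        root          /home/ub/htdocs/public_html;
        fastcgi_pass  127.0.0.1:9000;
        fastcgi_index index.php;
        # $request_filename is $document_root plus the part of the URI that
        # matched, so a request for /one.php maps to
        # /home/ub/htdocs/public_html/one.php
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include       fastcgi_params;
    }
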
To simplify things, I suggest you put *everything* that you care about in public_html, and use that as the only "root" set at "server" level in your nginx config. But that does depend on what *else* you want this nginx service to do. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 20 14:43:21 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 20 Mar 2020 14:43:21 +0000 Subject: Establish TCP connection to upstream when client connection made to listener In-Reply-To: References: Message-ID: <20200320144321.GB20939@daoine.org> On Fri, Mar 20, 2020 at 09:57:40AM -0400, Phillip Odam wrote: Hi there, > I'm looking for when a client establishes a TCP connection to an IP and > port, that NGINX is listening on, that NGINX, without waiting on data being > transmitted from the client to NGINX, would establish a TCP connection to > the upstream. What happened when you tried it? A quick test here of "nc -l 9005", plus nginx.conf with == stream { server { listen 9001; proxy_pass 127.0.0.3:9005; } } == and "tcpdump -nn -i any -X -s 0 port 9005 or port 9001", seems to show that "nc localhost 9001" leads to a tcp handshake involving port 9001 (from the client to nginx) and a tcp handshake involving port 9005 (from nginx to the upstream). > Trouble with this though, NGINX then can't fully support reverse proxying > protocols where the server provides a response upon TCP connection eg. SSH2, > MySQL. You're instead dependent on the client handling the lack of initial > server response and that after the client sends its first lot of data it'll > then receive the server's initial response. Do you have a specific test case that shows this problem? == stream { server { listen 9001; proxy_pass 127.0.0.3:22; } } == and "ssh -v -p 9001 localhost" would seem to indicate that it Just Works. Perhaps my testing is wrong? f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 20 21:09:45 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 20 Mar 2020 21:09:45 +0000 Subject: Strange log output from access.log In-Reply-To: <650ec1c86c7268f843097f877f4a458c.NginxMailingListEnglish@forum.nginx.org> References: <650ec1c86c7268f843097f877f4a458c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200320210945.GC20939@daoine.org> On Tue, Mar 17, 2020 at 11:49:25AM -0400, chrisguk wrote: Hi there, > Has anyone seen this kind of output before, and why it is happening? > > 10.8.0.1 - - [17/Mar/2020:16:37:07 +0100] "GET > /admin.php?content=8204;menu=040044;product_id=236431 HTTP/2.0" 200 18324 > "https://app.tdom.net/admin.php/admin.php/ad [snip] > min.php/admin.php/admin.php/admin.php/admin.php?text_string=7613052442458&lang=1&active=1&search.x=25&search.y=7&manufacturer=69&year=0&season=0&collection=0&price=0&stocktype=1&stocksum=&id_string=&erp_id_string=&supplier_item_no_string=&_qf__productSearchForm=&_qfe__submit=1&content=8210&menu=040044" > "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 > Firefox/68.0" "-" That looks to me like a normal log line for a request that has a silly-long referer: header. Is this the only such line in your log file? Can you track previous requests from this same browser/user to see if there is a real problem that you can address? 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 20 21:31:14 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 20 Mar 2020 21:31:14 +0000 Subject: Unable to see a php page In-Reply-To: References: <20200315132310.GG26683@daoine.org> <20200315140031.GK26683@daoine.org> <82c14b10-1461-b8ee-1293-c85996e52116@gmail.com> Message-ID: <20200320213114.GD20939@daoine.org> On Mon, Mar 16, 2020 at 09:40:40PM +0330, Mahmood Naderan wrote: Hi there, > OK with this configuration: > > location ~ \.php$ { > root /home/ubuntu/htdocs/public_html; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > } > when I open "localhost/index.php", I can see the php page, however, in the > logs/error.log I see these > > 2020/03/16 21:34:22 [error] 5821#0: *2 FastCGI sent in stderr: "PHP > message: PHP Warning: session_start() [ href='function.session-start'>function.session-start]: > open(/tmp/http_sessions/sess_dd0e14b5b3f5aebcb53015b6bebc3bfa, O_RDWR) > failed: No such file or directory (2) in > /home/ubuntu/htdocs/public_html/index.php on line 26 That's a PHP thing -- perhaps make the directories, or perhaps configure your PHP to use a different directory. PHP documentation should indicate how. > Also I see somethings like > > 2020/03/16 21:34:22 [error] 5821#0: *2 open() > "/home/ubuntu/htdocs/js/dragdrop.js" failed (2: No such file or directory), > client: 127.0.0.1, server: localhost, request: "GET /js/dragdrop.js > HTTP/1.1", host: "localhost", referrer: "http://localhost/index.php" > > > I don't know why it is using "/home/ubuntu/htdocs/js/dragdrop.js"? > According to the setting, it should look for it at > "/home/ubuntu/htdocs/public_html/js/dragdrop.js" where the file actually > exists. The config you showed only uses /home/ubuntu/htdocs/public_html as the document root for requests that end in .php. The request /js/dragdrop.js is not handled in this location. The configuration in, or inherited into, the location that handles that request is the one that matters. And that, presumably, has /home/ubuntu/htdocs as the configured document root. Possibly setting "root /home/ubuntu/htdocs/public_html;" at server{} level will cause things to work for you. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 20 21:39:17 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 20 Mar 2020 21:39:17 +0000 Subject: Error 512 after nginx setup In-Reply-To: References: Message-ID: <20200320213917.GE20939@daoine.org> On Sun, Mar 15, 2020 at 08:58:38PM +0200, Adrian Vidican wrote: Hi there, > I setup Nginx on my Ubuntu 16.04 server to point my domain (using cloudflare) to my server where discourse.org is installed. Is there any evidence (logs, tcpdump) that the request got to your nginx in the first place? If not -- you have something outside of nginx to adjust, so that the request gets that far. Otherwise... > location / { > proxy_pass http://stumblr.in:2045/ ; > proxy_set_header Host $http_host; > proxy_http_version 1.1; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_redirect http://stumblr.in:2045/ https://stumblr.in ; > } > There's no error of Nginx but I've get 512 in browser. Does your upstream port-2045 service return the 512 message? (512 is not a standard http response code. So something must be creating it specially.) 
What response do you get if you use "curl -i" (or "curl -v") to make the request? That is less likely to hide important information. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Mar 20 22:12:26 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 21 Mar 2020 01:12:26 +0300 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: <905cc0e6e96ca52cef06434a1b6b8ebf.NginxMailingListEnglish@forum.nginx.org> References: <20200320125937.GR12894@mdounin.ru> <905cc0e6e96ca52cef06434a1b6b8ebf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200320221226.GS12894@mdounin.ru> Hello! On Fri, Mar 20, 2020 at 09:54:08AM -0400, itpp2012 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > On Fri, Mar 20, 2020 at 10:41:32AM +0300, Sergey Kandaurov wrote: > > > > On 18 Mar 2020, at 14:17, itpp2012 > wrote: > > > > Question: does this need to resolved in openssl or nginx ? > > > > So, they deliberately changed existing behaviour, known since > > > at least OpenSSL 0.9.7, in the stable branch which should not > > > be targeted (per their words) for introducing behaviour changes. > > > That is unfortunate and beyond explanation. > > > I think a separate condition in an #ifdef might be preferred here, > > probably with better debug logging as well. > > I'd prefer an openssl fix but can we now assume nginx prefers a nginx fix ? Well, reverting OpenSSL behaviour back to one existed for years would be awesome. Unfortunately, this might never happen, as OpenSSL's team usually don't care. Also, even if this will happen, there will be at least some versions of OpenSSL when things behave incorrectly. As such, we certainly have to consider how to fix it on nginx side. Whether or not we'll commit the fix is a different question. -- Maxim Dounin http://mdounin.ru/ From yichun at openresty.com Sat Mar 21 08:14:04 2020 From: yichun at openresty.com (Yichun Zhang) Date: Sat, 21 Mar 2020 01:14:04 -0700 Subject: [ANN] OpenResty 1.15.8.3 released Message-ID: Hi there, OpenResty 1.15.8.3 is a patch release addressing recent security vulnerabilities in both the Nginx core and the ngx_http_lua module. The (portable) source code distribution, the Win32/Win64 binary distributions, and the pre-built binary Linux packages for Ubuntu, Debian, Fedora, CentOS, RHEL, OpenSUSE, Amazon Linux are provided on this page: https://openresty.org/en/download.html We also upgraded PCRE to 8.44 and OpenSSL to 1.1.0l for our binary packages. This is the third OpenResty release based on the nginx 1.15.8 core. Acknowledgments Thanks the HackerOne team for reporting the memory content leak vulnerabilities. Thanks Thibault Charbonnier and Dejiang Zhu for helping this release. Full Changelog Complete change logs since the last (formal) release, 1.15.8.2, can be browsed in the page Change Log for 1.15.8.x: https://openresty.org/en/changelog-1015008.html Feedback Feedback on this release is more than welcome. Feel free to create new [GitHub issues](https://github.com/openresty/openresty/issues) or send emails to one of our mailing lists. The Next Release The next release will be OpenResty 1.17.8.1 based on the recent nginx 1.17.8 core and its RC1 version is already out for community testing. See https://openresty.org/en/ann-1017008001rc1.html Thanks! 
Best regards, Yichun From phillip.odam at nitorgroup.com Sat Mar 21 14:56:30 2020 From: phillip.odam at nitorgroup.com (Phillip Odam) Date: Sat, 21 Mar 2020 10:56:30 -0400 Subject: Establish TCP connection to upstream when client connection made to listener In-Reply-To: <20200320144321.GB20939@daoine.org> References: <20200320144321.GB20939@daoine.org> Message-ID: Hi Francis Thanks for the detail. And you're quite right the issue had nothing to do with NGINX, it was the loadbalancer out in front of NGINX. Cheers Phillip On 3/20/20 10:43 AM, Francis Daly wrote: > On Fri, Mar 20, 2020 at 09:57:40AM -0400, Phillip Odam wrote: > > Hi there, > >> I'm looking for when a client establishes a TCP connection to an IP and >> port, that NGINX is listening on, that NGINX, without waiting on data being >> transmitted from the client to NGINX, would establish a TCP connection to >> the upstream. > What happened when you tried it? > > A quick test here of "nc -l 9005", plus nginx.conf with > > == > stream { > server { > listen 9001; > proxy_pass 127.0.0.3:9005; > } > } > == > > and "tcpdump -nn -i any -X -s 0 port 9005 or port 9001", seems to show > that "nc localhost 9001" leads to a tcp handshake involving port 9001 > (from the client to nginx) and a tcp handshake involving port 9005 > (from nginx to the upstream). > >> Trouble with this though, NGINX then can't fully support reverse proxying >> protocols where the server provides a response upon TCP connection eg. SSH2, >> MySQL. You're instead dependent on the client handling the lack of initial >> server response and that after the client sends its first lot of data it'll >> then receive the server's initial response. > Do you have a specific test case that shows this problem? > > == > stream { > server { > listen 9001; > proxy_pass 127.0.0.3:22; > } > } > == > > and "ssh -v -p 9001 localhost" would seem to indicate that it Just Works. > > Perhaps my testing is wrong? 
> > f From nginx-forum at forum.nginx.org Sat Mar 21 20:49:04 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 21 Mar 2020 16:49:04 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: Message-ID: <7407492cf399e3fa9048b961ff88748c.NginxMailingListEnglish@forum.nginx.org> Other not as often as the primary but all related to 1.1.1e: All [crit]: SSL_read() failed (SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading) while processing HTTP/2 connection SSL_read() failed (SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading) while keepalive SSL_read() failed (SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading) while waiting for request SSL_do_handshake() failed (SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading) while SSL handshaking Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287418#msg-287418 From nginx-forum at forum.nginx.org Sun Mar 22 18:39:06 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sun, 22 Mar 2020 14:39:06 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: <7407492cf399e3fa9048b961ff88748c.NginxMailingListEnglish@forum.nginx.org> References: <7407492cf399e3fa9048b961ff88748c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <92f6dafa927e3afc97a5fc6b69748643.NginxMailingListEnglish@forum.nginx.org> How about this as this catches all 3 while conditions: +++ src/event/ngx_event_openssl.c @@ -2318, c->ssl->no_wait_shutdown = 1; c->ssl->no_send_shutdown = 1; if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "peer shutdown SSL cleanly"); return NGX_DONE; } + /* https://forum.nginx.org/read.php?2,287377 */ + /* https://github.com/openssl/openssl/issues/11381 */ +#ifdef SSL_R_UNEXPECTED_EOF_WHILE_READING + if (sslerr == SSL_ERROR_SSL && ERR_GET_REASON(ERR_peek_error()) + == SSL_R_UNEXPECTED_EOF_WHILE_READING) { + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, + "ssl3_read_n:unexpected eof while reading"); + return NGX_DONE; + } +#endif + ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed"); Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287420#msg-287420 From email at torstenreinhard.de Mon Mar 23 10:06:08 2020 From: email at torstenreinhard.de (Torsten Reinhard) Date: Mon, 23 Mar 2020 11:06:08 +0100 (CET) Subject: unable to get local issuer certificate Message-ID: <1124207028.901143.1584957968660@ox.hosteurope.de> Hi, I?m running nginx/1.17.8 as a ReverseProxy, executed as Docker container via docker-compose.yaml. version: '2' services: proxy: image: nginx:1.17 container_name: nginx restart: always ports: - "443:8443" - "80:8080" volumes: - /data/nginx-conf:/etc/nginx/conf.d/ networks: - webgateway networks: webgateway: driver: bridge driver_opts: com.docker.network.driver.mtu: 1300 It?s configured to run secured, which is working fine. The servers being proxied are availabe at https, but currently the verification is turned off.The certificate used by the server is also valid, it?s a chain being built upon server->intermediate-root CA. When turning it on, I always get => nginx | 2020/03/19 12:37:50 [error] 6#6: *1 upstream SSL certificate verify error: (20:unable to get local issuer certificate) while SSL handshaking to upstream, client: 141.77.119.231, server: tam-ci.mygroup.net, request: ?GET /sonarqube/ HTTP/2.0?, upstream: "https://10.248.117.61:443/sonarqube/", host: ?tam-ci.mygroup.net? 
Here?s my configuration: location /sonarqube/ { proxy_pass https://cvm23801.mygroup.net$request_uri; # TODO needed here ? proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # verify the Traefik certificate # TODO need to use own client certificate ??? #proxy_ssl_certificate /etc/nginx/conf.d/tam-ci.pem; #proxy_ssl_certificate_key /etc/nginx/conf.d/tam-ci.key; proxy_ssl_trusted_certificate /etc/nginx/conf.d/mygroup-ca.pem; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; #proxy_ssl_name tam-ci.bmwgroup.net; proxy_ssl_verify on; #proxy_ssl_server_name off; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; proxy_read_timeout 1800; proxy_connect_timeout 1800; proxy_send_timeout 1800; send_timeout 1800; } Any idea why I always see this error ? Or how to fix it? The proxy_ssl_trusted_certificate is a valid certificate chain containing an Intermediata as well as a root certificate (in one file) Thanx in advance, Torsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Mon Mar 23 11:04:36 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 23 Mar 2020 14:04:36 +0300 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: <92f6dafa927e3afc97a5fc6b69748643.NginxMailingListEnglish@forum.nginx.org> References: <7407492cf399e3fa9048b961ff88748c.NginxMailingListEnglish@forum.nginx.org> <92f6dafa927e3afc97a5fc6b69748643.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On 22 Mar 2020, at 21:39, itpp2012 wrote: > > How about this as this catches all 3 while conditions: > > +++ src/event/ngx_event_openssl.c > @@ -2318, > > c->ssl->no_wait_shutdown = 1; > c->ssl->no_send_shutdown = 1; > > if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { > ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > "peer shutdown SSL cleanly"); > return NGX_DONE; > } > > + /* https://forum.nginx.org/read.php?2,287377 */ > + /* https://github.com/openssl/openssl/issues/11381 */ > +#ifdef SSL_R_UNEXPECTED_EOF_WHILE_READING > + if (sslerr == SSL_ERROR_SSL && ERR_GET_REASON(ERR_peek_error()) > + == SSL_R_UNEXPECTED_EOF_WHILE_READING) { > + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "ssl3_read_n:unexpected eof while reading"); > + return NGX_DONE; > + } > +#endif > + > ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed"); How would this catch the reported error in SSL_do_handshake() ? I'd replicate this check in ngx_ssl_handshake(). And probably for SSL_read_early_data, SSL_shutdown, SSL_peak, (ok, we don't use SSL_peak), but this is a moot point. -- Sergey Kandaurov From nginx-forum at forum.nginx.org Mon Mar 23 11:41:28 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Mon, 23 Mar 2020 07:41:28 -0400 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: Message-ID: It doesn't and there are a few more for which this doesn't work either, it needs a lot more work and testing. I had a new concept patch but today decided to roll back to 1.1.1d and back port 1.1.1e (de) patches only. Only NGX_ERROR mitigates a truncation attack, not NGX_DONE (which is open for debate). 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287377,287426#msg-287426 From mdounin at mdounin.ru Mon Mar 23 12:34:51 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 23 Mar 2020 15:34:51 +0300 Subject: openssl 1.1.1e 14095126:SSL routines:ssl3_read_n In-Reply-To: References: <7407492cf399e3fa9048b961ff88748c.NginxMailingListEnglish@forum.nginx.org> <92f6dafa927e3afc97a5fc6b69748643.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200323123451.GA1578@mdounin.ru> Hello! On Mon, Mar 23, 2020 at 02:04:36PM +0300, Sergey Kandaurov wrote: > > > On 22 Mar 2020, at 21:39, itpp2012 wrote: > > > > How about this as this catches all 3 while conditions: > > > > +++ src/event/ngx_event_openssl.c > > @@ -2318, > > > > c->ssl->no_wait_shutdown = 1; > > c->ssl->no_send_shutdown = 1; > > > > if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { > > ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > > "peer shutdown SSL cleanly"); > > return NGX_DONE; > > } > > > > + /* https://forum.nginx.org/read.php?2,287377 */ > > + /* https://github.com/openssl/openssl/issues/11381 */ > > +#ifdef SSL_R_UNEXPECTED_EOF_WHILE_READING > > + if (sslerr == SSL_ERROR_SSL && ERR_GET_REASON(ERR_peek_error()) > > + == SSL_R_UNEXPECTED_EOF_WHILE_READING) { > > + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > > + "ssl3_read_n:unexpected eof while reading"); > > + return NGX_DONE; > > + } > > +#endif > > + > > ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed"); > > How would this catch the reported error in SSL_do_handshake() ? > I'd replicate this check in ngx_ssl_handshake(). > And probably for SSL_read_early_data, SSL_shutdown, SSL_peak, > (ok, we don't use SSL_peak), but this is a moot point. Given the session resumption issue[1], I tend to think the best solution for now is to recommend to avoid using OpenSSL 1.1.1e. [1] https://github.com/openssl/openssl/issues/11378 -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Mar 23 13:30:36 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Mon, 23 Mar 2020 09:30:36 -0400 Subject: openssl 1.1.1d SSL_read() failed in error log Message-ID: <95c2f15f9303a326047d26fadb91edbf.NginxMailingListEnglish@forum.nginx.org> I use openssl 1.1.1d, SSL_read() failed in error log. not often, a few, but what does this mean, thanks. [crit] ... SSL_read() failed (SSL: error:14191044:SSL routines:tls1_enc:internal error) while processing HTTP/2 connection [crit] ... SSL_read() failed (SSL: error:14191044:SSL routines:tls1_enc:internal error) while processing HTTP/2 connection Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287429,287429#msg-287429 From pluknet at nginx.com Mon Mar 23 13:56:59 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 23 Mar 2020 16:56:59 +0300 Subject: openssl 1.1.1d SSL_read() failed in error log In-Reply-To: <95c2f15f9303a326047d26fadb91edbf.NginxMailingListEnglish@forum.nginx.org> References: <95c2f15f9303a326047d26fadb91edbf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <39879841-BE94-49D6-80CF-97FC4F128E0A@nginx.com> > On 23 Mar 2020, at 16:30, q1548 wrote: > > I use openssl 1.1.1d, SSL_read() failed in error log. > not often, a few, but what does this mean, thanks. > > [crit] ... SSL_read() failed (SSL: error:14191044:SSL > routines:tls1_enc:internal error) while processing HTTP/2 connection > [crit] ... 
SSL_read() failed (SSL: error:14191044:SSL > routines:tls1_enc:internal error) while processing HTTP/2 connection There was a TLS record decryption error for some reason. Not much details. -- Sergey Kandaurov From nginx-forum at forum.nginx.org Mon Mar 23 22:35:42 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Mon, 23 Mar 2020 18:35:42 -0400 Subject: openssl 1.1.1d SSL_read() failed in error log In-Reply-To: <39879841-BE94-49D6-80CF-97FC4F128E0A@nginx.com> References: <39879841-BE94-49D6-80CF-97FC4F128E0A@nginx.com> Message-ID: Thank you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287429,287435#msg-287435 From nginx-forum at forum.nginx.org Tue Mar 24 12:38:36 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Tue, 24 Mar 2020 08:38:36 -0400 Subject: USR2 signal not work, failed to upgrade executable Message-ID: <8ad037ef90b6bfe3d8815f506d5b010a.NginxMailingListEnglish@forum.nginx.org> Hello, Both nginx_new and nginx_old are good, after USR2 signal be sent to the master process, it can not start new master process. I use these steps: 1. cp -f nginx_new nginx_old 2. kill -USR2 $( cat /usr/local/nginx/logs/nginx.pid ) 3. ps aux | grep nginx no new master process, only old master process, error.log show: [emerg] 19205#0: bind() to 0.0.0.0:80 failed (98: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg] 19205#0: still could not bind() nginx: [emerg] still could not bind() Please help, thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287439,287439#msg-287439 From nginx-forum at forum.nginx.org Tue Mar 24 22:01:35 2020 From: nginx-forum at forum.nginx.org (robe007) Date: Tue, 24 Mar 2020 18:01:35 -0400 Subject: Nginx load balancing to keep sessions between IIS servers Message-ID: <12621a6aac01c3d84b4133b6efcfac5b.NginxMailingListEnglish@forum.nginx.org> I have set up a load balancer with NGINX for two IIS web servers that works with sessions. Here is the NGINX configuration file I have created for the load balancing: #Log Format log_format upstreamlog '$server_name to: $upstream_addr [$request] ' 'upstream_response_time $upstream_response_time ' 'msec $msec request_time $request_time'; #Upstream upstream mybalancer { ip_hash; server server1.com:80; server server2.com:80; } #Server server { listen 80; listen [::]:80; server_name server3.com; access_log /var/log/nginx/access.log upstreamlog; location / { proxy_pass http://mybalancer; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } When I make a request to server3.com it gets redirected -for example- to server1.com. Next I make the login, go to a specific page, let's say: server1.com/welcome/maps. Everything is ok. Now I turn off server1.com, and NGINX redirects me to server2.com, but prompts me to the login page. My question: It's possible to configure NGINX to keep the same sessions when one server goes down? This means that -in my example- NGINX could redirect me to server2.com/welcome/maps with the same session. PD: I have read on other posts about setting this options: proxy_cookie_path ~*^/.* /; add_header "Set-Cookie" "lb=$upstream_http_X_Loadbalance_ID"; but does not works. 
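As a side note, it can help to confirm from the browser which backend actually answered each request before and after server1.com is turned off; a minimal, debugging-only sketch (the header name X-Upstream is arbitrary), added inside the existing location / block:

    # debugging aid: expose which upstream served this response
    add_header X-Upstream $upstream_addr always;

That makes it easier to verify that the ip_hash failover itself works and that only the session is being lost.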
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287447,287447#msg-287447 From randyorbs at protonmail.com Tue Mar 24 23:15:59 2020 From: randyorbs at protonmail.com (randyorbs) Date: Tue, 24 Mar 2020 23:15:59 +0000 Subject: 2 locations, 2 _different_ cache valid settings, but same cache & pass-through Message-ID: <86PfBe7McMJNK7RDL9hQWsjQwR5T5ziR6aYk2Mh0X4e250aOJy_IF7wWRqnZfwW_90qnhca9V6zIN9l7phYGjcbmgYMD7N2VrQiUrlRooHI=@protonmail.com> I'd like to setup a reverse proxy with cache that will allow me to... 1. expose two different locations... location /foo {} location /bar {} 2. resolve to the same pass-through... location /foo { proxy_pass "http://myhost.io/go"; } location /bar { proxy_pass "http://myhost.io/go"; } 3. use the same cache... location /foo { proxy_pass "http://myhost.io/go"; proxy_cache shared_cache; } location /bar { proxy_pass "http://myhost.io/go"; proxy_cache shared_cache; } 4. use _different_ cache valid settings... location /foo { proxy_pass "http://myhost.io/go"; proxy_cache shared_cache; proxy_cache_valid any 5m; } location /bar { proxy_pass "http://myhost.io/go"; proxy_cache shared_cache; proxy_cache_valid any 10m; } What I have found is that I can request /foo, then /bar and the /bar result will be an immediate HIT on the cache, which is good - the keys are the same and they are both aware of the cache. However, now that I've requested /bar any requests to /foo will result in cache HITs for 10 minutes instead of the 5 minutes I want. If I never hit /bar, then /foo will cache HIT for the correct 5 minutes. Any thoughts on how I can use NGINX to configure my way into a solution for my unusual (?) use-case? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 25 05:08:16 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Wed, 25 Mar 2020 01:08:16 -0400 Subject: USR2 signal not work, failed to upgrade executable In-Reply-To: <8ad037ef90b6bfe3d8815f506d5b010a.NginxMailingListEnglish@forum.nginx.org> References: <8ad037ef90b6bfe3d8815f506d5b010a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5ca19e88f3c585373eb0f66cbfa6fd5e.NginxMailingListEnglish@forum.nginx.org> It is okay now, thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287439,287448#msg-287448 From shahzaib.cb at gmail.com Wed Mar 25 14:50:34 2020 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Wed, 25 Mar 2020 19:50:34 +0500 Subject: svg broken ! Message-ID: Hi, We've setup Nginx as Edge node. Website is running fine from edge (caching & proxying requests to origin as required) . However, proxying requests for .svg showing the following error while origin ndoe loads it fine. https://i.imgur.com/oYNl7UP.png Mime-type is also configured in nginx for svg on edge side but issue still persists: image/svg+xml svg svgz; Here is the directive for svg: https://pastebin.com/1JStJTBC =========================== Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 26 03:36:58 2020 From: nginx-forum at forum.nginx.org (pdh0710) Date: Wed, 25 Mar 2020 23:36:58 -0400 Subject: SSL_read() failed on Nginx built with new OpenSSL 1.1.1e Message-ID: (Please excuse my English) I built Nginx 1.16.1 (current stable version) with OpenSSL 1.1.1e(newly released), PCRE 8.44 and Zlib 1.2.11. However, sometimes(not always) the below error logs are generated. 
2020/03/26 09:53:19 [crit] 24020#24020: *6 SSL_read() failed (SSL: error:14095126:SSL routines:ssl3_read_n:unexpected eof while reading) while keepalive, client: 68.183.***.***, server: 0.0.0.0:443 The Nginx built with OpenSSL 1.1.1d does not generate the error logs. I don't know how I can fix this problem. Belows are my Nginx build configuration and nginx.conf. --*--*--*--*--*-- ./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' \ --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' \ --prefix=/nginx --user=www-data --group=www-data \ --error-log-path=/nginx/srv/nginx-error.log --http-log-path=/nginx/srv/nginx-access.log \ --pid-path=/nginx/srv/nginx.pid --lock-path=/nginx/srv/nginx.lock \ --with-zlib=../zlib-1.2.11 --with-pcre=../pcre-8.44 --with-openssl=../openssl-1.1.1e \ --with-pcre-jit --with-file-aio --with-threads --with-http_v2_module \ --without-http_uwsgi_module --without-http_scgi_module \ --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module \ --with-http_ssl_module --without-http_memcached_module \ --with-http_gunzip_module --with-http_gzip_static_module --*--*--*--*--*-- worker_processes auto; ? events { worker_connections 1024; } ? http { include mime.types; default_type application/octet-stream; ? log_format main '$time_iso8601 $remote_addr $status $body_bytes_sent "$request" $remote_user "$http_referer" "$http_user_agent" "$http_x_forwarded_for"'; ? server_tokens off; client_max_body_size 10m; client_body_buffer_size 128k; client_body_temp_path /var/tmp/ngx_client_body_temp; proxy_temp_path /var/tmp/ngx_proxy_temp; fastcgi_temp_path /var/tmp/ngx_proxy_temp; merge_slashes on; charset utf-8; tcp_nopush on; tcp_nodelay on; sendfile on; sendfile_max_chunk 1m; keepalive_timeout 70s; ? gzip on; gzip_comp_level 5; gzip_proxied any; gzip_min_length 1000; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_types text/plain text/css text/javascript application/javascript text/x-js application/json application/x-javascript application/octet-stream text/mathml text/xml application/xml application/atom+xml application/rss+xml; gzip_vary on; gzip_buffers 16 8k; ? server { server_name myserver.com; listen 443 ssl http2; keepalive_timeout 70; ? #ref : http://nginx.org/en/docs/http/configuring_https_servers.html ? ssl_certificate /etc/letsencrypt/live/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/privkey.pem; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; ? ssl_session_cache shared:le_nginx_SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; ssl_ecdh_curve X25519:sect571r1:secp521r1:secp384r1; ssl_early_data on; ? ? error_page 400 401 402 403 404 500 502 503 504 /err.html; location = /err.html { root /nginx/www; add_header Set-Cookie "ErrorCode=${status}; path=/;" always; internal; } ? location / { root /nginx/www; index index.html; try_files $uri $uri/index.html =404; aio threads; ? 
location ~ \.(css|js|ico|png|gif)$ { access_log off; } } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287464,287464#msg-287464 From zn1314 at 126.com Thu Mar 26 05:14:42 2020 From: zn1314 at 126.com (David Ni) Date: Thu, 26 Mar 2020 13:14:42 +0800 (CST) Subject: unsubscribe In-Reply-To: <39d7a0be.7ddb.170f2a4d5cc.Coremail.zn1314@126.com> References: <39d7a0be.7ddb.170f2a4d5cc.Coremail.zn1314@126.com> Message-ID: <2ae76dff.3b7d.171154392a9.Coremail.zn1314@126.com> unsubscribe At 2020-03-19 19:54:13, "David Ni" wrote: unsubscribe -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 26 08:14:24 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 26 Mar 2020 04:14:24 -0400 Subject: SSL_read() failed on Nginx built with new OpenSSL 1.1.1e In-Reply-To: References: Message-ID: <1f9d83e2377f2aeaf2212792ddc1b754.NginxMailingListEnglish@forum.nginx.org> See https://forum.nginx.org/read.php?2,287377 Revert back to 1.1.1d Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287464,287466#msg-287466 From r at roze.lv Thu Mar 26 10:30:00 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 26 Mar 2020 12:30:00 +0200 Subject: SSL_read() failed on Nginx built with new OpenSSL 1.1.1e In-Reply-To: References: Message-ID: <000001d60359$813adee0$83b09ca0$@roze.lv> > The Nginx built with OpenSSL 1.1.1d does not generate the error logs. I don't > know how I can fix this problem. > Belows are my Nginx build configuration and nginx.conf. I'm using 1.1.1e bit with reverted EOF patch (so far haven't seen any issues and it seems they are going to revert it anyways): cd openssl-1.1.1e wget https://patch-diff.githubusercontent.com/raw/openssl/openssl/pull/10882.patch patch -R -p1 < 10882.patch then recompile nginx rr From nginx-forum at forum.nginx.org Fri Mar 27 03:44:51 2020 From: nginx-forum at forum.nginx.org (pdh0710) Date: Thu, 26 Mar 2020 23:44:51 -0400 Subject: SSL_read() failed on Nginx built with new OpenSSL 1.1.1e In-Reply-To: <000001d60359$813adee0$83b09ca0$@roze.lv> References: <000001d60359$813adee0$83b09ca0$@roze.lv> Message-ID: <36d7e7889c61b5c2e165f2e2cfdbf8a6.NginxMailingListEnglish@forum.nginx.org> > cd openssl-1.1.1e > wget https://patch-diff.githubusercontent.com/raw/openssl/openssl/pull/10882.patch > patch -R -p1 < 10882.patch > > then recompile nginx Thank you. This solution fix the problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287464,287473#msg-287473 From francis at daoine.org Fri Mar 27 08:21:55 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 27 Mar 2020 08:21:55 +0000 Subject: 2 locations, 2 _different_ cache valid settings, but same cache & pass-through In-Reply-To: <86PfBe7McMJNK7RDL9hQWsjQwR5T5ziR6aYk2Mh0X4e250aOJy_IF7wWRqnZfwW_90qnhca9V6zIN9l7phYGjcbmgYMD7N2VrQiUrlRooHI=@protonmail.com> References: <86PfBe7McMJNK7RDL9hQWsjQwR5T5ziR6aYk2Mh0X4e250aOJy_IF7wWRqnZfwW_90qnhca9V6zIN9l7phYGjcbmgYMD7N2VrQiUrlRooHI=@protonmail.com> Message-ID: <20200327082155.GF20939@daoine.org> On Tue, Mar 24, 2020 at 11:15:59PM +0000, randyorbs wrote: Hi there, > 4. use _different_ cache valid settings... 
> location /foo { > proxy_pass "http://myhost.io/go"; > proxy_cache shared_cache; > proxy_cache_valid any 5m; > } > location /bar { > proxy_pass "http://myhost.io/go"; > proxy_cache shared_cache; > proxy_cache_valid any 10m; > } > > What I have found is that I can request /foo, then /bar and the /bar result will be an immediate HIT on the cache, which is good - the keys are the same and they are both aware of the cache. However, now that I've requested /bar any requests to /foo will result in cache HITs for 10 minutes instead of the 5 minutes I want. If I never hit /bar, then /foo will cache HIT for the correct 5 minutes. > > Any thoughts on how I can use NGINX to configure my way into a solution for my unusual (?) use-case? The nginx cache file structure includes the validity period within the stored object file. The system does not care how the object file got there; it cares about the file name and file contents. So, "no". (At least, not without writing your own special-case caching system.) What is the thing that you want to achieve? Perhaps there is an alternate way to get to the same desired end result. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 27 08:30:41 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 27 Mar 2020 08:30:41 +0000 Subject: Nginx load balancing to keep sessions between IIS servers In-Reply-To: <12621a6aac01c3d84b4133b6efcfac5b.NginxMailingListEnglish@forum.nginx.org> References: <12621a6aac01c3d84b4133b6efcfac5b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200327083041.GG20939@daoine.org> On Tue, Mar 24, 2020 at 06:01:35PM -0400, robe007 wrote: Hi there, > I have set up a load balancer with NGINX for two IIS web servers that works > with sessions. What is "a session"? The answer to that may make it clear how to achieve what you want. > upstream mybalancer { > ip_hash; > server server1.com:80; > server server2.com:80; > } > > #Server > server { > server_name server3.com; > location / { > proxy_pass http://mybalancer; > } > } > > When I make a request to server3.com it gets redirected -for example- to > server1.com. Next I make the login, go to a specific page, let's say: > server1.com/welcome/maps. Everything is ok. That... probably should not happen. In a reverse proxy situation, your client should not know (or care) whether it is talking to server1 or to server2 -- it only interacts with server3. (It is possible that I am just misunderstanding the terminology here.) > Now I turn off server1.com, and NGINX redirects me to server2.com, but > prompts me to the login page. I think "no". server2 asks you to login; nginx does not. And that difference matters here. > It's possible to configure NGINX to keep the same sessions when one server > goes down? This means that -in my example- NGINX could redirect me to > server2.com/welcome/maps with the same session. Back to the first question -- what is a session? I suspect that in your system it is "state stored on the back-end server1 or server2"; and it is not anything that the browser knows about and not anything that nginx knows about. The browser might have a "key" to the session in a cookie. If that is the case, then the fix would be for you to make sure that server1 and server2 both have the same shared idea of a session, so that the same cookie sent to either server will end up with the same response. And there is nothing the nginx can do about arranging that. 
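What nginx can do, short of sharing that state, is pin each client to one backend by its session cookie instead of by source IP, so clients behind a shared NAT address are still spread across both servers and only an actual server failure loses a session. A minimal sketch of the upstream block, assuming the application issues a cookie named sessionid (the cookie name is a placeholder for whatever IIS actually sets):

    upstream mybalancer {
        # requests without the cookie (first visit) all hash to the same server
        hash $cookie_sessionid consistent;
        server server1.com:80;
        server server2.com:80;
    }

When a server does go down, the client is rehashed onto the surviving one and will still be asked to log in again, for exactly the reason above.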
f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Mar 27 08:34:13 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 27 Mar 2020 08:34:13 +0000 Subject: svg broken ! In-Reply-To: References: Message-ID: <20200327083413.GH20939@daoine.org> On Wed, Mar 25, 2020 at 07:50:34PM +0500, shahzaib mushtaq wrote: Hi there, > We've setup Nginx as Edge node. Website is running fine from edge (caching > & proxying requests to origin as required) . However, proxying requests for > .svg showing the following error while origin ndoe loads it fine. What is the difference in the output between requests like curl -i http://nginx-server/example.svg and curl -i http://origin-server/example.svg ? That difference might hint where the problem is caused. f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Fri Mar 27 09:32:07 2020 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Fri, 27 Mar 2020 14:32:07 +0500 Subject: svg broken ! In-Reply-To: <20200327083413.GH20939@daoine.org> References: <20200327083413.GH20939@daoine.org> Message-ID: >From ORIGIN (WORKING) https://i.imgur.com/4EJ8VfM.png >From Edge (Broken SVG) https://i.imgur.com/MZ6HAsU.png On Fri, Mar 27, 2020 at 1:34 PM Francis Daly wrote: > On Wed, Mar 25, 2020 at 07:50:34PM +0500, shahzaib mushtaq wrote: > > Hi there, > > > We've setup Nginx as Edge node. Website is running fine from edge > (caching > > & proxying requests to origin as required) . However, proxying requests > for > > .svg showing the following error while origin ndoe loads it fine. > > What is the difference in the output between requests like > > curl -i http://nginx-server/example.svg > > and > > curl -i http://origin-server/example.svg > > ? > > That difference might hint where the problem is caused. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Mar 27 15:31:11 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 27 Mar 2020 15:31:11 +0000 Subject: svg broken ! In-Reply-To: References: <20200327083413.GH20939@daoine.org> Message-ID: <20200327153111.GI20939@daoine.org> On Fri, Mar 27, 2020 at 02:32:07PM +0500, shahzaib mushtaq wrote: Hi there, > From ORIGIN (WORKING) > https://i.imgur.com/4EJ8VfM.png > > From Edge (Broken SVG) > https://i.imgur.com/MZ6HAsU.png Could you include the words as words, rather than as a link to a picture of words? That will make it much easier to copy-paste and "diff" the output, to see what is different in the two responses. And then your browser manual may describe what input it expects, so that the "wrong" output can be examined to see what the browser wants that is not there. (Presumably -- it will be one of the things that is different between the "good" and the "bad" responses.) Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Mar 27 15:39:28 2020 From: nginx-forum at forum.nginx.org (Faulteh) Date: Fri, 27 Mar 2020 11:39:28 -0400 Subject: NGINX on windows Message-ID: <43f3ec908e70342ea386d91d33e92774.NginxMailingListEnglish@forum.nginx.org> HI All, I'm just wondering what the current limitations on worker_connections are in Windows? Is it 1024 as I can see in some discussions or is it able to be set higher? Thanks. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287488,287488#msg-287488 From nginx-forum at forum.nginx.org Fri Mar 27 16:04:49 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 27 Mar 2020 12:04:49 -0400 Subject: NGINX on windows In-Reply-To: <43f3ec908e70342ea386d91d33e92774.NginxMailingListEnglish@forum.nginx.org> References: <43f3ec908e70342ea386d91d33e92774.NginxMailingListEnglish@forum.nginx.org> Message-ID: Look at this version, per-compiled, high performance http://nginx-win.ecsds.eu/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287488,287489#msg-287489 From kworthington at gmail.com Fri Mar 27 16:49:50 2020 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 27 Mar 2020 12:49:50 -0400 Subject: NGINX on windows In-Reply-To: References: <43f3ec908e70342ea386d91d33e92774.NginxMailingListEnglish@forum.nginx.org> Message-ID: I offer 32-bit and 64-bit pre-compiled MSI installer versions of Nginx for Windows for free download here: https://kevinworthington.com/nginx-for-windows/ Nginx.org also has supported binaries for free download. Best regards, Kevin -- Kevin Worthington kworthington at gmail.com https://kevinworthington.com/ https://twitter.com/kworthington On Fri, Mar 27, 2020 at 12:04 PM itpp2012 wrote: > Look at this version, per-compiled, high performance > http://nginx-win.ecsds.eu/ > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287488,287489#msg-287489 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 27 16:51:30 2020 From: nginx-forum at forum.nginx.org (waqas9980) Date: Fri, 27 Mar 2020 12:51:30 -0400 Subject: Nginx Truncating Logs Message-ID: <35814f1d4a3df369bcbdb97a9a004bd6.NginxMailingListEnglish@forum.nginx.org> Hi, I am using Java application with NGINX. When my request have large size response body, nginx truncates that logs in access.log file. I have used proxy_buffering off; but that didn't work. Kindly suggest. NGINX version : 1.10.3 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287490,287490#msg-287490 From nginx-forum at forum.nginx.org Fri Mar 27 16:58:08 2020 From: nginx-forum at forum.nginx.org (Faulteh) Date: Fri, 27 Mar 2020 12:58:08 -0400 Subject: NGINX on windows In-Reply-To: References: Message-ID: This doesn't answer my question. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287488,287492#msg-287492 From kworthington at gmail.com Fri Mar 27 17:03:43 2020 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 27 Mar 2020 13:03:43 -0400 Subject: NGINX on windows In-Reply-To: References: Message-ID: I think it's 1024. Best regards, Kevin -- Kevin Worthington kworthington at gmail.com https://kevinworthington.com/ https://twitter.com/kworthington On Fri, Mar 27, 2020 at 12:58 PM Faulteh wrote: > This doesn't answer my question. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287488,287492#msg-287492 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Mar 27 17:32:01 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 Mar 2020 20:32:01 +0300 Subject: NGINX on windows In-Reply-To: <43f3ec908e70342ea386d91d33e92774.NginxMailingListEnglish@forum.nginx.org> References: <43f3ec908e70342ea386d91d33e92774.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200327173201.GE3546@mdounin.ru> Hello! On Fri, Mar 27, 2020 at 11:39:28AM -0400, Faulteh wrote: > I'm just wondering what the current limitations on worker_connections are in > Windows? Is it 1024 as I can see in some discussions or is it able to be set > higher? Since nginx 1.15.9 you can use higher number of worker_connections with the "poll" event method (only available on Windows Vista or newer). Note though that this isn't the only limitation of nginx when running on Windows, and it's generally a bad idea to use it for production. Some basic limitations are outlined in the documentation, see http://nginx.org/en/docs/windows.html. -- Maxim Dounin http://mdounin.ru/ From bee.lists at gmail.com Fri Mar 27 18:42:44 2020 From: bee.lists at gmail.com (Bee.Lists) Date: Fri, 27 Mar 2020 14:42:44 -0400 Subject: Nginx Truncating Logs In-Reply-To: <35814f1d4a3df369bcbdb97a9a004bd6.NginxMailingListEnglish@forum.nginx.org> References: <35814f1d4a3df369bcbdb97a9a004bd6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Look at logrotate, as it will reduce file size. > On Mar 27, 2020, at 12:51 PM, waqas9980 wrote: > > Hi, > I am using Java application with NGINX. When my request have large size > response body, nginx truncates that logs in access.log file. I have used > proxy_buffering off; but that didn't work. Kindly suggest. > > NGINX version : 1.10.3 Cheers, Bee From randyorbs at protonmail.com Fri Mar 27 18:59:05 2020 From: randyorbs at protonmail.com (randyorbs) Date: Fri, 27 Mar 2020 18:59:05 +0000 Subject: 2 locations, 2 _different_ cache valid settings, but same cache & pass-through In-Reply-To: <20200327082155.GF20939@daoine.org> References: <86PfBe7McMJNK7RDL9hQWsjQwR5T5ziR6aYk2Mh0X4e250aOJy_IF7wWRqnZfwW_90qnhca9V6zIN9l7phYGjcbmgYMD7N2VrQiUrlRooHI=@protonmail.com> <20200327082155.GF20939@daoine.org> Message-ID: > What is the thing that you want to achieve? I have two clients, foo and bar. I charge foo to hit /foo and bar to hit /bar at different rates/costs. They've both agreed that the data can be stale, but how stale is different for each - 5 minutes for foo, 10 minutes for bar. Very often, foo and bar (though using different end-points) are making the exact same requests and getting the exact same responses. I too get charged for hitting /myhost.io/go when my clients hit /foo and /bar. I want to limit how often I hit /myhost.io/go to reduce costs, but also ensure I fulfill my "how stale is OK" agreements with my clients. What I need is a cache of data that is aware that the validity of its data is dependent on who - foo or bar - is retrieving it. In practice, this means that requests from both foo and bar may be responded to with cache data from the other's previous request, but most likely the cache will be populated with responses to foo's requests to the benefit of bar, which is fine. It would be great if I could config my way into a solution to my use-case using NGINX. ??????? Original Message ??????? On Friday, March 27, 2020 1:21 AM, Francis Daly wrote: > On Tue, Mar 24, 2020 at 11:15:59PM +0000, randyorbs wrote: > > Hi there, > > > 4. use different cache valid settings... 
> > location /foo { > > proxy_pass "http://myhost.io/go"; > > proxy_cache shared_cache; > > proxy_cache_valid any 5m; > > } > > location /bar { > > proxy_pass "http://myhost.io/go"; > > proxy_cache shared_cache; > > proxy_cache_valid any 10m; > > } > > > > > > What I have found is that I can request /foo, then /bar and the /bar result will be an immediate HIT on the cache, which is good - the keys are the same and they are both aware of the cache. However, now that I've requested /bar any requests to /foo will result in cache HITs for 10 minutes instead of the 5 minutes I want. If I never hit /bar, then /foo will cache HIT for the correct 5 minutes. > > Any thoughts on how I can use NGINX to configure my way into a solution for my unusual (?) use-case? > > The nginx cache file structure includes the validity period within the > stored object file. > > The system does not care how the object file got there; it cares about > the file name and file contents. > > So, "no". > > (At least, not without writing your own special-case caching system.) > > What is the thing that you want to achieve? Perhaps there is an alternate > way to get to the same desired end result. > > f > > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > Francis Daly francis at daoine.org > > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Fri Mar 27 23:03:16 2020 From: r at roze.lv (Reinis Rozitis) Date: Sat, 28 Mar 2020 01:03:16 +0200 Subject: 2 locations, 2 _different_ cache valid settings, but same cache & pass-through In-Reply-To: References: <86PfBe7McMJNK7RDL9hQWsjQwR5T5ziR6aYk2Mh0X4e250aOJy_IF7wWRqnZfwW_90qnhca9V6zIN9l7phYGjcbmgYMD7N2VrQiUrlRooHI=@protonmail.com> <20200327082155.GF20939@daoine.org> Message-ID: <000001d6048b$e6a979c0$b3fc6d40$@roze.lv> > What I need is a cache of data that is aware that the validity of its data is > dependent on who - foo or bar - is retrieving it. In practice, this means that > requests from both foo and bar may be responded to with cache data from > the other's previous request, but most likely the cache will be populated with > responses to foo's requests to the benefit of bar, which is fine. > > I too get charged for hitting /myhost.io/go when my clients hit /foo and /bar. > > It would be great if I could config my way into a solution to my use-case > using NGINX. Just a quick idea (though maybe there is a more elegant way). Depending on how do you charge your clients (like if it matters which client actually made the request to the origin) instead of using the same proxy_cache for both you could make 2 separate caches (one for the 5 min other for 10 min) and then chain them. 
Something like: location /foo { proxy_pass "http://myhost.io/go"; proxy_cache 5min_shared_cache; proxy_cache_valid any 5m; } location /bar { proxy_pass http://localhost/foo; // obviously you need to adjust/rewrite the uris proxy_cache 10min_shared_cache; proxy_cache_valid any 10m; } So in case there is a request to /foo it will be forwarded to origin and cached for 5 mins, if there is request to /bar - a subrequest will be made to /foo and if there is a cached object in the 5min cache you'll get the object without making request to origin and the object will be saved for 10 mins, but if there is no object in the 5 min cache a single request to origin will populate both caches. There are some drawbacks of course - the /bar requests will always populate also the 5min /foo cache - so every object will be saved twice = extra disk space. Also depending if you ignore or not the Expires (origin) headers the object in the 10min cache could end up actually older than 10 minutes (if you get the object from the 5min cache at 4:59 age (sadly nginx doesn't have Age header) and then add 10 mins to it. So it may or may not be a problem if you can adjust the times a bit - like add only ~7 minutes if the /foo responds with cache hit status) Going the other way around (/foo -> /bar) could be possible but has extra complexity (like checking if the object is not too old etc) for example using LUA you could implement whole proxy and file store logic. rr From randyorbs at protonmail.com Sun Mar 29 06:48:31 2020 From: randyorbs at protonmail.com (randyorbs) Date: Sun, 29 Mar 2020 06:48:31 +0000 Subject: 2 locations, 2 _different_ cache valid settings, but same cache & pass-through In-Reply-To: <000001d6048b$e6a979c0$b3fc6d40$@roze.lv> References: <86PfBe7McMJNK7RDL9hQWsjQwR5T5ziR6aYk2Mh0X4e250aOJy_IF7wWRqnZfwW_90qnhca9V6zIN9l7phYGjcbmgYMD7N2VrQiUrlRooHI=@protonmail.com> <20200327082155.GF20939@daoine.org> <000001d6048b$e6a979c0$b3fc6d40$@roze.lv> Message-ID: > Just a quick idea... Thank you for the idea. I was able to quickly demo it out and play around with it a bit. You mentioned /bar objects being older than desired with this idea, which I agree is a problem, but in another way. If I reconfigure /bar to say 60 minutes, then that means /bar would not benefit from /foo's more frequent requests to http://myhost.io/go i.e. /bar will use its cache for 60 minutes, thus, missing out on all the fresh data /foo is seeing. I want both /foo and /bar to use the most recent data either has received. I admit, my arbitrary choice of 5 and 10 minutes for my example was not ideal. Thanks again. ??????? Original Message ??????? On Friday, March 27, 2020 4:03 PM, Reinis Rozitis wrote: > > What I need is a cache of data that is aware that the validity of its data is > > dependent on who - foo or bar - is retrieving it. In practice, this means that > > requests from both foo and bar may be responded to with cache data from > > the other's previous request, but most likely the cache will be populated with > > responses to foo's requests to the benefit of bar, which is fine. > > I too get charged for hitting /myhost.io/go when my clients hit /foo and /bar. > > It would be great if I could config my way into a solution to my use-case > > using NGINX. > > Just a quick idea (though maybe there is a more elegant way). 
> > Depending on how do you charge your clients (like if it matters which client actually made the request to the origin) instead of using the same proxy_cache for both you could make 2 separate caches (one for the 5 min other for 10 min) and then chain them. > > Something like: > > location /foo { > proxy_pass "http://myhost.io/go"; > proxy_cache 5min_shared_cache; > proxy_cache_valid any 5m; > } > > location /bar { > proxy_pass http://localhost/foo; // obviously you need to adjust/rewrite the uris > proxy_cache 10min_shared_cache; > proxy_cache_valid any 10m; > } > > So in case there is a request to /foo it will be forwarded to origin and cached for 5 mins, > if there is request to /bar - a subrequest will be made to /foo and if there is a cached object in the 5min cache you'll get the object without making request to origin and the object will be saved for 10 mins, but if there is no object in the 5 min cache a single request to origin will populate both caches. > > There are some drawbacks of course - the /bar requests will always populate also the 5min /foo cache - so every object will be saved twice = extra disk space. > > Also depending if you ignore or not the Expires (origin) headers the object in the 10min cache could end up actually older than 10 minutes (if you get the object from the 5min cache at 4:59 age (sadly nginx doesn't have Age header) and then add 10 mins to it. So it may or may not be a problem if you can adjust the times a bit - like add only ~7 minutes if the /foo responds with cache hit status) > > Going the other way around (/foo -> /bar) could be possible but has extra complexity (like checking if the object is not too old etc) for example using LUA you could implement whole proxy and file store logic. > > rr > > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From shahzaib.cb at gmail.com Mon Mar 30 08:31:02 2020 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Mon, 30 Mar 2020 13:31:02 +0500 Subject: svg broken ! In-Reply-To: <20200327153111.GI20939@daoine.org> References: <20200327083413.GH20939@daoine.org> <20200327153111.GI20939@daoine.org> Message-ID: PLease check this: https://pastebin.com/VZKHrM1u On Fri, Mar 27, 2020 at 8:31 PM Francis Daly wrote: > On Fri, Mar 27, 2020 at 02:32:07PM +0500, shahzaib mushtaq wrote: > > Hi there, > > > From ORIGIN (WORKING) > > https://i.imgur.com/4EJ8VfM.png > > > > From Edge (Broken SVG) > > https://i.imgur.com/MZ6HAsU.png > > Could you include the words as words, rather than as a link to a picture > of words? > > That will make it much easier to copy-paste and "diff" the output, > to see what is different in the two responses. > > And then your browser manual may describe what input it expects, so that > the "wrong" output can be examined to see what the browser wants that > is not there. > > (Presumably -- it will be one of the things that is different between the > "good" and the "bad" responses.) > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Mar 31 12:52:34 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 31 Mar 2020 13:52:34 +0100 Subject: svg broken ! 
In-Reply-To: References: <20200327083413.GH20939@daoine.org> <20200327153111.GI20939@daoine.org> Message-ID: <20200331125234.GJ20939@daoine.org> On Mon, Mar 30, 2020 at 01:31:02PM +0500, shahzaib mushtaq wrote: Hi there, > PLease check this: > > https://pastebin.com/VZKHrM1u So, as I understand things: * the report is that your particular client acts the way you want it to when it requests the svg url from the "upstream" (origin) web server directly * your particular client does not act the way you want it to when it requests the equivalent svg url from the "edge" (nginx reverse proxy) web server The presumption is that this is the result of one single http request. The intention now is to capture the responses to the two requests (to origin and to edge) that the client makes, and to compare them to see what is different. And then to see if those differences should affect what the client does. This "pastebin" link shows the "origin" response being HTTP/1.1, and the "edge" response being HTTP/2. The previous "imgur" links show the "origin" response and the "edge" response both being HTTP/1.1. Can you do whatever it takes to make the same request to both servers, that matches what your client does, and copy-paste the full response into the email? Possibly "curl -i --http1.1" can mimic the client more closely? If the response body is the 210 bytes shown in an earlier imgur link, then just pasting it twice to confirm that nothing changed, should not be a problem. >From the first sets of data, I do see that the nginx response does not include a Content-Length header, where the origin response does. And the nginx response includes two Strict-Transport-Security headers, where the origin response includes just the one. The two responses seem to have different Cache-Control headers as well. There are other differences, which you can see better than I can since you have access to the extra data. Does your client care about Content-Length? Or duplicate Strict-Transport-Security headers? Or it the error-output-picture due to the missing "xlink" namespace in the svg file? (That latter should not be the case, if you are using the same client to access both urls -- it should be getting the same content in both cases.) You may want to investigate why your nginx is sending duplicate headers. Good luck with it, f -- Francis Daly francis at daoine.org
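If the doubled Strict-Transport-Security header turns out to be one copy from the origin plus one added on the edge, a common arrangement is to suppress the upstream copy and emit a single header from the edge. A minimal sketch, assuming the header is added in the proxying location (origin.example.com and the max-age value are placeholders, not taken from any configuration in this thread):

    location / {
        proxy_pass https://origin.example.com;
        # drop the origin's copy so only one HSTS header reaches the client
        proxy_hide_header Strict-Transport-Security;
        add_header Strict-Transport-Security "max-age=31536000" always;
    }

A missing Content-Length on the edge often just means the response was re-encoded on the fly (for example by gzip on the proxy) and is not necessarily an error in itself; the duplicate headers and the differing Cache-Control are the more promising leads.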