From nginx-forum at forum.nginx.org Fri May 1 08:18:10 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Fri, 01 May 2020 04:18:10 -0400 Subject: POST result: 404 In-Reply-To: <91801cf3-e172-e2a1-0e39-5adcd4fddc31@thomas-ward.net> References: <91801cf3-e172-e2a1-0e39-5adcd4fddc31@thomas-ward.net> Message-ID: It doesn't produce a "404" but it is saying "Cannot POST" ...: curl -X POST http://127.0.0.1:8080/puser/add Error
Cannot POST /puser/add
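A detail worth noting about the output above: that curl talks to 127.0.0.1:8080 directly, so nginx is not involved in this particular response, and the "Cannot POST" text comes from whatever application is listening on port 8080. One quick way to see which side is answering is to compare a request sent through nginx with one sent straight to the backend; the public hostname below is taken from the log lines and configuration later in this thread, so adjust it if it differs:

    curl -ik -X POST https://ggc.world/puser/add
    curl -i  -X POST http://127.0.0.1:8080/puser/add

If both return the same "Cannot POST /puser/add" body, the missing route is in the application behind nginx rather than in the nginx configuration itself.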
These are the corresponding lines in /var/log/nginx : 128.14.134.170 - - [01/May/2020:10:04:03 +0200] "GET / HTTP/1.1" 502 584 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36" 37.116.208.25 - - [01/May/2020:10:09:53 +0200] "GET / HTTP/2.0" 200 694 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36" 37.116.208.25 - - [01/May/2020:10:09:53 +0200] "GET /js/app.js HTTP/2.0" 200 147353 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safar$ 37.116.208.25 - - [01/May/2020:10:09:54 +0200] "GET /js/chunk-vendors.js HTTP/2.0" 200 4241853 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.404$ 37.116.208.25 - - [01/May/2020:10:09:54 +0200] "GET /sockjs-node/info?t=1588320594594 HTTP/2.0" 200 79 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/$ 37.116.208.25 - - [01/May/2020:10:10:15 +0200] "POST /puser/add HTTP/2.0" 404 137 "https://ggc.world/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/$ And this is the nginx configuration file: server { listen 443 ssl http2 default_server; server_name ggc.world; ssl_certificate /etc/letsencrypt/live/ggc.world/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/ggc.world/privkey.pem; # managed by Certbot ssl_trusted_certificate /etc/letsencrypt/live/ggc.world/chain.pem; ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot ssl_session_timeout 5m; #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; #ssl_stapling on; #ssl_stapling_verify on; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; #proxy_pass http://127.0.0.1:2000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } server { listen 80 default_server; listen [::]:80 default_server; error_page 497 https://$host:$server_port$request_uri; server_name ggc.world; return 301 https://$server_name$request_uri; access_log /var/log/nginx/ggcworld-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; #proxy_pass http://127.0.0.1:2000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } # https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/ # https://gist.github.com/uorat/10b15a32f3ffa3f240662b9b0fefe706 # http://nginx.org/en/docs/stream/ngx_stream_core_module.html upstream websocket { ip_hash; server localhost:3000; } server { listen 81; server_name ggc.world; #location / { location ~ ^/(websocket|websocket\/socket-io) { proxy_pass http://127.0.0.1:4201; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Forwared-For $remote_addr; proxy_set_header Host $host; proxy_redirect 
off; proxy_set_header X-Real-IP $remote_addr; } } # https://stackoverflow.com/questions/40516288/webpack-dev-server-with-nginx-proxy-pass upstream golang-webserver { ip_hash; server 127.0.0.1:2000; } server { listen 2999; server_name ggc.world; location / { #proxy_pass http://127.0.0.1:8080; proxy_pass http://golang-webserver; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287914,287919#msg-287919 From nginx-forum at forum.nginx.org Fri May 1 09:53:35 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Fri, 01 May 2020 05:53:35 -0400 Subject: POST result: 404 In-Reply-To: References: <91801cf3-e172-e2a1-0e39-5adcd4fddc31@thomas-ward.net> Message-ID: <1b5ca7d0cee053f9ab77e91fcd05d641.NginxMailingListEnglish@forum.nginx.org> I tried to curl POST the entire form: (base) marco at pc01:/var/log/nginx$ curl -X POST -F 'first_name=pinco' -F 'last_name=pallo' -F 'company_name=Company' -F 'email=pinco.pallo at company.com' -F 'tel=111111111' http://127.0.0.1:8080/puser/add Error
Cannot POST /puser/add
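For completeness, if the handler for /puser/add is meant to live in a separate backend process rather than in the application on port 8080, one way to express that in the HTTPS server block shown earlier is a dedicated location for the API prefix. The sketch below reuses the golang-webserver upstream on port 2000 that already appears in the posted configuration; whether that process actually implements /puser/add is an assumption that needs checking:

    location /puser/ {
        # assumption: the API routes are served by the backend on port 2000,
        # not by the frontend dev server on port 8080
        proxy_pass http://127.0.0.1:2000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

If the application on port 8080 is supposed to handle the POST itself, the fix belongs in that application (adding the /puser/add route), and no nginx change will help.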
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287914,287920#msg-287920 From vbl5968 at gmail.com Sun May 3 17:04:49 2020 From: vbl5968 at gmail.com (Vincent Blondel) Date: Sun, 3 May 2020 19:04:49 +0200 Subject: CHACHA20-POLY1305 Server Preference NOK with tlsv1.3 Message-ID: Hello, Trying to get CHACHA20-POLY1305 Server Preference ... Working with tlsv1.2 but NOK with tlsv1.3 ** Tried with a Custom OpenSSL.conf ServerPreference,PrioritizeChaCha OPENSSL_CONF=$HOME/conf/openssl.conf $HOME/bin/nginx.exe [default_conf] ssl_conf = ssl_sect [ssl_sect] system_default = system_default_sect [system_default_sect] Ciphersuites = TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384 Options = ServerPreference,PrioritizeChaCha ** Tried by patching src/event/ngx_event_openssl.c - SSL_CTX_set_options(ssl->ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); + SSL_CTX_set_options(ssl->ctx, SSL_OP_CIPHER_SERVER_PREFERENCE | SSL_OP_PRIORITIZE_CHACHA); ** Tried by patching src/event/ngx_event_openssl.c nginx -s reload nginx: [emerg] SSL_CTX_set_cipher_list("TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_CCM_SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match) ssl_prefer_server_ciphers on; ssl_protocols TLSv1.3; ssl_ciphers TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_CCM_SHA256; my config is working like a charm with tlsv1.2 but i cannot get CHACHA20 prioritized with tlsv1.3 ... hence my question ...how to do with nginx version: nginx/1.18.0 ? tx, V. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun May 3 21:21:23 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 May 2020 00:21:23 +0300 Subject: CHACHA20-POLY1305 Server Preference NOK with tlsv1.3 In-Reply-To: References: Message-ID: <20200503212123.GY20357@mdounin.ru> Hello! On Sun, May 03, 2020 at 07:04:49PM +0200, Vincent Blondel wrote: > Hello, > > Trying to get CHACHA20-POLY1305 Server Preference ... Working with tlsv1.2 > but NOK with tlsv1.3 > > ** Tried with a Custom OpenSSL.conf ServerPreference,PrioritizeChaCha > > OPENSSL_CONF=$HOME/conf/openssl.conf $HOME/bin/nginx.exe > > [default_conf] > ssl_conf = ssl_sect > [ssl_sect] > system_default = system_default_sect > [system_default_sect] > Ciphersuites = > TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384 > Options = ServerPreference,PrioritizeChaCha > > ** Tried by patching src/event/ngx_event_openssl.c > > - SSL_CTX_set_options(ssl->ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); > + SSL_CTX_set_options(ssl->ctx, SSL_OP_CIPHER_SERVER_PREFERENCE | > SSL_OP_PRIORITIZE_CHACHA); > > ** Tried by patching src/event/ngx_event_openssl.c There is no need to patch anything as long as you have Options set in openssl.conf. > nginx -s reload > nginx: [emerg] > SSL_CTX_set_cipher_list("TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_CCM_SHA256") > failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher > match) > > ssl_prefer_server_ciphers on; > ssl_protocols TLSv1.3; > ssl_ciphers > TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_CCM_SHA256; > > my config is working like a charm with tlsv1.2 but i cannot get CHACHA20 > prioritized with tlsv1.3 ... hence my question ...how to do with nginx > version: nginx/1.18.0 ? 
The problem is that OpenSSL's SSL_CTX_set_cipher_list() does not recognize any ciphers in the cipher list you've provided in the ssl_ciphers directive, hence the error. You have to provide at least one valid cipher. Note that OpenSSL's SSL_CTX_set_cipher_list() does not recognize any TLSv1.3 ciphers (and instead enables them by default), hence you have to use at least one TLSv1.2 cipher listed. -- Maxim Dounin http://mdounin.ru/ From vbl5968 at gmail.com Mon May 4 05:49:26 2020 From: vbl5968 at gmail.com (Vincent Blondel) Date: Mon, 4 May 2020 07:49:26 +0200 Subject: CHACHA20-POLY1305 Server Preference NOK with tlsv1.3 In-Reply-To: <20200503212123.GY20357@mdounin.ru> References: <20200503212123.GY20357@mdounin.ru> Message-ID: thanks for the update Maxim but unfortunately still nok ... my openssl.conf [default_conf] ssl_conf = ssl_sect [ssl_sect] system_default = system_default_sect [system_default_sect] Options = ServerPreference,PrioritizeChaCha [req] distinguished_name = req_distinguished_name req_extensions = v3_req prompt = no [req_distinguished_name] C = DE CN = www.example.com [v3_req] keyUsage = keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1 = www.example.com my nginx.conf ssl_prefer_server_ciphers on; ssl_protocols TLSv1.3; ssl_ciphers ECDHE+CHACHA20:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-CHACHA20-POLY1305; nginx is no longe crying on ssl_ciphers syntax but CHACHA20 is still NOT the Cipher challenged :-( -V. On Sun, May 3, 2020 at 11:21 PM Maxim Dounin wrote: > Hello! > > On Sun, May 03, 2020 at 07:04:49PM +0200, Vincent Blondel wrote: > > > Hello, > > > > Trying to get CHACHA20-POLY1305 Server Preference ... Working with > tlsv1.2 > > but NOK with tlsv1.3 > > > > ** Tried with a Custom OpenSSL.conf ServerPreference,PrioritizeChaCha > > > > OPENSSL_CONF=$HOME/conf/openssl.conf $HOME/bin/nginx.exe > > > > [default_conf] > > ssl_conf = ssl_sect > > [ssl_sect] > > system_default = system_default_sect > > [system_default_sect] > > Ciphersuites = > > > TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384 > > Options = ServerPreference,PrioritizeChaCha > > > > ** Tried by patching src/event/ngx_event_openssl.c > > > > - SSL_CTX_set_options(ssl->ctx, SSL_OP_CIPHER_SERVER_PREFERENCE); > > + SSL_CTX_set_options(ssl->ctx, SSL_OP_CIPHER_SERVER_PREFERENCE | > > SSL_OP_PRIORITIZE_CHACHA); > > > > ** Tried by patching src/event/ngx_event_openssl.c > > There is no need to patch anything as long as you have Options set > in openssl.conf. > > > nginx -s reload > > nginx: [emerg] > > > SSL_CTX_set_cipher_list("TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_CCM_SHA256") > > failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no > cipher > > match) > > > > ssl_prefer_server_ciphers on; > > ssl_protocols TLSv1.3; > > ssl_ciphers > > > TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_CCM_SHA256; > > > > my config is working like a charm with tlsv1.2 but i cannot get CHACHA20 > > prioritized with tlsv1.3 ... hence my question ...how to do with nginx > > version: nginx/1.18.0 ? > > The problem is that OpenSSL's SSL_CTX_set_cipher_list() does not > recognize any ciphers in the cipher list you've provided in the > ssl_ciphers directive, hence the error. You have to provide at > least one valid cipher. 
> > Note that OpenSSL's SSL_CTX_set_cipher_list() does not recognize > any TLSv1.3 ciphers (and instead enables them by default), hence > you have to use at least one TLSv1.2 cipher listed. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 4 08:26:43 2020 From: nginx-forum at forum.nginx.org (Flinou) Date: Mon, 04 May 2020 04:26:43 -0400 Subject: Nginx 1.19 Message-ID: Hello, I know that, usually, mainline versions are supposed to be released the month after the stable one. However, based on https://trac.nginx.org/nginx/roadmap, Nginx 1.19 is planned for 04/30/2021. Is that normal ? Thank you in advance, Antoine Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287939,287939#msg-287939 From me at mheard.com Mon May 4 08:37:15 2020 From: me at mheard.com (Mathew Heard) Date: Mon, 4 May 2020 18:37:15 +1000 Subject: Nginx 1.19 In-Reply-To: References: Message-ID: Sad but probably due to HTTP/3, a long anticipated feature that's anything but simple. On Mon, 4 May 2020 at 18:27, Flinou wrote: > Hello, > > I know that, usually, mainline versions are supposed to be released the > month after the stable one. However, based on > https://trac.nginx.org/nginx/roadmap, Nginx 1.19 is planned for > 04/30/2021. > Is that normal ? > > Thank you in advance, > > Antoine > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,287939,287939#msg-287939 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabio.grasso at gmail.com Mon May 4 09:28:16 2020 From: fabio.grasso at gmail.com (Fabio Grasso) Date: Mon, 4 May 2020 11:28:16 +0200 Subject: SMTP proxy and authentication on backend Message-ID: Hello there, I'm new to this mailing list, so hi everybody ;-) I'm implementing a mail proxy based on nginx. I wrote an authentication backend in LUA and it works fine. With IMAP I've no problem, everything works fine. With SMTP I'm facing the well noted "limitation" about the authentication on the backend. I know that nginx doesn't pass username and password when proxying SMTP connection (unlike what happens with POP3 / IMAP) and this is creating problems for me. My SMTP server is based on HCL Domino, I can configure it for accept connections from nginx without relay check but this still creates a security problem for me: I cannot prevent someone from sending e-mails by declaring a sender other than the one they logged in with (spoofing). >From what I understand the only thing that supports nginx is XCLIENT, which however is not supported by HCL Domino (from what I found it seems that it is supported only by postfix and derivatives). I'm a bit surprised that nginx doesn't support autentication on SMTP backend (at least with an option for enable or disable it), since this limitation was reported 10 years ago (i.e. I've found this message: http://mailman.nginx.org/pipermail/nginx/2010-February/019029.html) I'm looking for solution and so I'm asking you if you have any suggestions. I was thinking about two main option: 1) insert a postfix between my reverse proxy and my mail server. But this will add some complexity and another (useless) hop. 
Moreover I need to manage somehow sorting mail on postfix by domain (the one that sends my authentication server in the Auth-Server / Auth-Port header). Is there any way to pass this information to postfix, for example by including it in XCLIENT? I see that XCLIENT also supports DESTADDR and DESTPORT as attributes, but it doesn't seem to me that there is any way to set nginx to use them 2) I found some "patches" for nginx that add this functionality, for example: https://github.com/guyguy333/nginx/commit/09ac17efa8cc28bf758d03ddafbccea663fa4779 https://github.com/Zauberzeilen/nginx-with-backend-smtp-auth Are there experiences on this? Can they be considered stable? It is not a problem to compile nginx with these changes, what worries me however is that any changes in the source in the future may not work with this patch and in fact risk of limiting myself the possibility of keeping the version of nginx updated (with all the consequences in case of major security patches) Files touched are not so frequently changed on official nginx code: src/mail/ngx_mail.h and src/mail/ngx_mail_proxy_module.c have the last commit 5 years ago, but obviously I have no guarantee that they will not be changed in the future... 2bis) this is a curiosity: why were these patches never included in the nginx code? I see that the I'm not the only one facing this limitation, there are a lot of reference, like these: http://mailman.nginx.org/pipermail/nginx/2008-April/004234.html https://www.ruby-forum.com/topic/1045106 http://mailman.nginx.org/pipermail/nginx/2010-February/019028.html http://mailman.nginx.org/pipermail/nginx/2010-April/020027.html http://mailman.nginx.org/pipermail/nginx/2010-November/023555.html http://mailman.nginx.org/pipermail/nginx-devel/2012-April/002074.html Anyone has expierience with this? How have you solved? Thanks, Fabio From maxim at nginx.com Mon May 4 09:54:09 2020 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 4 May 2020 12:54:09 +0300 Subject: Nginx 1.19 In-Reply-To: References: Message-ID: Hi Antoine, On 04.05.2020 11:26, Flinou wrote: > Hello, > > I know that, usually, mainline versions are supposed to be released the > month after the stable one. However, based on > https://trac.nginx.org/nginx/roadmap, Nginx 1.19 is planned for 04/30/2021. > Is that normal ? > Yes, we are planning to have 1.19.0 somewhere in May (not listed in trac yet). What you see in trac is for 1.19 mainline "branch". Thanks, Maxim -- Maxim Konovalov From martin.grigorov at gmail.com Mon May 4 14:45:24 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Mon, 4 May 2020 17:45:24 +0300 Subject: Nginx 1.19 In-Reply-To: References: Message-ID: Hi Maxim, On Mon, May 4, 2020 at 12:54 PM Maxim Konovalov wrote: > Hi Antoine, > > On 04.05.2020 11:26, Flinou wrote: > > Hello, > > > > I know that, usually, mainline versions are supposed to be released the > > month after the stable one. However, based on > > https://trac.nginx.org/nginx/roadmap, Nginx 1.19 is planned for > 04/30/2021. > > Is that normal ? > > > Yes, we are planning to have 1.19.0 somewhere in May (not listed in trac > yet). > Somewhere in May 2020 or May 2021 ? At the moment Trac is not reachable (error 502) but Flinou's message says 2021. Regards, Martin > > What you see in trac is for 1.19 mainline "branch". 
> > Thanks, > > Maxim > > -- > Maxim Konovalov > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon May 4 14:56:10 2020 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 4 May 2020 17:56:10 +0300 Subject: Nginx 1.19 In-Reply-To: References: Message-ID: <9cb47c45-461e-d9bf-7947-afbbc36b5e82@nginx.com> On 04.05.2020 17:45, Martin Grigorov wrote: > Hi Maxim, > > On Mon, May 4, 2020 at 12:54 PM Maxim Konovalov > wrote: > > Hi Antoine, > > On 04.05.2020 11:26, Flinou wrote: > > Hello, > > > > I know that, usually, mainline versions are supposed to be > released the > > month after the stable one. However, based on > > https://trac.nginx.org/nginx/roadmap, Nginx 1.19 is planned for > 04/30/2021. > > Is that normal ? > > > Yes, we are planning to have 1.19.0 somewhere in May (not listed in trac > yet). > > > Somewhere in May 2020 or May 2021 ? > At the moment Trac is not reachable (error 502) but?Flinou's message > says 2021. > May 2020. -- Maxim Konovalov From Bryan.Townsend at doggett.com Mon May 4 15:34:32 2020 From: Bryan.Townsend at doggett.com (Bryan Townsend) Date: Mon, 4 May 2020 15:34:32 +0000 Subject: Needing assistance with Nginx 1.17.9 Windows install Message-ID: <805f8d02bc4546578321770cbeae4874@degmail2k1303.doggettinc.com> Hello. I am very new to Nginx and need some assistance with my config. This is on a windows server and will be listening on 80,18080, 443 and 18081. Traffic comes in to www.myserver.com and is passed to my Windows Nginx server by the firewall. I need to be able to listen and pass traffic on both 443 and 18081 for HTTPS traffic. I also need to know how to specify my certificate paths on a windows server so SSL works properly. The server also listens on port 80 and 18080 for HTTP traffic. I am completely new to Nginx and am quite lost in how this config should be structured. This is my what i have been able to put together from examples I found on the web and it doesnt fully work. The way it's supposed to work is that the traffic comes in on 443 for the initial request. The webpage presented to the client then loads a page over 18081 and that is what allows the logon to the webpage. The same happens if traffic comes in over 80. The website replies with a webpage that then loads over 18080. 
#user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http{ server{ listen 80; server_name www.mywebsite.com; location / { proxy_pass http://1.2.3.4/; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } listen 18080; server_name 1.2.3.4:18080; location / { proxy_pass http://1.2.3.4/; listen 18081; server_name 1.2.3.4:18081; location / { proxy_pass https://1.2.3.4/; listen 443; server_name 1.2.3.4; location / { proxy_pass https://1.2.3.4/; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } } Bryan Townsend Corporate IT Systems [cid:image005.jpg at 01CF7CD9.9A23F930] 9111 North Freeway | Houston, TX 77037 281-249-4622| | Bryan.Townsend at doggett.com | www.doggett.com Service Desk Support |(281) 249-4590 or x22835 | helpdesk at doggett.com | www.doggett.com This email message, including any attachments, is for the sole use of the intended recipient(s) and may be confidential, privileged, proprietary or otherwise protected from disclosure. If you received this message in error, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1146 bytes Desc: image001.jpg URL: From mdounin at mdounin.ru Mon May 4 15:54:00 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 May 2020 18:54:00 +0300 Subject: CHACHA20-POLY1305 Server Preference NOK with tlsv1.3 In-Reply-To: References: <20200503212123.GY20357@mdounin.ru> Message-ID: <20200504155400.GA20357@mdounin.ru> Hello! On Mon, May 04, 2020 at 07:49:26AM +0200, Vincent Blondel wrote: > thanks for the update Maxim but unfortunately still nok ... > > my openssl.conf > > [default_conf] > ssl_conf = ssl_sect > [ssl_sect] > system_default = system_default_sect > [system_default_sect] > Options = ServerPreference,PrioritizeChaCha > [req] > distinguished_name = req_distinguished_name > req_extensions = v3_req > prompt = no > [req_distinguished_name] > C = DE > CN = www.example.com > [v3_req] > keyUsage = keyEncipherment, dataEncipherment > extendedKeyUsage = serverAuth > subjectAltName = @alt_names > [alt_names] > DNS.1 = www.example.com The openssl.conf looks wrong to me. See https://trac.nginx.org/nginx/ticket/1445#comment:8 for a working example. Quoting it here: : openssl_conf = default_conf : : [default_conf] : ssl_conf = ssl_sect : : [ssl_sect] : system_default = system_default_sect : : [system_default_sect] : Options = PrioritizeChaCha Note the "openssl_conf = default_conf" before the first named section. -- Maxim Dounin http://mdounin.ru/ From vbl5968 at gmail.com Mon May 4 18:10:38 2020 From: vbl5968 at gmail.com (Vincent Blondel) Date: Mon, 4 May 2020 20:10:38 +0200 Subject: CHACHA20-POLY1305 Server Preference NOK with tlsv1.3 In-Reply-To: <20200504155400.GA20357@mdounin.ru> References: <20200503212123.GY20357@mdounin.ru> <20200504155400.GA20357@mdounin.ru> Message-ID: I just copy/pasted/replaced the content of my openssl.conf with the proposal in this mail ... still OK with tslv1.2 and NOK with tlsv1.3 ... openssl is up to date and seems working fine ... 
$ openssl version OpenSSL 1.1.1f 31 Mar 2020 $ openssl ciphers -v TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD TLS_AES_128_CCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESCCM(128) Mac=AEAD ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-ECDSA-AES256-CCM TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESCCM(256) Mac=AEAD ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-CCM TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESCCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES256-SHA TLSv1 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES256-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-SHA TLSv1 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES128-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD AES256-CCM TLSv1.2 Kx=RSA Au=RSA Enc=AESCCM(256) Mac=AEAD AES128-GCM-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(128) Mac=AEAD AES128-CCM TLSv1.2 Kx=RSA Au=RSA Enc=AESCCM(128) Mac=AEAD AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256 AES128-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA256 AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1 AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=DH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD DHE-RSA-AES256-CCM TLSv1.2 Kx=DH Au=RSA Enc=AESCCM(256) Mac=AEAD DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-CCM TLSv1.2 Kx=DH Au=RSA Enc=AESCCM(128) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1 PSK-AES256-GCM-SHA384 TLSv1.2 Kx=PSK Au=PSK Enc=AESGCM(256) Mac=AEAD PSK-CHACHA20-POLY1305 TLSv1.2 Kx=PSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD PSK-AES256-CCM TLSv1.2 Kx=PSK Au=PSK Enc=AESCCM(256) Mac=AEAD PSK-AES128-GCM-SHA256 TLSv1.2 Kx=PSK Au=PSK Enc=AESGCM(128) Mac=AEAD PSK-AES128-CCM TLSv1.2 Kx=PSK Au=PSK Enc=AESCCM(128) Mac=AEAD PSK-AES256-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(256) Mac=SHA1 PSK-AES128-CBC-SHA256 TLSv1 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA256 PSK-AES128-CBC-SHA SSLv3 Kx=PSK Au=PSK Enc=AES(128) Mac=SHA1 DHE-PSK-AES256-GCM-SHA384 TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESGCM(256) Mac=AEAD DHE-PSK-CHACHA20-POLY1305 TLSv1.2 Kx=DHEPSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD DHE-PSK-AES256-CCM TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESCCM(256) Mac=AEAD DHE-PSK-AES128-GCM-SHA256 TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESGCM(128) Mac=AEAD DHE-PSK-AES128-CCM TLSv1.2 Kx=DHEPSK Au=PSK Enc=AESCCM(128) Mac=AEAD DHE-PSK-AES256-CBC-SHA SSLv3 Kx=DHEPSK Au=PSK Enc=AES(256) Mac=SHA1 DHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=DHEPSK Au=PSK Enc=AES(128) Mac=SHA256 
DHE-PSK-AES128-CBC-SHA SSLv3 Kx=DHEPSK Au=PSK Enc=AES(128) Mac=SHA1 ECDHE-PSK-CHACHA20-POLY1305 TLSv1.2 Kx=ECDHEPSK Au=PSK Enc=CHACHA20/POLY1305(256) Mac=AEAD ECDHE-PSK-AES256-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(256) Mac=SHA1 ECDHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA256 ECDHE-PSK-AES128-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA1 On Mon, May 4, 2020 at 5:54 PM Maxim Dounin wrote: > Hello! > > On Mon, May 04, 2020 at 07:49:26AM +0200, Vincent Blondel wrote: > > > thanks for the update Maxim but unfortunately still nok ... > > > > my openssl.conf > > > > [default_conf] > > ssl_conf = ssl_sect > > [ssl_sect] > > system_default = system_default_sect > > [system_default_sect] > > Options = ServerPreference,PrioritizeChaCha > > [req] > > distinguished_name = req_distinguished_name > > req_extensions = v3_req > > prompt = no > > [req_distinguished_name] > > C = DE > > CN = www.example.com > > [v3_req] > > keyUsage = keyEncipherment, dataEncipherment > > extendedKeyUsage = serverAuth > > subjectAltName = @alt_names > > [alt_names] > > DNS.1 = www.example.com > > The openssl.conf looks wrong to me. See > https://trac.nginx.org/nginx/ticket/1445#comment:8 for a working > example. Quoting it here: > > : openssl_conf = default_conf > : > : [default_conf] > : ssl_conf = ssl_sect > : > : [ssl_sect] > : system_default = system_default_sect > : > : [system_default_sect] > : Options = PrioritizeChaCha > > Note the "openssl_conf = default_conf" before the first named > section. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 4 21:07:32 2020 From: nginx-forum at forum.nginx.org (abr3) Date: Mon, 04 May 2020 17:07:32 -0400 Subject: Strange behavior on proxy cache at high load spike Message-ID: <7c6f42adf70054497831e7e951f00eb0.NginxMailingListEnglish@forum.nginx.org> Hi, this bugs me for some time now. I have nginx 1.16.0 configured as following on proxy cache: proxy_cache_path /dev/shm/nginx_cache levels=1:2 keys_zone=proxy:1024m max_size=1024m inactive=60m; proxy_temp_path /dev/shm/nginx_proxy_tmp; proxy_cache_use_stale updating; proxy_cache_lock on; proxy_cache_lock_timeout 30s; Most of the time all is fine and working as expected. There is some specialty in the deployment setup where some expected spikes in requests (end clients updating daily data) to few locations occur. Response size varies 1M-1.5M non-gziped. 
Log snippet from such spike: [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 445984 cache: HIT request time: 50.211 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 780472 cache: HIT request time: 52.891 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 85432 cache: HIT request time: 33.284 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 57920 cache: HIT request time: 34.957 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 401096 cache: HIT request time: 49.991 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 244712 cache: HIT request time: 48.412 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 101360 cache: HIT request time: 34.955 sec [2020-05-03T00:00:44] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 102808 cache: HIT request time: 34.753 sec ... [2020-03-24T00:02:16] "GET /api/34/guide?date=2020-05-03 HTTP/1.0" 200 1526025 cache: HIT request time: 48.671 sec Monitoring du on cache location shows max 1.1G, like: 1.1G /dev/shm/nginx_cache 0 /dev/shm/nginx_proxy_tmp After 2minutes response 'stabilizes' with correct size (in this example 1526025). Problem is also amplified due clients validate response and retry progressively if corrupted. There are no weird log lines in error log or linux (centos) messages, also there is no cache 'updating', just hits (I guess this omits upstream servers issue). Is it possible we have issue with reading cached entries from /dev/shm during peak times? I would kindly ask for hints where possibly to start looking and debugging? Big thanks in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287951,287951#msg-287951 From mdounin at mdounin.ru Mon May 4 23:42:20 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 May 2020 02:42:20 +0300 Subject: CHACHA20-POLY1305 Server Preference NOK with tlsv1.3 In-Reply-To: References: <20200503212123.GY20357@mdounin.ru> <20200504155400.GA20357@mdounin.ru> Message-ID: <20200504234220.GB20357@mdounin.ru> Hello! On Mon, May 04, 2020 at 08:10:38PM +0200, Vincent Blondel wrote: > I just copy/pasted/replaced the content of my openssl.conf with the > proposal in this mail ... still OK with tslv1.2 and NOK with tlsv1.3 ... > > openssl is up to date and seems working fine ... Some things to consider: - Make sure the openssl.conf you are editing is the one which is actually used. No errors are produced if loading openssl conf fails, and this somewhat complicates things. Given that your first message in this thread suggests you are trying to do this on Windows, trying to use variables when starting nginx might complicate things. Also it might not be trivial to trace if the file is actually used (on unix you can use things like ktrace / strace / truss). - Make sure there are no non-text things in the openssl.conf such as byte order marks. Some editors tend to add them, and this often breaks things. - Make sure you are testing things correctly. Testing cipher preference, especially for TLSv1.3 ciphers, might be non-trivial. Simplier test might be to disable some Ciphersuites in the openssl.conf, and make sure these are actually disabled. And once you see them disabled, start playing with PrioritizeChaCha. -- Maxim Dounin http://mdounin.ru/ From themadbeaker at gmail.com Tue May 5 00:54:43 2020 From: themadbeaker at gmail.com (J.R.) 
Date: Mon, 4 May 2020 19:54:43 -0500 Subject: Strange behavior on proxy cache at high load spike Message-ID: > After 2minutes response 'stabilizes' with correct size (in this example > 1526025). Problem is also amplified due clients validate response and retry > progressively if corrupted. What is the response your upstream is sending back? If the 'corrupted' data is still a 200, then nginx will cache that... You need to make sure it's sending back a 5xx if it's overloaded or whatever error would be relevant. You might want to consider expanding your 'use_stale' like: 'proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_503 http_504;' Why are you wasting duplicating the data in the same SHM? Just set 'use_temp_path=off' in the proxy_cache_path and be done with it. What is the valid cache time for the content? (i.e. the headers) If they are missing or things are set to 'no cache', then you are obviously going to have issues... From nginx-forum at forum.nginx.org Wed May 6 15:04:52 2020 From: nginx-forum at forum.nginx.org (pgn) Date: Wed, 06 May 2020 11:04:52 -0400 Subject: assigning different SSL cert -- per ingress/listener IP? Message-ID: <81e7b23dca0eef67369dacd6d903a564.NginxMailingListEnglish@forum.nginx.org> I have a single Nginx server configured to listen on two IPs on my VPS host -- an external/public IP (X.X.X.55) and an internal/LAN IP (10.10.10.55). Atm, it's a *single* "server_name" (host.example.com) for both IPs ... handled by a split-horizon DNS that returns the IP address for that hostname depending on the query origin -- public net, or internal LAN. It works as expected. I'd _like_ to setup different SSL cert/key/CA handshake configs to be used -- depending on the ingress IP. Specifically, for ingress via internal/LAN IP (10.10.10.55), I want to use an internally generated, self-signed cert -- from my own/local CA -- with ssl verify ON, and for ingress via external/public IP (X.X.X.55), I want to use a LetsEncrypt-generated public cert, with ssl verify OFF. Is this^ possible with Nginx config? Any examples? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287957,287957#msg-287957 From themadbeaker at gmail.com Wed May 6 20:21:05 2020 From: themadbeaker at gmail.com (J.R.) Date: Wed, 6 May 2020 15:21:05 -0500 Subject: assigning different SSL cert -- per ingress/listener IP? Message-ID: > I'd _like_ to setup different SSL cert/key/CA handshake configs to be used > -- depending on the ingress IP. You can specify an IP with the listen directive: http://nginx.org/en/docs/http/ngx_http_core_module.html#listen So you would end up with two similar copies of each 'server'... The only difference in directives being listen and the ssl certs... From pankaj at releasemanager.in Wed May 6 22:12:07 2020 From: pankaj at releasemanager.in (pankaj at releasemanager.in) Date: Wed, 06 May 2020 15:12:07 -0700 Subject: ipv6 name resolution dependency/requirement Message-ID: <20200506151207.0a504150a66b62e4c9ddb6488e6496fb.6124bc9765.wbe@email13.godaddy.com> An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu May 7 00:59:29 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Wed, 06 May 2020 20:59:29 -0400 Subject: TCP/Status CLOSE_WAIT for nginx, very long time Message-ID: <5f7af0fbf1ada144ce6a5c9dff3497ab.NginxMailingListEnglish@forum.nginx.org> Hello, I run netstat, find one of nginx processes, its status always is CLOSE_WAIT, never change for very long time, how to fix this? thanks. 
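One diagnostic step that may help narrow this down is to capture which sockets are stuck and which process owns them. On a Linux host with iproute2 that is roughly the following (filtering on the process name "nginx" is an assumption):

    ss -tanp state close-wait | grep nginx

or, with the older net-tools:

    netstat -tanp | grep nginx | grep CLOSE_WAIT

A few CLOSE_WAIT sockets that disappear within seconds are normal; sockets that stay in that state for many minutes, as described above, usually point at a leak, which is what the replies later in this thread discuss.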
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287966,287966#msg-287966 From pankaj at releasemanager.in Thu May 7 04:27:39 2020 From: pankaj at releasemanager.in (pankaj at releasemanager.in) Date: Wed, 06 May 2020 21:27:39 -0700 Subject: ipv6 name resolution dependency/requirement Message-ID: <20200506212739.0a504150a66b62e4c9ddb6488e6496fb.5616b81b9a.wbe@email13.godaddy.com> An HTML attachment was scrubbed... URL: From pranav.lal at gmail.com Fri May 8 13:13:35 2020 From: pranav.lal at gmail.com (Pranav Lal) Date: Fri, 8 May 2020 18:43:35 +0530 Subject: Seeking help in setting a reverse web sockets proxy Message-ID: <01da01d6253a$7d0811a0$771834e0$@gmail.com> Hi all, I am building a chatbot using python's remi GUI library. This serves controls in the form of HTML which can then be opened in a browser. I want to use nginx as a reverse proxy to serve this application. I have tried to configure the virtual host without success. I am not getting any errors in error.log but my bot is not returning any answers. If I run the bot without proxying, it works. The nginx configuration file is valid and the first page of the app is served without any problems. See below for the configuration file. Note: The server is running on my local ubuntu 19.10 box. server { listen 22000; server_name ws.cisobot.com; location / { proxy_pass http://ws-backend; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_http_version 1.1; #proxy_set_header Upgrade $http_upgrade; proxy_set_header Upgrade "websocket"; proxy_set_header Connection "upgrade"; } } upstream ws-backend { # enable sticky session based on IP ip_hash; server 127.0.0.1:21000; } p From nginx-forum at forum.nginx.org Sun May 10 03:34:49 2020 From: nginx-forum at forum.nginx.org (essenz) Date: Sat, 09 May 2020 23:34:49 -0400 Subject: Question about proxy_cache_min_uses Message-ID: When I did a search for this I only found one thread, but it was in Russian and hard to translate. I currently playing with an image cache for application frontend to a "massive" amount of proxied images. With proxy_cache_min_uses=1 my cache grows extremely fast and I cap it at 100G (for disk performance) and I get an average cache hit rate of 6%. With proxy_cache_min_uses=2 it grows much slower and actually plateaus at 23G with a cache hit rate of 5%. My concern is as follows, proxy_cache_min_uses=1 effectively caches everything, which is too much... But proxy_cache_min_uses=2 doesn't cache enough. I'm struggling to understand exactly how proxy_cache_min_uses works, by setting it to 2 nginx needs to somehow know that 2 requests were made for a given piece of content. But how does it do that? And over what timeframe? 2 requests in the current connection buffer, 2 requests over the past 5 mins, etc.,. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287983,287983#msg-287983 From themadbeaker at gmail.com Sun May 10 14:03:20 2020 From: themadbeaker at gmail.com (J.R.) Date: Sun, 10 May 2020 09:03:20 -0500 Subject: Question about proxy_cache_min_uses Message-ID: > My concern is as follows, proxy_cache_min_uses=1 effectively caches > everything, which is too much... But proxy_cache_min_uses=2 doesn't cache > enough. I'm struggling to understand exactly how proxy_cache_min_uses works, > by setting it to 2 nginx needs to somehow know that 2 requests were made for > a given piece of content. But how does it do that? And over what timeframe? 
> 2 requests in the current connection buffer, 2 requests over the past 5 > mins, etc.,. First to clarify how many hits are cached... The first request for content no matter what can never be cached (because obviously nginx has no idea what the resources are). For min=1, then the second request (and then on) would hit the nginx cache... For min=2, it would technically be the 3rd request for a resource. This prevents your cache from filling up from single-requests for something. The time frame for the caching is controlled by two mechanisms... FIRST, and probably most important is the 'proxy_cache_path' directive, specifically the 'inactive' setting. Excluding all the other cache settings for now, 'inactive' will expire content if it's not re-requested within the specified time. SECOND, is the actual cache headers for your content, of if you don't have any relevant headers the 'proxy_cache_valid' setting. One super IMPORTANT note, remember that nginx is caching the content AND the header... So you DON'T want to use a cache-control 'max_age=xxx' value (which is the number of seconds till it expires) because it will cache the first instance and it will never get decremented until the cached copy happens to get updated. Instead it's better to use the 'last-modified' and 'Expires' headers that specifies a fixed date/time. Having a 'last-modified' allows for 304 responses (assuming you have everything configured correctly). The 'expires' tag also tell nginx's cache how long the content is valid for, superseding the proxy_cache_valid setting. Here's how it all breaks down... I'll use one of my sites as an example... My content changes every 3 hours due to an update from a data feed... So... All my caching headers will specify that future point in time when the content will update. The cache headers tell nginx when its cached copy is no longer valid and it needs to fetch a fresh copy. If you don't have any caching headers then as I mentioned above nginx will cache based on the 'proxy_cache_valid' defaults. If by some freak accident you have your headers set to no-cache, then nothing would get cached. Since I know my content is valid for a max of 3 hours, I also set the 'inactive' setting to 3 hours since my 'max_size' in the proxy_cache_path never reaches near its capacity, and I would rather use the disk space then spend the processing power having to re-generate the page. If storage was a concern, then one could lower the 'inactive' setting so that only frequently requested content would end up staying in the cache, while the rest would expire from inactivity. IMPORTANT NOTE: The 'inactive' setting supersedes your cache headers! Even if your cache headers content is valid for a month, if your 'inactive' setting is set for 1 hour, and nobody requests it within an hour, then nginx WILL expire that content. I also have my 'proxy_cache_min_uses' set to 2, to prevent caching a bunch of content that might only get hit once. So, the big take away is to make sure your caching headers are good first, then to set the nginx 'inactive' setting to a time-span that you thing is frequent enough that people will be requesting the content, but not so low that active content is being prematurely expired. If its set too high, then your cache will fill up to the max value and nginx will clean out the oldest requested content first which is okay, but you are probably holding more content then you need in the cache if its always 100% (or your cache size is too small). 
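To put the pieces above into one place, a minimal sketch along the lines described might look like the following; the zone name, the backend name, the sizes and the 3-hour figure are illustrative values, not recommendations:

    # keyed cache storage; 'inactive' evicts anything not requested
    # within 3 hours, regardless of what the response headers say
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:50m
                     max_size=10g inactive=3h use_temp_path=off;

    server {
        ...
        location / {
            proxy_cache mycache;
            # only store an item once it has been requested more than once
            proxy_cache_min_uses 2;
            # fallback validity if the upstream sends no caching headers
            proxy_cache_valid 200 3h;
            proxy_pass http://backend;
        }
    }

On the upstream side, the fixed-date style of headers described above would be something like this (the dates are placeholders):

    Last-Modified: Sun, 10 May 2020 06:00:00 GMT
    Expires: Sun, 10 May 2020 09:00:00 GMT
    Cache-Control: public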
From mdounin at mdounin.ru Sun May 10 17:35:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 10 May 2020 20:35:59 +0300 Subject: TCP/Status CLOSE_WAIT for nginx, very long time In-Reply-To: <5f7af0fbf1ada144ce6a5c9dff3497ab.NginxMailingListEnglish@forum.nginx.org> References: <5f7af0fbf1ada144ce6a5c9dff3497ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200510173559.GG20357@mdounin.ru> Hello! On Wed, May 06, 2020 at 08:59:29PM -0400, q1548 wrote: > I run netstat, find one of nginx processes, its status always is CLOSE_WAIT, > never change for very long time, how to fix this? thanks. Sockets in the CLOSE_WAIT states for a long time usually indicate a socket leak. First of all, make sure you are using latest nginx version - that is, either 1.17.10 or 1.18.0. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon May 11 05:17:28 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Mon, 11 May 2020 01:17:28 -0400 Subject: TCP/Status CLOSE_WAIT for nginx, very long time In-Reply-To: <20200510173559.GG20357@mdounin.ru> References: <20200510173559.GG20357@mdounin.ru> Message-ID: <1a811c20fdd7eef7698c7c5d193a119b.NginxMailingListEnglish@forum.nginx.org> Thank you, Maxim. If I use the latest nginx version with custom http module, for a socket leak, what I need check in module code? thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287966,287989#msg-287989 From nginx-forum at forum.nginx.org Mon May 11 11:55:54 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Mon, 11 May 2020 07:55:54 -0400 Subject: How nignx handled the header to the client in this situation Message-ID: I understand the nginx would proxy the header first and then the body, in the case the connection with the upstream is broken during the transfer of body, what status code the client would get? since the nginx would proxy the 200 OK from upstream first to the client, but will nginx send another 5xx header to the client if the upstream connetction is broken? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287990,287990#msg-287990 From nginx-forum at forum.nginx.org Mon May 11 14:24:49 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Mon, 11 May 2020 10:24:49 -0400 Subject: Static files Message-ID: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> I'm trying to figure out how to load static files . I added to /etc/nginx/conf.d/default.conf the following lines: server { location / { root /home/marco/webMatters/vueMatters/ggc/src/components/auth/weights; } } But I'm still getting this error: Uncaught (in promise) SyntaxError: Unexpected token < in JSON at position 0 which, according to the people intervened in the github issue, should be related to a mis-configuration in the web-server for static files serving: https://github.com/justadudewhohacks/face-api.js/issues/598#issuecomment-626346393 How to solve the problem? Looking forward to your kind help. 
Marco Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287991,287991#msg-287991 From nginx-forum at forum.nginx.org Mon May 11 16:41:28 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Mon, 11 May 2020 12:41:28 -0400 Subject: Static files In-Reply-To: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> References: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <77fca70714ae72bd0671436ea5773a9d.NginxMailingListEnglish@forum.nginx.org> Following the indications here: https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/ I modified the lines in /etc/nginx/conf.d/default.conf as follows: server { root /home/marco/webMatters/vueMatters/GraspGlobalChances/src/components/auth/weights; location / { try_files $uri /weights/ssd_mobilenetv1_model-shard1; } } I also tried to add these two options: server { root /home/marco/webMatters/vueMatters/GraspGlobalChances/src/components/auth/weights; location / { sendfile on; tcp_nopush on; try_files $uri /weights/ssd_mobilenetv1_model-shard1; } } But still got the problem Marco Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287991,287992#msg-287992 From r at roze.lv Mon May 11 17:03:15 2020 From: r at roze.lv (Reinis Rozitis) Date: Mon, 11 May 2020 20:03:15 +0300 Subject: Static files In-Reply-To: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> References: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <001f01d627b6$0fe1b670$2fa52350$@roze.lv> > server { > location / { > root > /home/marco/webMatters/vueMatters/ggc/src/components/auth/weights; > } > } Since it's under /home most likely nginx has no access to the directory. Check the user under which nginx is running (probably nobody) and try to check if you can read the files (su nobody -c ls /home/marco/webMatters/vueMatters/ggc/src/components/auth/weights) rr From mdounin at mdounin.ru Mon May 11 18:17:57 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 May 2020 21:17:57 +0300 Subject: TCP/Status CLOSE_WAIT for nginx, very long time In-Reply-To: <1a811c20fdd7eef7698c7c5d193a119b.NginxMailingListEnglish@forum.nginx.org> References: <20200510173559.GG20357@mdounin.ru> <1a811c20fdd7eef7698c7c5d193a119b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200511181757.GH20357@mdounin.ru> Hello! On Mon, May 11, 2020 at 01:17:28AM -0400, q1548 wrote: > Thank you, Maxim. If I use the latest nginx version with custom http module, > for a socket leak, what I need check in module code? thanks. If you are using a 3rd party module, more or less any bug in the module can result in a socket leak. Most obvious thing to check is if you are seeing socket leaks without the module - if not, likely the problem is in the module. Unfortunately, it is hard to say anything beyond this. -- Maxim Dounin http://mdounin.ru/ From brucek at gmail.com Mon May 11 18:39:22 2020 From: brucek at gmail.com (Bruce Klein) Date: Mon, 11 May 2020 08:39:22 -1000 Subject: nginx update to 1.18.0 broke my wsl ubuntu 16.04 set up Message-ID: Does anyone have nginx 1.18.0 running successfully on WSL? Previous versions have worked fine for me the past couple years as long as the standard WSL fix "fastcgi_buffering off" was applied. I just picked up the 1.18.0 update and now I'm back to the old net::ERR_INCOMPLETE_CHUNKED_ENCODING errors you used to see with large pages when fastcgi_buffering was on. 
Is it possible that as of the 1.18.0 update a different directive is required to turn fastcgi_buffering off? Or that nginx is now ignoring this directive? Anyone have any suggestions for me to try? Thanks for any tips! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon May 11 19:21:01 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 11 May 2020 22:21:01 +0300 Subject: nginx update to 1.18.0 broke my wsl ubuntu 16.04 set up In-Reply-To: References: Message-ID: <20200511192101.GI20357@mdounin.ru> Hello! On Mon, May 11, 2020 at 08:39:22AM -1000, Bruce Klein wrote: > Does anyone have nginx 1.18.0 running successfully on WSL? > > Previous versions have worked fine for me the past couple years as long as > the standard WSL fix "fastcgi_buffering off" was applied. > > I just picked up the 1.18.0 update and now I'm back to the old > net::ERR_INCOMPLETE_CHUNKED_ENCODING errors you used to see with large > pages when fastcgi_buffering was on. > > Is it possible that as of the 1.18.0 update a different directive is > required to turn fastcgi_buffering off? Or that nginx is now ignoring this > directive? > > Anyone have any suggestions for me to try? Thanks for any tips! Things are expected to work regardless of "fastcgi_buffering" being on or off. And the error suggests that there is something wrong with the setup. Instead of trying to work around the problem with "fastcgi_buffering", it might be a good idea to actually dig into what goes wrong and why the error appears. In particular, looking into error log might be a good idea. Once the underlying problem is understood, you'll be able to fix it properly, without involving magic like "the standard WSL fix". And the proper fix is expected to work with any nginx version. -- Maxim Dounin http://mdounin.ru/ From brucek at gmail.com Mon May 11 19:38:12 2020 From: brucek at gmail.com (Bruce Klein) Date: Mon, 11 May 2020 09:38:12 -1000 Subject: nginx update to 1.18.0 broke my wsl ubuntu 16.04 set up In-Reply-To: <20200511192101.GI20357@mdounin.ru> References: <20200511192101.GI20357@mdounin.ru> Message-ID: Hi Maxim, Thank you the reply, which I appreciate very much. I fully agree in spirit. In practice, the issue of previous versions not working on WSL is a long-standing bug vs WSL that people far more expert than me on unix internals, WSL, nginx, and fpm have not yet solved for two years plus, other than everyone being told to disable fastcgi_buffering. (If you're interested, there's plenty of history in various WSL bug reports to read through.) No doubt the root cause here is a flaw in WSL. That's not on the nginx team to fix. That said, as a practical matter, the once easily available workaround is now gone. I'd like to understand what changed in 1.18 and if there is an easy adaptation to it, as that seems the path of least resistance. For what it's worth, the issue generates no logging in either the nginx error logs, access logs, or php7.1-fpm logs. It's impact is visible only on the web client side, where the user sees it as a partially received page and the net::ERR_INCOMPLETE_CHUNKED_ENCODING is available from the browser developer tools once the browser has timed out on waiting for the rest of the page. Thanks again, Bruce On Mon, May 11, 2020 at 9:21 AM Maxim Dounin wrote: > Hello! > > On Mon, May 11, 2020 at 08:39:22AM -1000, Bruce Klein wrote: > > > Does anyone have nginx 1.18.0 running successfully on WSL? 
> > > > Previous versions have worked fine for me the past couple years as long > as > > the standard WSL fix "fastcgi_buffering off" was applied. > > > > I just picked up the 1.18.0 update and now I'm back to the old > > net::ERR_INCOMPLETE_CHUNKED_ENCODING errors you used to see with large > > pages when fastcgi_buffering was on. > > > > Is it possible that as of the 1.18.0 update a different directive is > > required to turn fastcgi_buffering off? Or that nginx is now ignoring > this > > directive? > > > > Anyone have any suggestions for me to try? Thanks for any tips! > > Things are expected to work regardless of "fastcgi_buffering" > being on or off. And the error suggests that there is something wrong > with the setup. Instead of trying to work around the problem with > "fastcgi_buffering", it might be a good idea to actually dig into > what goes wrong and why the error appears. In particular, looking > into error log might be a good idea. > > Once the underlying problem is understood, you'll be able to fix > it properly, without involving magic like "the standard WSL fix". > And the proper fix is expected to work with any nginx version. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 11 22:36:10 2020 From: nginx-forum at forum.nginx.org (evulcu) Date: Mon, 11 May 2020 18:36:10 -0400 Subject: Ngix reverse proxy pass authentication IIS Message-ID: Hello, I'm trying to set up a reverse proxy with NGNIX on a Ubuntu Server. The upstream server is IIS configured with basic authentication working only as https since it has a 301 redirection configured on it. Here is my conf file server { listen 80; return 301 https://$host$request_uri; } server { listen 443; server_name fweb.biz; ssl_certificate /etc/nginx/ssl/cert.crt; ssl_certificate_key /etc/nginx/ssl/cert.key; ssl on; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4; ssl_prefer_server_ciphers on; access_log /var/log/nginx/access.log; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_pass https://192.168.99.14; proxy_read_timeout 90; proxy_redirect https://192.168.99.14 https://fweb.biz; } } I'm redirect to the upstream server and asked for credential but nothing happens. Looks like the credential are not pass to the upstream server. Please can somebody help me. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288000,288000#msg-288000 From nginx-forum at forum.nginx.org Tue May 12 02:21:52 2020 From: nginx-forum at forum.nginx.org (q1548) Date: Mon, 11 May 2020 22:21:52 -0400 Subject: TCP/Status CLOSE_WAIT for nginx, very long time In-Reply-To: <20200511181757.GH20357@mdounin.ru> References: <20200511181757.GH20357@mdounin.ru> Message-ID: Hello, Maxim, thank you. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287966,288002#msg-288002 From mdounin at mdounin.ru Tue May 12 05:53:59 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 May 2020 08:53:59 +0300 Subject: nginx update to 1.18.0 broke my wsl ubuntu 16.04 set up In-Reply-To: References: <20200511192101.GI20357@mdounin.ru> Message-ID: <20200512055359.GJ20357@mdounin.ru> Hello! On Mon, May 11, 2020 at 09:38:12AM -1000, Bruce Klein wrote: > Hi Maxim, > > Thank you the reply, which I appreciate very much. I fully agree in spirit. > > In practice, the issue of previous versions not working on WSL is a > long-standing bug vs WSL that people far more expert than me on unix > internals, WSL, nginx, and fpm have not yet solved for two years plus, > other than everyone being told to disable fastcgi_buffering. (If you're > interested, there's plenty of history in various WSL bug reports to read > through.) > > No doubt the root cause here is a flaw in WSL. That's not on the nginx team > to fix. > > That said, as a practical matter, the once easily available workaround is > now gone. I'd like to understand what changed in 1.18 and if there is an > easy adaptation to it, as that seems the path of least resistance. To find out how to adapt a workaround - first you'll have to find out why the workaround used to work. That is, what is the bug in WSL we are trying to work around. Also note that it might not be a good idea to use things which depend on unexplained workarounds for flaws not fixed for years. As long as there is no explanation why the workaround work, this usually means that it can stop working unexpectedly and/or won't work in some edge cases. > For what it's worth, the issue generates no logging in either the nginx > error logs, access logs, or php7.1-fpm logs. It's impact is visible only on > the web client side, where the user sees it as a partially received page > and the net::ERR_INCOMPLETE_CHUNKED_ENCODING is available from the browser > developer tools once the browser has timed out on waiting for the rest of > the page. So, the problem is that transfer stalls at some point, correct? This looks like an issue with sockets handling, and some things to try include: 1. Check the debug log to find out where things stall from nginx point of view. 2. Try different event methods, such as "select" and "poll" (http://nginx.org/r/use). Note that this might require you to compile nginx yourself. 3. Play with socket-related options, such as tcp_nodelay (http://nginx.org/r/tcp_nodelay) and tcp_nopush (http://nginx.org/r/tcp_nopush). Unlikely to help though. 4. Play with TCP buffers ("listen ... sndbuf=...", assuming it stalls somewhere while sending to the client) to see if it helps. Likely a buffer larger than the response size should help. 5. Play with "fastcgi_max_temp_file_size 0;" and/or "sendfile on/off". As long as playing with buffering used to help somehow, this suggests that there is a problem with event reporting in the epoll emulation layer. I don't think that this is something that can be fixed on nginx side, and any workarounds, including "fastcgi_buffering off", are likely to fail in some edge cases. The working solution might be to use other event methods though, such as "select" or "poll", see above. Or to make sure that socket buffers are large enough to avoid blocking. 
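To make that concrete, here is a minimal sketch of the kind of configuration the suggestions above point at. The port, the php-fpm socket path and the buffer size are placeholders for illustration only, and each directive is worth testing in isolation rather than all at once:

    # try an alternative event method; this may require an nginx built
    # with --with-poll_module or --with-select_module
    events {
        use poll;
    }

    http {
        server {
            # sndbuf larger than the largest stalling response
            # (4m is illustrative); port 8080 is assumed
            listen 8080 sndbuf=4m;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # assumed php-fpm socket path
                fastcgi_pass unix:/run/php/php7.1-fpm.sock;

                # knobs mentioned above, to be tried one at a time
                fastcgi_max_temp_file_size 0;
                sendfile off;
            }
        }
    }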
-- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Tue May 12 14:51:39 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 12 May 2020 15:51:39 +0100 Subject: Ngix reverse proxy pass authentication IIS In-Reply-To: References: Message-ID: <20200512145139.GF20939@daoine.org> On Mon, May 11, 2020 at 06:36:10PM -0400, evulcu wrote: Hi there, > I'm trying to set up a reverse proxy with NGNIX on a Ubuntu Server. The > upstream server is IIS configured with basic authentication working only as > https since it has a 301 redirection configured on it. what does curl -ik https://192.168.99.14 return? Assuming it is a http 401, then most interesting is the next word after "WWW-Authenticate:". If it is not "Basic", then your upstream is not configured the way that you think it is. (And that will block it from working through stock-nginx.) > I'm redirect to the upstream server and asked for credential but nothing > happens. Looks like the credential are not pass to the upstream server. > Please can somebody help me. If the above does not show the fix, then: * what request do you make? * what response do you want? * what response do you get instead? Possibly there will be something interesting in the IIS server logs too. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 12 14:55:04 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 12 May 2020 15:55:04 +0100 Subject: Static files In-Reply-To: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> References: <147b1160eaaf743aa0c72e55c46dc1c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200512145504.GG20939@daoine.org> On Mon, May 11, 2020 at 10:24:49AM -0400, MarcoI wrote: Hi there, > I'm trying to figure out how to load static files . What request do you make? What file on your filesystem do you want nginx to return, in response to that request? f -- Francis Daly francis at daoine.org From larry.martell at gmail.com Tue May 12 15:33:50 2020 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 12 May 2020 11:33:50 -0400 Subject: 504 timeout Message-ID: I have a django app using nginx and uwsgi. There can be cases when a request from a user does not come back from the db for 15-20 minutes. This is expected as it's calling a stored proc that does a lot and this is an internal app. Issue is that after some amount of time I get a 504 timeout error - but it's not always the same time - sometimes 10 minutes, sometimes 15, sometimes 16. My config file has: uwsgi_read_timeout 60m; uwsgi_send_timeout 60m; client_body_timeout 60m; Are there any other settings I need to set to avoid the 504? From larry.martell at gmail.com Tue May 12 15:56:05 2020 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 12 May 2020 11:56:05 -0400 Subject: 504 timeout In-Reply-To: References: Message-ID: On Tue, May 12, 2020 at 11:33 AM Larry Martell wrote: > > I have a django app using nginx and uwsgi. There can be cases when a > request from a user does not come back from the db for 15-20 minutes. > This is expected as it's calling a stored proc that does a lot and > this is an internal app. Issue is that after some amount of time I get > a 504 timeout error - but it's not always the same time - sometimes 10 > minutes, sometimes 15, sometimes 16. My config file has: > > uwsgi_read_timeout 60m; > uwsgi_send_timeout 60m; > client_body_timeout 60m; > > Are there any other settings I need to set to avoid the 504? 
Also just tried adding these to the global settings: proxy_connect_timeout 60m; proxy_send_timeout 60m; proxy_read_timeout 60m; But no joy. From gray at nxg.name Tue May 12 16:54:49 2020 From: gray at nxg.name (Norman Gray) Date: Tue, 12 May 2020 17:54:49 +0100 Subject: 504 timeout In-Reply-To: References: Message-ID: <1A9A477E-90EC-43FE-84B6-658261803171@nxg.name> Larry, hello. On 12 May 2020, at 16:33, Larry Martell wrote: > There can be cases when a > request from a user does not come back from the db for 15-20 minutes. > This is expected as it's calling a stored proc that does a lot and > this is an internal app. (open-parenthesis This isn't an answer to your question, but that sort of interaction does go pretty much against the grain of the HTTP protocol, and the 'REST' interaction style that it led to, so there may be architectural reasons, rather than merely implementation ones, for this application to have further problems, further down the line. If a proxy were ever to be involved, you'd acquire a separate set of headaches. There is an HTTP response code '202 Accepted' (which is a success code, of course), glossed as 'The request has been accepted for processing, but the processing has not been completed' -- details below. That's pretty much intended for just this sort of interaction. The idea is that the server would respond promptly with 202, perhaps with a retry-after header, and the client knows to try again later, possibly repeatedly, until it gets either a 200 or a 4xx. Of course, this would require changes at server and client end, so it's not immediately helpful. Good luck, Norman close-parenthesis) The text of RFC 7231 says: 6.3.3. 202 Accepted The 202 (Accepted) status code indicates that the request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility in HTTP for re-sending a status code from an asynchronous operation. The 202 response is intentionally noncommittal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled. -- Norman Gray : https://nxg.me.uk From brucek at gmail.com Tue May 12 18:13:32 2020 From: brucek at gmail.com (Bruce Klein) Date: Tue, 12 May 2020 08:13:32 -1000 Subject: nginx update to 1.18.0 broke my wsl ubuntu 16.04 set up In-Reply-To: <20200512055359.GJ20357@mdounin.ru> References: <20200511192101.GI20357@mdounin.ru> <20200512055359.GJ20357@mdounin.ru> Message-ID: Thanks Maxim! I don't know if I'll be able to fix it with that but I'll sure learn a lot trying. I appreciate all the pointers on where to look. Best, Bruce On Mon, May 11, 2020 at 7:54 PM Maxim Dounin wrote: > Hello! > > On Mon, May 11, 2020 at 09:38:12AM -1000, Bruce Klein wrote: > > > Hi Maxim, > > > > Thank you the reply, which I appreciate very much. I fully agree in > spirit. 
> > > > In practice, the issue of previous versions not working on WSL is a > > long-standing bug vs WSL that people far more expert than me on unix > > internals, WSL, nginx, and fpm have not yet solved for two years plus, > > other than everyone being told to disable fastcgi_buffering. (If you're > > interested, there's plenty of history in various WSL bug reports to read > > through.) > > > > No doubt the root cause here is a flaw in WSL. That's not on the nginx > team > > to fix. > > > > That said, as a practical matter, the once easily available workaround is > > now gone. I'd like to understand what changed in 1.18 and if there is an > > easy adaptation to it, as that seems the path of least resistance. > > To find out how to adapt a workaround - first you'll have to find > out why the workaround used to work. That is, what is the bug in > WSL we are trying to work around. > > Also note that it might not be a good idea to use things which > depend on unexplained workarounds for flaws not fixed for years. > As long as there is no explanation why the workaround work, this > usually means that it can stop working unexpectedly and/or won't > work in some edge cases. > > > For what it's worth, the issue generates no logging in either the nginx > > error logs, access logs, or php7.1-fpm logs. It's impact is visible only > on > > the web client side, where the user sees it as a partially received page > > and the net::ERR_INCOMPLETE_CHUNKED_ENCODING is available from the > browser > > developer tools once the browser has timed out on waiting for the rest of > > the page. > > So, the problem is that transfer stalls at some point, correct? > This looks like an issue with sockets handling, and some things to > try include: > > 1. Check the debug log to find out where things stall from nginx > point of view. > > 2. Try different event methods, such as "select" and "poll" > (http://nginx.org/r/use). Note that this might require you to > compile nginx yourself. > > 3. Play with socket-related options, such as tcp_nodelay > (http://nginx.org/r/tcp_nodelay) and tcp_nopush > (http://nginx.org/r/tcp_nopush). Unlikely to help though. > > 4. Play with TCP buffers ("listen ... sndbuf=...", assuming it > stalls somewhere while sending to the client) to see if it helps. > Likely a buffer larger than the response size should help. > > 5. Play with "fastcgi_max_temp_file_size 0;" and/or "sendfile > on/off". > > As long as playing with buffering used to help somehow, this > suggests that there is a problem with event reporting in the epoll > emulation layer. I don't think that this is something that can be > fixed on nginx side, and any workarounds, including > "fastcgi_buffering off", are likely to fail in some edge cases. > The working solution might be to use other event methods though, > such as "select" or "poll", see above. Or to make sure that > socket buffers are large enough to avoid blocking. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From larry.martell at gmail.com Tue May 12 18:40:38 2020 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 12 May 2020 14:40:38 -0400 Subject: 504 timeout In-Reply-To: <1A9A477E-90EC-43FE-84B6-658261803171@nxg.name> References: <1A9A477E-90EC-43FE-84B6-658261803171@nxg.name> Message-ID: On Tue, May 12, 2020 at 12:55 PM Norman Gray wrote: > > > Larry, hello. > > On 12 May 2020, at 16:33, Larry Martell wrote: > > > There can be cases when a > > request from a user does not come back from the db for 15-20 minutes. > > This is expected as it's calling a stored proc that does a lot and > > this is an internal app. > > (open-parenthesis > > This isn't an answer to your question, but that sort of interaction does > go pretty much against the grain of the HTTP protocol, and the 'REST' > interaction style that it led to, so there may be architectural reasons, > rather than merely implementation ones, for this application to have > further problems, further down the line. If a proxy were ever to be > involved, you'd acquire a separate set of headaches. > > There is an HTTP response code '202 Accepted' (which is a success code, > of course), glossed as 'The request has been accepted for processing, > but the processing has not been completed' -- details below. That's > pretty much intended for just this sort of interaction. The idea is > that the server would respond promptly with 202, perhaps with a > retry-after header, and the client knows to try again later, possibly > repeatedly, until it gets either a 200 or a 4xx. > > Of course, this would require changes at server and client end, so it's > not immediately helpful. > > Good luck, > > Norman > > close-parenthesis) > > > > The text of RFC 7231 says: > > 6.3.3. 202 Accepted > > The 202 (Accepted) status code indicates that the request has been > accepted for processing, but the processing has not been completed. > The request might or might not eventually be acted upon, as it might > be disallowed when processing actually takes place. There is no > facility in HTTP for re-sending a status code from an asynchronous > operation. > > The 202 response is intentionally noncommittal. Its purpose is to > allow a server to accept a request for some other process (perhaps a > batch-oriented process that is only run once per day) without > requiring that the user agent's connection to the server persist > until the process is completed. The representation sent with this > response ought to describe the request's current status and point to > (or embed) a status monitor that can provide the user with an > estimate of when the request will be fulfilled. Thanks for the reply. Yes, I knew someone would say this ;-). To clarify, this is an async ajax call (so I am surprised it gets a timeout), and this is just an internal connivence function for the DBAs to run a pipeline of stored procs. In any case, I should have mentioned we are running at AWS using the Elastic Load Balancer, and the timeout there is what I was hitting. Once that was increased I no longer get the 504. From nginx-forum at forum.nginx.org Tue May 12 19:01:05 2020 From: nginx-forum at forum.nginx.org (eckern) Date: Tue, 12 May 2020 15:01:05 -0400 Subject: Conditionally removing a proxy header Message-ID: <470410ac107a3563a7843f2820d278f5.NginxMailingListEnglish@forum.nginx.org> I'm trying to conditionally remove a proxy header but this doesn't appear to be allowed using an "if". 
Ideally it would look something like this where $external_traffic is either 0 or 1: if ($external_traffic) { ... proxy_hide_header WWW-Authenticate; # Remove negotiate header ... } My workaround is to set up another site with proxy_hide_header set and do a redirect to it inside the if instead but that seems messy. if ($external_traffic) { ... rewrite ^ https://external.testdomain.com$request_uri break; ... } Is there a better way to do this? Thanks, Neil Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288015,288015#msg-288015 From nginx-forum at forum.nginx.org Wed May 13 09:52:26 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Wed, 13 May 2020 05:52:26 -0400 Subject: Can someone explain me why "curl: (7) Failed to connect to 127.0.0.1 port 2000: Connection refused" ? Message-ID: Hi!, I do not understand why it says "curl: (7) Failed to connect to 127.0.0.1 port 2000: Connection refused" : curl -X POST -F 'first_name=pinco' -F 'last_name=pallo' -F 'company_name=Company' -F 'email=pinco.pallo at company.com' -F 'tel=111111111' 127.0.0.1:2000/puser/add curl: (7) Failed to connect to 127.0.0.1 port 2000: Connection refused. In server-gorillamux.go : CONN_PORT = "2000" in /etc/nginx/conf.d/default.conf : upstream golang-webserver { ip_hash; server 127.0.0.1:2000; } server { #listen 2999; server_name ggc.world; root /puser/add; // Is this correct? ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; location / { #proxy_pass http://127.0.0.1:8080; proxy_pass http://golang-webserver; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } Looking forward to your kind help. Marco Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288019,288019#msg-288019 From 045hamid at gmail.com Wed May 13 11:50:00 2020 From: 045hamid at gmail.com (Hamid Gholami) Date: Wed, 13 May 2020 16:20:00 +0430 Subject: NGINX loading page slow when requests increase Message-ID: Hi to all, I have a NGINX and configure it as load balancer (hash remote_IP) in front of java application. When number of requests are normal all things good and working fine but when requests go high NGINX has a delay to response to each request. at this moment if I stop NGINX and requests straight to application (java app and use tomcat as webserver) it works without delay but when use NGINX it works with delay. 
This is nginx.conf file: user root; worker_processes auto; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; worker_rlimit_nofile 100000; events { worker_connections 40000; # use epoll; # multi_accept on; } http { error_log /appserver/nginx/logs/error.log crit; error_log /appserver/nginx/logs/error.log emerg; error_log /appserver/nginx/logs/error.log error; error_log /appserver/nginx/logs/error.log alert; error_log /appserver/nginx/logs/debug.log debug; error_log /appserver/nginx/logs/warn.log warn; include mime.types; default_type application/octet-stream; fastcgi_read_timeout 100000; log_format netdata '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '$request_length $request_time $upstream_response_time $bytes_sent ' '"$http_referer" "$http_user_agent" $upstream_addr ' 'request_time=$request_time ' 'upstream_response_time=$upstream_response_time ' 'upstream_connect_time=$upstream_connect_time ' 'upstream_header_time=$upstream_header_time ' '$msec'; access_log /appserver/nginx/logs/access.log netdata; client_body_buffer_size 256k; client_header_buffer_size 256k; subrequest_output_buffer_size 128k; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; client_max_body_size 100M; sendfile on; tcp_nopush on; tcp_nodelay on; access_log on; keepalive_timeout 600; client_body_timeout 600; client_header_timeout 600; send_timeout 10; reset_timedout_connection on; gzip on; gzip_min_length 10240; gzip_comp_level 1; gzip_vary on; gzip_disable msie6; gzip_proxied expired no-cache no-store private auth; gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml application/atom+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml; upstream nginxBRANCH21 { server x.x.x.21:448 max_fails=1 fail_timeout=15s; } upstream nginxBRANCH { #server x.x.x.x:443 max_fails=1 fail_timeout=15s; #server x.x.x.x:448 max_fails=1 fail_timeout=15s; server x.x.x.x:448 max_fails=1 fail_timeout=15s; server x.x.x.x:443 max_fails=1 fail_timeout=15s; server x.x.x.x:443 max_fails=1 fail_timeout=15s; server x.x.x.x:443 max_fails=1 fail_timeout=15s; server x.x.x.x:443 max_fails=1 fail_timeout=15s; server x.x.x.x:443 max_fails=1 fail_timeout=15s; hash $remote_addr; } upstream server150 { server x.x.x.x:443 max_fails=1 fail_timeout=15s; } upstream nginx_Z { server x.x.x.x:448 max_fails=1 fail_timeout=15s; } server { listen 80 default_server; server_name _; return 301 https://$host$request_uri; } server { listen 443 ssl; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # ssl on; ssl_certificate /etc/ssl/certs/myssl.crt; ssl_certificate_key /etc/ssl/private/myssl.key; location / { #if ( $remote_addr ~ "(x.x.x.x)|(x.x.x.x)|(x.x.x.x)|(x.x.x.x)|(x.x.x.x)|(x.x.x.x)|(x.x.x.x)|(x.x.x.x)|(x.x.x.x)" ){ # proxy_pass https://server150; #} proxy_pass https://nginxBRANCH21; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; proxy_set_header X_FORWARDED_PROTO https; proxy_read_timeout 100000; proxy_connect_timeout 100000; } location /FCBZ { proxy_pass https://nginx_Z; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-NginX-Proxy true; 
proxy_set_header X_FORWARDED_PROTO https; proxy_read_timeout 100000; proxy_connect_timeout 100000; } } server { listen 9666; location /basic_status { stub_status; } } } Can anyone help me? Thank you -- Hamid Gholami DevOps Engineer * t: *+982191002809 *m:* +989126105157*Linkedin , Twitter , Telegram * -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Wed May 13 12:02:28 2020 From: r at roze.lv (Reinis Rozitis) Date: Wed, 13 May 2020 15:02:28 +0300 Subject: Can someone explain me why "curl: (7) Failed to connect to 127.0.0.1 port 2000: Connection refused" ? In-Reply-To: References: Message-ID: <000701d6291e$60092ca0$201b85e0$@roze.lv> > Subject: Can someone explain me why "curl: (7) Failed to connect to > 127.0.0.1 port 2000: Connection refused" ? > > Hi!, > > I do not understand why it says "curl: (7) Failed to connect to 127.0.0.1 port > 2000: Connection refused" : > curl -X POST -F 'first_name=pinco' -F 'last_name=pallo' -F > 'company_name=Company' -F 'email=pinco.pallo at company.com' -F > 'tel=111111111' > 127.0.0.1:2000/puser/add curl: (7) Failed to connect to 127.0.0.1 port 2000: > Connection refused. > > In server-gorillamux.go : CONN_PORT = "2000" Is the go application/server running? Since the nginx doesn't listen on 2000 port (and only proxies the connections to backend) it's important that the backend is up. rr From nginx-forum at forum.nginx.org Wed May 13 14:18:56 2020 From: nginx-forum at forum.nginx.org (MarcoI) Date: Wed, 13 May 2020 10:18:56 -0400 Subject: Can someone explain me why "curl: (7) Failed to connect to 127.0.0.1 port 2000: Connection refused" ? In-Reply-To: References: Message-ID: I solved it . It was mix of small little problems that together hindered the correct answer: - changed in golang webserver HOST = 127.0.0.1 - capitalized the first letter of each element of the Puser struct in order to make it visible to json decoder - used correctly the curl command: curl -d'{"first_name":"pinco", "last_name":"pallo", "company_name":"Company","email":"pinco.pallo at company.com","tel":"111111111"}' -H "Content-Type: application/json" 127.0.0.1:2000/puser/add [{"First_name":"pinco","Last_name":"pallo","Country":"","Company_name":"Company","Email":"pinco.pallo at company.com","Tel":"111111111"}] Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288019,288023#msg-288023 From pgnet.dev at gmail.com Thu May 14 18:10:20 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 14 May 2020 11:10:20 -0700 Subject: editing a general location match to exclude one, specific instance? Message-ID: <9a09daeb-f46b-de13-d5b6-376c290bbf5c@gmail.com> editing a general location match to exclude one, specific instance? I run nginx 1.18.0. I've had a trivial 'protection' rule in place for a long time location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password|config) { deny all; } That hasn't caused me any particular problems. Recently, I've added a proxied back end app. 
In logs I see ==> /var/log/nginx/auth.example1.com.error.log <== 2020/05/12 22:16:39 [error] 57803#57803: *1 access forbidden by rule, client: 10.10.10.10, server: testapp.example1.com, request: "GET /api/configuration HTTP/2.0", host: "testapp.example1.com", referrer: "https://testapp.example1.com/?rd=https://example2.net/app2" removing the "config" match from the protection rule, - location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password|config) { + location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password) { eliminates the problem. I'd like to edit the match to PASS that^ logged match -- as specifically/uniquely as possible -- but CONTINUE to 'deny all' for all other/remaining matches on "config". How would that best be done? A preceding location match? Or editing the existing one? From themadbeaker at gmail.com Thu May 14 19:29:03 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 14 May 2020 14:29:03 -0500 Subject: editing a general location match to exclude one, specific instance? Message-ID: First, you forgot to escape the period in settings.php to settings\.php > I'd like to edit the match to PASS that^ logged match -- as > specifically/uniquely as possible -- but CONTINUE to 'deny all' > for all other/remaining matches on "config". Second, it's all in the location documentation: http://nginx.org/en/docs/http/ngx_http_core_module.html#location The two relevant bits are, depending on how you want to handle it: 1. If the longest matching prefix location has the ?^~? modifier then regular expressions are not checked. 2. Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match... From pgnet.dev at gmail.com Thu May 14 19:35:29 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Thu, 14 May 2020 12:35:29 -0700 Subject: editing a general location match to exclude one, specific instance? In-Reply-To: References: Message-ID: <864e1e89-8a41-6a24-eed0-478e0218594f@gmail.com> > Second, it's all in the location documentation: I'm not asking about the order. I'm asking about a specific match(es) that'd work in this specific case. If it's trivial, care to share a working example? From nginx-forum at forum.nginx.org Thu May 14 19:37:38 2020 From: nginx-forum at forum.nginx.org (Amakesh) Date: Thu, 14 May 2020 15:37:38 -0400 Subject: SSL_ERROR_BAD_CERT_DOMAIN Message-ID: I have 4 domains on one server with control panel - Plesk All the domains and Plesk have the same shared ip. The operating system my web server runs on is CentOS 7.8 Earlier Let?s encrypt certificates worked fine for for all of them, but recently installed Nginx as web proxy. Unfortunately i have now a problem with the certificates - SSL_ERROR_BAD_CERT_DOMAIN. https://www.ssllabs.com shows all of the certificates have Server hostname(rsvix170.gerwanserver.de) as domain name and not their own addresses. 
https://www.ssllabs.com/ssltest/analyze.html?d=solaris-ustronie.eu https://check-your-website.server-daten.de/?q=solaris-ustronie.eu nginx -T | grep -i server_name nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful server_names_hash_bucket_size 64; server_name lists.*; server_name lists.*; server_name lists.*; server_name lists.*; server_name lists.*; server_name lists.*; server_name mbrcp.com; server_name www.mbrcp.com; server_name ipv4.mbrcp.com; server_name mbrcp.com; server_name www.mbrcp.com; server_name ipv4.mbrcp.com; server_name "webmail.mbrcp.com"; server_name "webmail.mbrcp.com"; server_name "webmail.nwn.mbrcp.com"; server_name "webmail.nwn.mbrcp.com"; server_name "webmail.smartrecepcja.pl"; server_name "webmail.smartrecepcja.pl"; server_name "webmail.solaris-ustronie.eu"; server_name "webmail.solaris-ustronie.eu"; server_name "webmail.zahnarzt-birresborn.de"; server_name "webmail.zahnarzt-birresborn.de"; server_name smartrecepcja.pl; server_name www.smartrecepcja.pl; server_name ipv4.smartrecepcja.pl; server_name smartrecepcja.pl; server_name www.smartrecepcja.pl; server_name ipv4.smartrecepcja.pl; server_name solaris-ustronie.eu; server_name www.solaris-ustronie.eu; server_name ipv4.solaris-ustronie.eu; server_name solaris-ustronie.eu; server_name www.solaris-ustronie.eu; server_name ipv4.solaris-ustronie.eu; server_name zahnarzt-birresborn.de; server_name www.zahnarzt-birresborn.de; server_name ipv4.zahnarzt-birresborn.de; server_name zahnarzt-birresborn.de; server_name www.zahnarzt-birresborn.de; server_name ipv4.zahnarzt-birresborn.de; server_name poczta.smartrecepcja.pl; server_name www.poczta.smartrecepcja.pl; server_name ipv4.poczta.smartrecepcja.pl; nginx -V nginx version: nginx/1.16.1 built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled configure arguments: --prefix=/usr/share --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --modules-path=/usr/share/nginx/modules --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --user=nginx --group=nginx --with-file-aio --with-compat --with-http_ssl_module --with-http_realip_module --with-http_sub_module --with-http_dav_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_v2_module --add-dynamic-module=mod_passenger/src/nginx_module --add-dynamic-module=mod_pagespeed --with-openssl=lib_openssl --with-openssl-opt='zlib no-idea no-mdc2 no-rc5 no-ssl2 no-shared -fpic' Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288035,288035#msg-288035 From francis at daoine.org Thu May 14 22:08:21 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 14 May 2020 23:08:21 +0100 Subject: Conditionally removing a proxy header Message-ID: <20200514220821.GH20939@daoine.org> Hi there, I'm not certain why you want to do the specific example that you want to do; but if I were doing the general "conditionally remove header" thing, I would probably use "map" to set a new variable "$my_value" based on your variable "$external_traffic". If $external_traffic is 1, set $my_value to blank. Otherwise, set $my_value to $upstream_http_www_authenticate. 
And then always "proxy_hide_header WWW-Authenticate;" and "add_header WWW-Authenticate $my_value always;" If the value is blank, add_header does not write the header. And "always" is because you probably only get the WWW-Authenticate on a 401 response. http://nginx.org/r/map http://nginx.org/r/$upstream_http_ http://nginx.org/r/add_header (Note the standard caveats about directive inheritance, particularly regarding add_header, if that applies in your config.) Hope this helps! f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri May 15 05:51:13 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Fri, 15 May 2020 01:51:13 -0400 Subject: How nignx handled the header to the client in this situation In-Reply-To: References: Message-ID: <503653944eb0b83a0c1e00eecef26365.NginxMailingListEnglish@forum.nginx.org> Will nginx buffer the header before receiving of the whole body? If not, what if error happens in the middle of body receiving? nginx has no chance to resend the error status then. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287990,288039#msg-288039 From marko at vizio.biz Fri May 15 09:43:29 2020 From: marko at vizio.biz (=?UTF-8?Q?Marko_Domanovi=c4=87?=) Date: Fri, 15 May 2020 11:43:29 +0200 Subject: nginx 0.7.65 and TLS1.2 In-Reply-To: References: Message-ID: <9c3fe819-78fe-9eb4-b836-caf2836e2c4e@vizio.biz> Long story short, I need nginx 0.7.65 to be able to support TLS1.2. Seems like it's dependent on openssh version and installed one is 1.0.1t which seem to support TLS1.2, but "nmap --script ssl-enum-ciphers -p 443 sitename" says only SSLv3 and TLS1.0 are supported. So is there anything I can to to make nginx 0.7.65 recognize TLS1.2 and use it? Yeah I know I talk about ancient software here, but I'm in no position to do very wide upgrades. Debian 6 is the system. Thanks! From r at roze.lv Fri May 15 10:07:30 2020 From: r at roze.lv (Reinis Rozitis) Date: Fri, 15 May 2020 13:07:30 +0300 Subject: nginx 0.7.65 and TLS1.2 In-Reply-To: <9c3fe819-78fe-9eb4-b836-caf2836e2c4e@vizio.biz> References: <9c3fe819-78fe-9eb4-b836-caf2836e2c4e@vizio.biz> Message-ID: <001f01d62aa0$a4ef79e0$eece6da0$@roze.lv> > it's dependent on openssh version and installed one is 1.0.1t On openssl. > which seem to support TLS1.2, but "nmap --script ssl-enum-ciphers -p 443 > sitename" says only SSLv3 and TLS1.0 are supported. So is there anything I > can to to make nginx 0.7.65 recognize TLS1.2 and use it? > > Yeah I know I talk about ancient software here, but I'm in no position to do > very wide upgrades. Debian 6 is the system. I'm not sure it's supported in nginx in that particular version as: Changes with nginx 1.1.13 16 Jan 2012 *) Feature: the "TLSv1.1" and "TLSv1.2" parameters of the "ssl_protocols" directive. But is there a reason you can't compile a newer nginx/openssl and use that instead of the 10 year old Debian package? You can compile/link nginx with openssl statically so it doesn't affect the system package and dependencies in any way: 1. download and extract https://www.openssl.org/source/openssl-1.1.1g.tar.gz 2. download and extract http://nginx.org/download/nginx-1.18.0.tar.gz 3. configure the nginx with: ./configure --with-openssl=path/extracted/openssl-1.1.1g --with-openssl-opt=enable-weak-ssl-ciphers (obviously add other configure options like --prefix --with-http_ssl_module --with-http_v2_module etc .. you can check the current configuration with 'nginx -V') 4. 
make And now you have a nginx binary with statically linked openssl 1.1.1 which has also tls 1.3 support. rr From pluknet at nginx.com Fri May 15 10:16:57 2020 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 15 May 2020 13:16:57 +0300 Subject: nginx 0.7.65 and TLS1.2 In-Reply-To: <9c3fe819-78fe-9eb4-b836-caf2836e2c4e@vizio.biz> References: <9c3fe819-78fe-9eb4-b836-caf2836e2c4e@vizio.biz> Message-ID: <7EEDE50A-95F8-4B26-9BF4-66F7CEADE577@nginx.com> > On 15 May 2020, at 12:43, Marko Domanovi? wrote: > > Long story short, I need nginx 0.7.65 to be able to support TLS1.2. > Seems like it's dependent on openssh version and installed one is 1.0.1t > which seem to support TLS1.2, but "nmap --script ssl-enum-ciphers -p 443 > sitename" says only SSLv3 and TLS1.0 are supported. So is there anything > I can to to make nginx 0.7.65 recognize TLS1.2 and use it? > Technically, you could. You just won't be able to disable this protocol in configuration. $ printf "GET / HTTP/1.0\n\n" | openssl s_client -connect 127.0.0.1:8081 -ign_eof ... New, TLSv1.2, Cipher is DHE-RSA-AES256-GCM-SHA384 ... HTTP/1.1 200 OK Server: nginx/0.7.65 Date: Fri, 15 May 2020 10:14:17 GMT Content-Type: text/html Content-Length: 0 Last-Modified: Fri, 15 May 2020 10:12:53 GMT Connection: close Accept-Ranges: bytes $ ./objs/nginx -V nginx version: nginx/0.7.65 TLS SNI support enabled -- Sergey Kandaurov From postmaster at palvelin.fi Fri May 15 16:49:38 2020 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Fri, 15 May 2020 09:49:38 -0700 Subject: Passing a special Magento URL to PHP-FPM Message-ID: <42350956-81E1-41E9-AFCD-EE7CEF5744B5@palvelin.fi> I?m trying to learn how to pass specific Magento 1.x URLs such as this to a PHP-FPM backend. /js/index.php/x.js?f=prototype/prototype.js,prototype/validation.js,mage/adminhtml/events.js,mage/adminhtml/form.js,scriptaculous/effects.js All the nginx configs I?ve found (e.g. https://gist.github.com/rafaelstz/3bc3343017dd0118a577) include the same configuration blocks but does it actually work for the above mentioned URL structure? 
location @handler { ## Magento uses a common front handler rewrite / /index.php; } location ~ \.php/ { ## Forward paths like /js/index.php/x.js to relevant handler rewrite ^(.*\.php)/ $1 last; } location ~ \.php$ { ## Execute PHP scripts expires off; ## Do not cache dynamic content fastcgi_pass fpm_backend; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; ## See /etc/nginx/fastcgi_params } location / { index index.html index.php; ## Allow a static html file to be shown first try_files $uri $uri/ @handler; ## If missing pass the URI to Magento's front handler expires 30d; ## Assume all files are cachable if ($request_uri ~* "\.(png|gif|jpg|jpeg|css|js|swf|ico|txt|xml|bmp|pdf|doc|docx|ppt|pptx|zip)$") { expires max; } # set fastcgi settings, not allowed in the "if" block include /usr/local/etc/nginx/fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; #this line fastcgi_param SCRIPT_FILENAME $document_root/index.php; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param MAGE_RUN_CODE default; fastcgi_param MAGE_RUN_TYPE store; # rewrite - if file not found, pass it to the backend if (!-f $request_filename) { fastcgi_pass fpm_backend; break; } } -- Palvelin.fi Hostmaster postmaster at palvelin.fi From francis at daoine.org Fri May 15 17:11:18 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 15 May 2020 18:11:18 +0100 Subject: Passing a special Magento URL to PHP-FPM In-Reply-To: <42350956-81E1-41E9-AFCD-EE7CEF5744B5@palvelin.fi> References: <42350956-81E1-41E9-AFCD-EE7CEF5744B5@palvelin.fi> Message-ID: <20200515171118.GI20939@daoine.org> On Fri, May 15, 2020 at 09:49:38AM -0700, Palvelin Postmaster wrote: Hi there, > I?m trying to learn how to pass specific Magento 1.x URLs such as this to a PHP-FPM backend. > > /js/index.php/x.js?f=prototype/prototype.js,prototype/validation.js,mage/adminhtml/events.js,mage/adminhtml/form.js,scriptaculous/effects.js > All the nginx configs I?ve found (e.g. https://gist.github.com/rafaelstz/3bc3343017dd0118a577) include the same configuration blocks but does it actually work for the above mentioned URL structure? > Are you asking whether you should try it; or are you reporting that something does not respond as you want it to, when you do try it? Your request, as far as choosing the location{} is concerned, is /js/index.php/x.js If your config is only what you show here, that will be handled in your "location ~ \.php/" block, and it should all Just Work. But the "e.g." link you show includes "location ~ \.js {", and *that* is the location that will handle this request if you use that full config. So it won't use fastcgi_pass at all. Remove that stanza, or put it after the "\.php/" one. Without testing it, I would expect that this part will probably work... > location ~ \.php$ { ## Execute PHP scripts > expires off; ## Do not cache dynamic content > > fastcgi_pass fpm_backend; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; ## See /etc/nginx/fastcgi_params > } ...but I'm not sure about this next part: > # rewrite - if file not found, pass it to the backend > if (!-f $request_filename) { > fastcgi_pass fpm_backend; > break; > } > } Maybe does not matter here; presumably the person who published the config believes it is necessary. 
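To make the ordering point concrete, a minimal sketch of just the regex locations involved, in an order that lets /js/index.php/x.js reach PHP; "fpm_backend" is taken from the quoted config, everything else is illustrative:

    location ~ \.php/ {
        # /js/index.php/x.js is rewritten to /js/index.php and re-matched
        rewrite ^(.*\.php)/ $1 last;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass fpm_backend;
    }

    # any static-asset regex such as "location ~ \.js" must come after the
    # two blocks above (or be removed): regex locations are matched in order
    # of appearance, so placed first it would capture /js/index.php/x.js
    # and the request would never reach fastcgi_pass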
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri May 15 18:33:01 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 15 May 2020 19:33:01 +0100 Subject: How nignx handled the header to the client in this situation In-Reply-To: <503653944eb0b83a0c1e00eecef26365.NginxMailingListEnglish@forum.nginx.org> References: <503653944eb0b83a0c1e00eecef26365.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200515183301.GJ20939@daoine.org> On Fri, May 15, 2020 at 01:51:13AM -0400, allenhe wrote: Hi there, > Will nginx buffer the header before receiving of the whole body? > If not, what if error happens in the middle of body receiving? nginx has no > chance to resend the error status then. What do you want your nginx to do, in that case? I suspect that what nginx does do, depends on things like http://nginx.org/r/proxy_buffering As you suggest: if nginx gets the entire response before sending anything to the client, it can send a suitable response code. If nginx is told to stream the response "live", then if the response is http 200 plus half the intended body, that is what the client will get. f -- Francis Daly francis at daoine.org From francis at daoine.org Fri May 15 18:52:20 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 15 May 2020 19:52:20 +0100 Subject: SSL_ERROR_BAD_CERT_DOMAIN In-Reply-To: References: Message-ID: <20200515185220.GK20939@daoine.org> On Thu, May 14, 2020 at 03:37:38PM -0400, Amakesh wrote: Hi there, > Earlier Let?s encrypt certificates worked fine for for all of them, but > recently installed Nginx as web proxy. > https://www.ssllabs.com shows all of the certificates have Server > hostname(rsvix170.gerwanserver.de) as domain name and not their own > addresses. Have you configured your nginx like is shown at http://nginx.org/en/docs/http/configuring_https_servers.html? You probably want one server{} block per certificate that you have, each with "listen 443 ssl", and with server_name matching the names in that certificate. > https://www.ssllabs.com/ssltest/analyze.html?d=solaris-ustronie.eu That is not showing any obvious problems to me right now, so maybe something has been changed recently? (The fact that it shows a different certificate if the client does not use SNI is not something I consider a problem.) Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri May 15 18:58:15 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 15 May 2020 19:58:15 +0100 Subject: editing a general location match to exclude one, specific instance? In-Reply-To: <9a09daeb-f46b-de13-d5b6-376c290bbf5c@gmail.com> References: <9a09daeb-f46b-de13-d5b6-376c290bbf5c@gmail.com> Message-ID: <20200515185815.GL20939@daoine.org> On Thu, May 14, 2020 at 11:10:20AM -0700, PGNet Dev wrote: Hi there, > editing a general location match to exclude one, specific instance? It is usually easier to use positive matches instead of negative ones. > I've had a trivial 'protection' rule in place for a long time > > location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password|config) { > deny all; > } > 2020/05/12 22:16:39 [error] 57803#57803: *1 access forbidden by rule, > client: 10.10.10.10, server: testapp.example1.com, request: "GET /api/configuration HTTP/2.0", > I'd like to edit the match to PASS that^ logged match -- as specifically/uniquely as possible -- but CONTINUE to 'deny all' for all other/remaining matches on "config". > > How would that best be done? A preceding location match? 
Or editing the existing one? A separate "location" that matches what you want and is "higher priority" than the regex location that this request currently matches. location = /api/configuration { # do what you want, probably including proxy_pass } You could use "location ^~ /api/configuration", if you want to allow anything with that prefix. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat May 16 03:09:53 2020 From: nginx-forum at forum.nginx.org (allenhe) Date: Fri, 15 May 2020 23:09:53 -0400 Subject: How nignx handled the header to the client in this situation In-Reply-To: <20200515183301.GJ20939@daoine.org> References: <20200515183301.GJ20939@daoine.org> Message-ID: Hi Francis, Thanks for the reply! w.r.t. the "http://nginx.org/r/proxy_buffering", the doc does not mention if the buffering works for header, body or both, I'm wondering if nginx can postpone the sending of upstream header in any ways? otherwise the client will get wrong status code in this case. Allen BR Posted at Nginx Forum: https://forum.nginx.org/read.php?2,287990,288056#msg-288056 From nginx-forum at forum.nginx.org Sat May 16 09:24:42 2020 From: nginx-forum at forum.nginx.org (petecooper) Date: Sat, 16 May 2020 05:24:42 -0400 Subject: `if` or `include` for mode-specific `server` directives? Message-ID: <372f6eb1a7b42bfd1765709c6796503d.NginxMailingListEnglish@forum.nginx.org> I compile Nginx mainline from source and update every release. I run a small fleet of open source project and some small business Linux servers with multiple websites per server. There are occasions when a site is taken down for maintenance (typically minutes or hours of downtime out of peak hours), or is under development for extended periods, and also when 'normal' production status is happening. I use a different directory for each mode, so there is separation of files etc. A typical setup looks like this: /var/www/sites/example.com/subdomain/live/ (production mode) /var/www/sites/example.com/subdomain/holding/ (under construction, coming soon) /var/www/sites/example.com/subdomain/maintenance/ (back soon) At present, each site is managed by a monolithic `subdomain.example.com.conf` file with its own `server` block and associated directives. I manually change the `root` for each state when a site changes mode. The current modes are: * live (production) state serves files as usual, no changes. * holding (under construction) state presents HTTP 503 Service Unavailable status with relatively long Retry-After response header. * maintenance (back soon) state presents HTTP 503 Service Unavailable status with relatively short Retry-After response header. Each site has a unique variable namespace and its own mode (live, holding, maintenance), like this: server { set $fqdn_abcd1234 'subdomain.example.com'; set $status_abcd1234 'live'; ... } I would like to change the operation to include some extra, browser-friendly functionality and to make better use of site variables. Two of the three modes above have a Retry-After header, and each of those has a different time value. This gives me a few potential routes to contend with, and I would be grateful for your advice on feedback on what is considered current best practice. == `if` in the `server` block == I will readily admit I have never used `if`, having been scared away by If Is Evil. 
My (untested) approach looks like this: server { set $fqdn_abcd1234 'subdomain.example.com'; set $status_abcd1234 'live'; root /var/www/sites/example.com/subdomain/$status_abcd1234/; if ($status_abcd1234 = 'holding') { return 503; add_header Retry-After 604800 always; } if ($status_abcd1234 = 'maintenance') { return 503; add_header Retry-After 10800 always; } } So, two inlined `if` inside a monolithic `server` block file. == Stub `server` excerpts for each mode == Rather than include `root` and `if` checks in the monolithic `server` block file, I am considering outboarding each mode to its own stub file as a sidecar to the main `server` block file, like this: * `subdomain.example.com--live.conf` * `subdomain.example.com--holding.conf` * `subdomain.example.com--maintenance.conf` Each site mode stub would include the relevant `return` and `add_header` directives as required, plus any other mode-specific things, and be called by an `include` directive in the main `server` block file. No `if` involved in this route. == Something else I haven't thought of == You folks are much smarter than me. I am certain I've missed something, whether it's obvious or not. What else might I be able to do in this scenario, please? Given the above routes, which is (subjectively / objectively) better from an Nginx point of view? Thank you for reading, and best wishes to you from sunny Cornwall, United Kingdom. Pete Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288057,288057#msg-288057 From nginx-forum at forum.nginx.org Sat May 16 12:05:07 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 16 May 2020 08:05:07 -0400 Subject: `if` or `include` for mode-specific `server` directives? In-Reply-To: <372f6eb1a7b42bfd1765709c6796503d.NginxMailingListEnglish@forum.nginx.org> References: <372f6eb1a7b42bfd1765709c6796503d.NginxMailingListEnglish@forum.nginx.org> Message-ID: https://blog.devcloud.hosting/configuring-nginx-for-quickly-switching-to-maintenance-mode-e4136cf497f3 https://forum.openresty.us/d/4770-c84503afcecd42ad08f3ec457c0948b7 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288057,288058#msg-288058 From francis at daoine.org Sat May 16 12:11:36 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 16 May 2020 13:11:36 +0100 Subject: How nignx handled the header to the client in this situation In-Reply-To: References: <20200515183301.GJ20939@daoine.org> Message-ID: <20200516121136.GM20939@daoine.org> On Fri, May 15, 2020 at 11:09:53PM -0400, allenhe wrote: Hi there, > w.r.t. the "http://nginx.org/r/proxy_buffering", the doc does not mention if > the buffering works for header, body or both, It's "the response". It sounds like it should be fairly straightforward to test on your setup, if you want to convince yourself what it does -- make an upstream that receives a request, waits 5 second, sends the response header, waits 10 seconds, sends some of the response body, waits 10 seconds, and sends the rest of the response body. Then make a request directly to that upstream, and see that you see something after 5, 15, and 25 seconds. Then make a request through nginx, and see if you see anything before 25 seconds (all buffered); or see something after 5 seconds (header sent early) or after 15 (header and start of body sent early). > I'm wondering if nginx can > postpone the sending of upstream header in any ways? Can you show the request that you make and the response that you get that is not the response that you want? 
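For the client side of that test, curl's timing variables are enough: %{time_starttransfer} reports when the first byte of the response (the status line) arrives, and %{time_total} reports when the transfer completes, so comparing a direct request with a proxied one shows whether the header was held back. The ports below are assumptions for the example:

    # directly to the slow upstream (assumed to listen on 9000)
    curl -sS -o /dev/null \
         -w 'first byte after %{time_starttransfer}s, complete after %{time_total}s\n' \
         http://127.0.0.1:9000/slow

    # the same request through nginx (assumed to listen on 8080),
    # once with proxy_buffering on and once with it off
    curl -sS -o /dev/null \
         -w 'first byte after %{time_starttransfer}s, complete after %{time_total}s\n' \
         http://127.0.0.1:8080/slow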
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat May 16 12:49:55 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 16 May 2020 13:49:55 +0100 Subject: `if` or `include` for mode-specific `server` directives? In-Reply-To: <372f6eb1a7b42bfd1765709c6796503d.NginxMailingListEnglish@forum.nginx.org> References: <372f6eb1a7b42bfd1765709c6796503d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200516124955.GN20939@daoine.org> On Sat, May 16, 2020 at 05:24:42AM -0400, petecooper wrote: Hi there, > At present, each site is managed by a monolithic > `subdomain.example.com.conf` file with its own `server` block and associated > directives. I manually change the `root` for each state when a site changes > mode. > Each site has a unique variable namespace and its own mode (live, holding, > maintenance), like this: > > server { > set $fqdn_abcd1234 'subdomain.example.com'; > set $status_abcd1234 'live'; > ... > } Your current system is that you edit the conf file to change some "set" variable values, and reload nginx. I suggest that you'll be happier in the long run using a templating language, or macro-substituting language, external to nginx; along with "source" conf files that are to have the substitutions applied; and change the value there and regenerate the nginx conf parts, and then reload nginx. That language (your own shell scripts; m4; or whatever you like the look of) will be able to write exactly the "root/return/add_header" or whatever statements that you want based on the variable values that you set. It is a significant change to your setup right now; but it means that your running nginx config will have exactly what you want, without needing to worry about levels of indirection of run-time variable substitution. "set" to a static string is using nginx-conf like a macro language. "When all you have is a hammer, everything looks like a thumb." I suspect you'll be better off using a real macro language -- you can do more things with it, and the running nginx will have less work to do on every request. > == `if` in the `server` block == > > I will readily admit I have never used `if`, having been scared away by If > Is Evil. "If Is Evil" is limited to "when used in location context"; so you're ok to do what you suggest here. It just is not as efficient as it could be. > == Stub `server` excerpts for each mode == > > Rather than include `root` and `if` checks in the monolithic `server` block > file, I am considering outboarding each mode to its own stub file as a > sidecar to the main `server` block file, like this: This is better -- presumably you will edit the "include" line and reload nginx (or may use symlinks; and change where the symlink points and reload nginx) when you want to change mode. If you can avoid the run-time variable substitution altogether, this is good. If your "mode"-conf files have lots of parts that are very similar apart from a mode-related word, then you may be ok with manually updating all of them when you want to add something else; an external thing that just generates the three "mode" files could avoid that manual-duplication step. > Given the above routes, which is (subjectively / objectively) better from an > Nginx point of view? Objectively, using run-time variable substitution means that nginx does more work on every request than it would do without that. And nginx testing one or two extra "if" statements on every request means that nginx does more work than it would do without that. 
It's not a lot of extra work; and the machine-time saved over a year may not make up for the person-time taken in changing to a system that avoids those things. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat May 16 13:13:00 2020 From: nginx-forum at forum.nginx.org (petecooper) Date: Sat, 16 May 2020 09:13:00 -0400 Subject: `if` or `include` for mode-specific `server` directives? In-Reply-To: <20200516124955.GN20939@daoine.org> References: <20200516124955.GN20939@daoine.org> Message-ID: Hi Francis. Francis Daly Wrote: ------------------------------------------------------- > I suggest that you'll be happier in the long run using a templating > language, or macro-substituting language, external to nginx; along > with > "source" conf files that are to have the substitutions applied; and > change the value there and regenerate the nginx conf parts, and then > reload nginx. > [...] > It is a significant change to your setup right now; but it means that > your running nginx config will have exactly what you want, without > needing > to worry about levels of indirection of run-time variable > substitution. Excellent. Thank you. You're absolutely correct, this is a big step change for me and will require some r&d, but I like the sounds of it. > > == `if` in the `server` block == > > > > I will readily admit I have never used `if`, having been scared away > by If > > Is Evil. > > "If Is Evil" is limited to "when used in location context"; so you're > ok to do what you suggest here. > > It just is not as efficient as it could be. Message received and understood on the `if` context, thank you. > > == Stub `server` excerpts for each mode == > > > > Rather than include `root` and `if` checks in the monolithic > `server` block > > file, I am considering outboarding each mode to its own stub file as > a > > sidecar to the main `server` block file, like this: > > This is better -- presumably you will edit the "include" line and > reload > nginx (or may use symlinks; and change where the symlink points and > reload nginx) when you want to change mode. Correct - edit, check config, then reload. > If you can avoid the run-time variable substitution altogether, this > is good. This is useful, too. I have zero reliance on any `set` variables at a site identification level, and I would prefer to reduce the workload Nginx has to undertake generally for the sake of more human time when it comes to mode switching. > If your "mode"-conf files have lots of parts that are very similar > apart > from a mode-related word, then you may be ok with manually updating > all > of them when you want to add something else; an external thing that > just > generates the three "mode" files could avoid that manual-duplication > step. My intention is for the mode-related stubs to just contain the unique parts: `root`, the `add_header` and anything else that crops up. The vast majority of site directives are deliberately (at least currently) included in the main `server` block file, the outboarding would be as little as possible directive-wise. > Objectively, using run-time variable substitution means that nginx > does > more work on every request than it would do without that. > > And nginx testing one or two extra "if" statements on every request > means that nginx does more work than it would do without that. > > It's not a lot of extra work; and the machine-time saved over a year > may not make up for the person-time taken in changing to a system that > avoids those things. 
...and I have my next lockdown project! Thank you very much for your informative and useful reply, I really appreciate it. With best wishes, Pete Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288057,288061#msg-288061 From nginx-forum at forum.nginx.org Sat May 16 22:29:01 2020 From: nginx-forum at forum.nginx.org (hgv) Date: Sat, 16 May 2020 18:29:01 -0400 Subject: Passing a special Magento URL to PHP-FPM Message-ID: I'm trying to learn how to pass special Magento 1.x URLs such as this to a PHP-FPM backend. /js/index.php/x.js?f=prototype/prototype.js,prototype/validation.js,mage/adminhtml/events.js,mage/adminhtml/form.js,scriptaculous/effects.js All the Nginx configs I've found (e.g. https://gist.github.com/rafaelstz/3bc3343017dd0118a577) include the same configuration blocks, but I wonder if this even works for the above-mentioned URL? location @handler { ## Magento uses a common front handler rewrite / /index.php; } location ~ \.php/ { ## Forward paths like /js/index.php/x.js to relevant handler rewrite ^(.*\.php)/ $1 last; } location ~ \.php$ { fastcgi_pass fpm_backend; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; ## See /etc/nginx/fastcgi_params } location / { index index.html index.php; ## Allow a static html file to be shown first try_files $uri $uri/ @handler; ## If missing pass the URI to Magento's front handler expires 30d; ## Assume all files are cachable if ($request_uri ~* "\.(png|gif|jpg|jpeg|css|js|swf|ico|txt|xml|bmp|pdf|doc|docx|ppt|pptx|zip)$") { expires max; } # set fastcgi settings, not allowed in the "if" block include /usr/local/etc/nginx/fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; #this line fastcgi_param SCRIPT_FILENAME $document_root/index.php; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param MAGE_RUN_CODE default; fastcgi_param MAGE_RUN_TYPE store; # rewrite - if file not found, pass it to the backend if (!-f $request_filename) { fastcgi_pass fpm_backend; break; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288062,288062#msg-288062 From nginx-forum at forum.nginx.org Sun May 17 16:13:20 2020 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Sun, 17 May 2020 12:13:20 -0400 Subject: TLSv1.3 by default? In-Reply-To: <20181123165100.GF99070@mdounin.ru> References: <20181123165100.GF99070@mdounin.ru> Message-ID: <19af5d70a7e1196d09a9c07e152dcf8c.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > On Fri, Nov 23, 2018 at 08:43:03AM -0500, Olaf van der Spek wrote: > > > > Why isn't 1.3 enabled by default (when available)? > > > > Syntax: ssl_protocols [SSLv2] [SSLv3] [TLSv1] [TLSv1.1] [TLSv1.2] > > [TLSv1.3]; > > Default: > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > > > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols > > The main reason is that when it was implemented, TLSv1.3 RFC > wasn't yet finalized, and TLSv1.3 was only available via various > drafts, and only with pre-release versions of OpenSSL. > > Now with RFC 8446 published and OpenSSL 1.1.1 with TLSv1.3 > released this probably can be reconsidered. On the other hand, Has this been reconsidered yet? 
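For reference, on an nginx built against OpenSSL 1.1.1 or newer, TLSv1.3 can be enabled explicitly rather than waiting for the default to change; a minimal sketch of the server-level directive (the compiled-in default at the time of writing is still TLSv1 TLSv1.1 TLSv1.2, as quoted above):

ssl_protocols TLSv1.2 TLSv1.3;
# note: ssl_ciphers does not affect TLSv1.3 cipher suites; those come from OpenSSL's own defaults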
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282098,288063#msg-288063 From francis at daoine.org Tue May 19 12:08:47 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 19 May 2020 13:08:47 +0100 Subject: Passing a special Magento URL to PHP-FPM In-Reply-To: References: Message-ID: <20200519120847.GO20939@daoine.org> On Sat, May 16, 2020 at 06:29:01PM -0400, hgv wrote: Hi there, > I'm trying to learn how to pass special Magento 1.x URLs such as this to a > PHP-FPM backend. https://forum.nginx.org/read.php?2,288050, plus the response at https://forum.nginx.org/read.php?2,288050,288051, perhaps? Cheers, f -- Francis Daly francis at daoine.org From xeioex at nginx.com Tue May 19 14:25:24 2020 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 19 May 2020 17:25:24 +0300 Subject: njs-0.4.1 Message-ID: Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release extends http module. Notable new features: - raw headers API: With the following request headers: : Host: localhost : Foo: bar : foo: bar2 All 'foo' headers can be collected with the syntax: : r.rawHeadersIn.filter(v=>v[0].toLowerCase() == 'foo').map(v=>v[1]); the output will be: : ['bar', 'bar2'] - TypeScript API definition: : foo.ts: : /// : function content_handler(r: NginxHTTPRequest) : { : r.headersOut['content-type'] = 'text/plain'; : r.return(200, "Hello from TypeScript"); : } : : tsc foo.ts --outFile foo.js foo.js can be used directly with njs. You can learn more about njs: - Overview and introduction: http://nginx.org/en/docs/njs/ - Using node modules with njs: http://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: http://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.4.1 19 May 2020 *) Feature: added support for multi-value headers in r.headersIn. *) Feature: introduced raw headers API. *) Feature: added TypeScript API description. 
Core: *) Bugfix: fixed Array.prototype.slice() for sparse arrays. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu May 21 17:27:30 2020 From: nginx-forum at forum.nginx.org (finalturismo) Date: Thu, 21 May 2020 13:27:30 -0400 Subject: http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. Message-ID: So i have a few sites setup on my nginx web server and my ssl has been working fine. Problem is iam getting a curl ssl error and iam not sure why? The error is as follows http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. I never gotten any ssl errors before, besides this time when i went to import demo data on a wordpress site? I need help as i need to get this site up by Friday. Here is my current nginx ssl configuration file server { listen 80; root /tmp/ewtwtertert; index index.html index.htm index.nginx-debian.html index.php; server_name dfwelectronicsrecycling.com www.dfwelectronicsrecycling.com; location / { rewrite .* https://www.dfwelectronicsrecycling.com/$1; } } server { listen 443 ssl; ssl_certificate /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling.crt; ssl_certificate_key /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_session_timeout 10m; ssl_session_cache shared:SSL:10m; ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW !kECDH !DSS !MD5 !EXP !PSK !SRP !CAMELLIA !SEED'; root /var/www/dfwelectronicsrecycling.com/public_html; index index.html index.htm index.php; server_name dfwelectronicsrecycling.com www.dfwelectronicsrecycling.com; location / { try_files $uri $uri/ /index.php?$args; } ##### Include , Security and configuration files # include /etc/nginx/sites-conf/dfwelectronicsrecycling.com/*; location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/var/run/php-fpm.socket; } location = /.well-known/pki-validation { types {} default_type text/html; } } Here is my nginx configuration file. user nginx nginx; worker_processes auto; pid /run/nginx.pid; #include /etc/nginx/modules-enabled/*.conf; # BEGIN W3TC Page Cache cache # END W3TC Page Cache cache events { use epoll; worker_connections 1024; multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; client_max_body_size 120M; client_body_buffer_size 1M; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log warn; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## # included custom scripts include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; # error because this is in http {} directive. 
for redirecting you need in server {} directive } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288085,288085#msg-288085 From teward at thomas-ward.net Thu May 21 18:14:36 2020 From: teward at thomas-ward.net (Thomas Ward) Date: Thu, 21 May 2020 14:14:36 -0400 Subject: http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. In-Reply-To: References: Message-ID: <060fb4e4-22dd-e57a-9cf0-545e8ec45c0f@thomas-ward.net> How did you generate your certificate at /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling.crt ? Is it a self-signed certificate or generated by LetsEncrypt or some other mechanism?? IF it's self-signed this is Normal Behavior, you can override it with the `-k` flag/argument to Curl.? If it's from a legitimate SSL provider then you aren't serving the certificate chain too. Thomas On 5/21/20 1:27 PM, finalturismo wrote: > So i have a few sites setup on my nginx web server and my ssl has been > working fine. > > Problem is iam getting a curl ssl error and iam not sure why? > > The error is as follows http_request_failed - cURL error 60: SSL certificate > problem: unable to get local issuer certificate. > > I never gotten any ssl errors before, besides this time when i went to > import demo data on a wordpress site? > > I need help as i need to get this site up by Friday. > > Here is my current nginx ssl configuration file > > server { > listen 80; > > > > root /tmp/ewtwtertert; > index index.html index.htm index.nginx-debian.html index.php; > server_name dfwelectronicsrecycling.com www.dfwelectronicsrecycling.com; > > location / { > rewrite .* https://www.dfwelectronicsrecycling.com/$1; > } > } > > server { > listen 443 ssl; > ssl_certificate /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling.crt; > ssl_certificate_key /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling.key; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_prefer_server_ciphers on; > ssl_session_timeout 10m; > ssl_session_cache shared:SSL:10m; > ssl_ciphers 'kEECDH+ECDSA+AES128 kEECDH+ECDSA+AES256 kEECDH+AES128 > kEECDH+AES256 kEDH+AES128 kEDH+AES256 DES-CBC3-SHA +SHA !aNULL !eNULL !LOW > !kECDH !DSS !MD5 !EXP !PSK !SRP !CAMELLIA !SEED'; > > root /var/www/dfwelectronicsrecycling.com/public_html; > index index.html index.htm index.php; > server_name dfwelectronicsrecycling.com www.dfwelectronicsrecycling.com; > > location / { > try_files $uri $uri/ /index.php?$args; > } > > ##### Include , Security and configuration files > # include /etc/nginx/sites-conf/dfwelectronicsrecycling.com/*; > > > location ~ \.php$ { > include snippets/fastcgi-php.conf; > fastcgi_pass unix:/var/run/php-fpm.socket; > } > > > > > location = /.well-known/pki-validation { > types {} > default_type text/html; > } > > > > > > } > > > > > Here is my nginx configuration file. 
> user nginx nginx; > worker_processes auto; > pid /run/nginx.pid; > #include /etc/nginx/modules-enabled/*.conf; > > > # BEGIN W3TC Page Cache cache > # END W3TC Page Cache cache > > events { > use epoll; > worker_connections 1024; > multi_accept on; > } > > > > http { > > > > ## > # Basic Settings > ## > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 15; > types_hash_max_size 2048; > # server_tokens off; > > # server_names_hash_bucket_size 64; > # server_name_in_redirect off; > client_max_body_size 120M; > client_body_buffer_size 1M; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > ## > # SSL Settings > ## > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE > ssl_prefer_server_ciphers on; > > ## > # Logging Settings > ## > > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log warn; > > ## > # Gzip Settings > ## > > gzip on; > gzip_disable "msie6"; > > gzip_vary on; > gzip_proxied any; > gzip_comp_level 6; > gzip_buffers 16 8k; > gzip_http_version 1.1; > gzip_types text/plain text/css application/json application/javascript > text/xml application/xml application/xml+rss text/javascript; > > ## > # Virtual Host Configs > ## > > > > > # included custom scripts > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > > # error because this is in http {} directive. for redirecting you need in > server {} directive > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288085,288085#msg-288085 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Thu May 21 18:15:16 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 21 May 2020 13:15:16 -0500 Subject: http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. Message-ID: Your certificate chain is incomplete, and curl is complaining... https://www.ssllabs.com/ssltest/analyze.html?d=www.dfwelectronicsrecycling.com&hideResults=on You should add the Sectigo RSA Domain Validation Secure Server CA to your cert file, then it will probably be happy... From themadbeaker at gmail.com Thu May 21 18:18:02 2020 From: themadbeaker at gmail.com (J.R.) Date: Thu, 21 May 2020 13:18:02 -0500 Subject: http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. Message-ID: > location / { > rewrite .* https://www.dfwelectronicsrecycling.com/$1; > } Don't do that... The correct way when you want to redirect http to https would be: server { listen 80; server_name dfwelectronicsrecycling.com www.dfwelectronicsrecycling.com; access_log off; return 301 https://www.dfwelectronicsrecycling.com$request_uri; } From nginx-forum at forum.nginx.org Thu May 21 19:53:13 2020 From: nginx-forum at forum.nginx.org (finalturismo) Date: Thu, 21 May 2020 15:53:13 -0400 Subject: http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. In-Reply-To: References: Message-ID: How do i go about doing this? 
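For reference, the usual fix for an incomplete chain is to append the intermediate CA certificate after the server certificate in the file that ssl_certificate points at; a minimal sketch, where the intermediate file name is a placeholder for whatever bundle Sectigo supplied:

cat /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling.crt \
    sectigo-rsa-dv-secure-server-ca.crt \
    > /etc/nginx/ssl/dfwelectronicsrecycling.com/dfwelectronicsrecycling-chained.crt
# point ssl_certificate at the new chained file, then:
nginx -t && nginx -s reload

Order matters: the server certificate comes first, then the intermediate(s); the root CA certificate should not be included.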
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288085,288089#msg-288089 From nginx-forum at forum.nginx.org Thu May 21 19:54:56 2020 From: nginx-forum at forum.nginx.org (finalturismo) Date: Thu, 21 May 2020 15:54:56 -0400 Subject: http_request_failed - cURL error 60: SSL certificate problem: unable to get local issuer certificate. In-Reply-To: References: Message-ID: I know the correct way, as you are saying, but I have an extremely secure WordPress setup and most files are not in the public_html folder; there is a specific reason my friend and I did this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288085,288090#msg-288090 From nginx-forum at forum.nginx.org Thu May 21 20:38:18 2020 From: nginx-forum at forum.nginx.org (eckern) Date: Thu, 21 May 2020 16:38:18 -0400 Subject: Conditionally removing a proxy header In-Reply-To: <20200514220821.GH20939@daoine.org> References: <20200514220821.GH20939@daoine.org> Message-ID: <5e863f245763848bc7f2ec80085a81bd.NginxMailingListEnglish@forum.nginx.org> Thanks so much Francis, yes that seems to have worked. When the application is accessed outside our domain, it doesn't try to negotiate, which would pop up the Windows authentication prompt and would never work anyway; but if the user is inside our domain, either by being physically inside the building or through a VPN, the negotiate header is there to allow automatic sign-in using their Windows credentials. As you suggested I used a map: map $external_traffic $negotiate { 1 ''; 0 $upstream_http_www_authenticate; } Then inside the location block I removed and conditionally added the WWW-Authenticate header: proxy_hide_header WWW-Authenticate; # Remove negotiate header add_header WWW-Authenticate $negotiate always; # Add negotiate header for internal addresses Thanks again! Neil Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288015,288091#msg-288091 From nginx-forum at forum.nginx.org Fri May 22 11:14:42 2020 From: nginx-forum at forum.nginx.org (Tyler_durden_83) Date: Fri, 22 May 2020 07:14:42 -0400 Subject: How to use dynamic IP in resolver directive when NGINX installed on Multi Nodes Openshift cluster In-Reply-To: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com> References: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com> Message-ID: Has anyone found a way to resolve this? It seems like a pretty big deal to me; it completely breaks porting of microservices to OpenShift... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281239,288095#msg-288095 From zlmitche at syr.edu Fri May 22 14:59:22 2020 From: zlmitche at syr.edu (Zach Mitchell) Date: Fri, 22 May 2020 14:59:22 +0000 Subject: nginx 1.18.0 does not reload on ubuntu 18.04 Message-ID: I'm using nginx 1.18.0 and 1.16.1, and when I perform a systemctl restart nginx, it does not actually reload the configs. nginx -s reload does not work either. Am I missing a configure flag that allows this to work properly? 
Here is my nginx -V built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_realip_module --with-compat --with-pcre=../pcre-8.44 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl=../openssl-1.1.1g --with-openssl-opt=no-nextprotoneg --add-module=../ngx_cache_purge-2.3 Here is my systemd nginx.service [Unit] Description=The NGINX HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/usr/local/nginx/logs/nginx.pid ExecStartPre=/usr/local/sbin/nginx -t -q -g 'daemon on; master_process on;' ExecStart=/usr/local/sbin/nginx -g 'daemon on; master_process on;' ExecReload=/usr/local/sbin/nginx -g 'daemon on; master_process on;' -s reload ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target I've got a workaround, which i spawn a 2nd process and wait then kill the old, but what is the deal? ExecReload=/bin/bash -c "/bin/kill -USR2 $MAINPID && /bin/sleep 5 && /bin/kill -WINCH $MAINPID && /bin/sleep 5 && /bin/kill -QUIT $MAINPID" I appreciate the help! Thanks. Zach -------------- next part -------------- An HTML attachment was scrubbed... URL: From kohenkatz at gmail.com Fri May 22 15:30:36 2020 From: kohenkatz at gmail.com (Moshe Katz) Date: Fri, 22 May 2020 11:30:36 -0400 Subject: nginx 1.18.0 does not reload on ubuntu 18.04 In-Reply-To: References: Message-ID: I installed nginx on Ubuntu 18.04 from the nginx official repository, and the provided systemd service file is much simpler than yours. It looks like this: ``` [Unit] Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ After=network-online.target remote-fs.target nss-lookup.target Wants=network-online.target [Service] Type=forking PIDFile=/var/run/nginx.pid ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s TERM $MAINPID [Install] WantedBy=multi-user.target ``` You should see if something more like that works for you. On Fri, May 22, 2020 at 10:59 AM Zach Mitchell wrote: > I'm using a nginx 1.18.0 and 1.16.1 and when i perform a systemctl restart > nginx, it does not actually reload the configs. nginx -s reload does not > work either. Am i missing a configure flag that allows this to work > properly? 
> > > > Here is my nginx -V > > > > built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) > > built with OpenSSL 1.1.1g 21 Apr 2020 > > TLS SNI support enabled > > configure arguments: --prefix=/usr/local/nginx > --modules-path=/usr/lib/nginx/modules > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-threads > --with-file-aio --with-http_ssl_module --with-http_v2_module > --with-http_stub_status_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_stub_status_module > --with-http_realip_module --with-compat --with-pcre=../pcre-8.44 > --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl=../openssl-1.1.1g > --with-openssl-opt=no-nextprotoneg --add-module=../ngx_cache_purge-2.3 > > > > Here is my systemd nginx.service > > > > [Unit] > > Description=The NGINX HTTP and reverse proxy server > > After=syslog.target network.target remote-fs.target nss-lookup.target > > > > [Service] > > Type=forking > > PIDFile=/usr/local/nginx/logs/nginx.pid > > ExecStartPre=/usr/local/sbin/nginx -t -q -g 'daemon on; master_process on;' > > ExecStart=/usr/local/sbin/nginx -g 'daemon on; master_process on;' > > ExecReload=/usr/local/sbin/nginx -g 'daemon on; master_process on;' -s > reload > > ExecStop=/bin/kill -s QUIT $MAINPID > > PrivateTmp=true > > > > [Install] > > WantedBy=multi-user.target > > > > I've got a workaround, which i spawn a 2nd process and wait then kill the > old, but what is the deal? > > > > ExecReload=/bin/bash -c "/bin/kill -USR2 $MAINPID && /bin/sleep 5 && > /bin/kill -WINCH $MAINPID && /bin/sleep 5 && /bin/kill -QUIT $MAINPID" > > > > I appreciate the help! Thanks. > > > > *Zach* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From zlmitche at syr.edu Fri May 22 15:48:35 2020 From: zlmitche at syr.edu (Zach Mitchell) Date: Fri, 22 May 2020 15:48:35 +0000 Subject: nginx 1.18.0 does not reload on ubuntu 18.04 In-Reply-To: References: Message-ID: <0969127b3354490f80e3e3b5be9bc12f@syr.edu> I?ve also tried that ?ExecReload=/bin/kill -s HUP $MAINPID? which doesn?t work either. I?m testing by adding a new location rule and then reloading and it never gets picked up, I have to restart the nginx process and then it finally gets the new config. Zach From: nginx On Behalf Of Moshe Katz Sent: Friday, May 22, 2020 11:31 AM To: nginx at nginx.org Subject: Re: nginx 1.18.0 does not reload on ubuntu 18.04 I installed nginx on Ubuntu 18.04 from the nginx official repository, and the provided systemd service file is much simpler than yours. It looks like this: ``` [Unit] Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ After=network-online.target remote-fs.target nss-lookup.target Wants=network-online.target [Service] Type=forking PIDFile=/var/run/nginx.pid ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s TERM $MAINPID [Install] WantedBy=multi-user.target ``` You should see if something more like that works for you. 
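The fix Zach reports further down the thread (a LimitNOFILE entry in the [Service] section) can also be applied as a systemd drop-in rather than by editing the unit file itself; a minimal sketch, reusing the value quoted later:

# systemctl edit nginx    (creates /etc/systemd/system/nginx.service.d/override.conf)
[Service]
LimitNOFILE=60000
# then: systemctl daemon-reload && systemctl restart nginx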
On Fri, May 22, 2020 at 10:59 AM Zach Mitchell > wrote: I'm using a nginx 1.18.0 and 1.16.1 and when i perform a systemctl restart nginx, it does not actually reload the configs. nginx -s reload does not work either. Am i missing a configure flag that allows this to work properly? Here is my nginx -V built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_realip_module --with-compat --with-pcre=../pcre-8.44 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl=../openssl-1.1.1g --with-openssl-opt=no-nextprotoneg --add-module=../ngx_cache_purge-2.3 Here is my systemd nginx.service [Unit] Description=The NGINX HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/usr/local/nginx/logs/nginx.pid ExecStartPre=/usr/local/sbin/nginx -t -q -g 'daemon on; master_process on;' ExecStart=/usr/local/sbin/nginx -g 'daemon on; master_process on;' ExecReload=/usr/local/sbin/nginx -g 'daemon on; master_process on;' -s reload ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target I've got a workaround, which i spawn a 2nd process and wait then kill the old, but what is the deal? ExecReload=/bin/bash -c "/bin/kill -USR2 $MAINPID && /bin/sleep 5 && /bin/kill -WINCH $MAINPID && /bin/sleep 5 && /bin/kill -QUIT $MAINPID" I appreciate the help! Thanks. Zach _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From zlmitche at syr.edu Fri May 22 17:50:06 2020 From: zlmitche at syr.edu (Zach Mitchell) Date: Fri, 22 May 2020 17:50:06 +0000 Subject: nginx 1.18.0 does not reload on ubuntu 18.04 In-Reply-To: <0969127b3354490f80e3e3b5be9bc12f@syr.edu> References: <0969127b3354490f80e3e3b5be9bc12f@syr.edu> Message-ID: <82f864a533724b8f81a1a0bd438f29b8@syr.edu> Finally figured it out. I enabled debug mode on the error log. It turns out setting the worker_rlimit_nofile in the nginx.conf was not working. On nginx startup it would say ?getrlimit(RLIMIT_NOFILE): 1024:4096". I had already checked the security limits and they were set? I confirmed that the workers were using the worker_rlimit_nofile correctly, but the master was not able to reload correctly. Solution: Add LimitNOFILE to the system service, on the next nginx start up the debug logs show the correct nolimt! Reloading now works! Hurray! [Service] LimitNOFILE=60000 Zach From: nginx On Behalf Of Zach Mitchell Sent: Friday, May 22, 2020 11:49 AM To: nginx at nginx.org Subject: RE: nginx 1.18.0 does not reload on ubuntu 18.04 I?ve also tried that ?ExecReload=/bin/kill -s HUP $MAINPID? which doesn?t work either. I?m testing by adding a new location rule and then reloading and it never gets picked up, I have to restart the nginx process and then it finally gets the new config. 
Zach From: nginx > On Behalf Of Moshe Katz Sent: Friday, May 22, 2020 11:31 AM To: nginx at nginx.org Subject: Re: nginx 1.18.0 does not reload on ubuntu 18.04 I installed nginx on Ubuntu 18.04 from the nginx official repository, and the provided systemd service file is much simpler than yours. It looks like this: ``` [Unit] Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ After=network-online.target remote-fs.target nss-lookup.target Wants=network-online.target [Service] Type=forking PIDFile=/var/run/nginx.pid ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf ExecReload=/bin/kill -s HUP $MAINPID ExecStop=/bin/kill -s TERM $MAINPID [Install] WantedBy=multi-user.target ``` You should see if something more like that works for you. On Fri, May 22, 2020 at 10:59 AM Zach Mitchell > wrote: I'm using a nginx 1.18.0 and 1.16.1 and when i perform a systemctl restart nginx, it does not actually reload the configs. nginx -s reload does not work either. Am i missing a configure flag that allows this to work properly? Here is my nginx -V built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_realip_module --with-compat --with-pcre=../pcre-8.44 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl=../openssl-1.1.1g --with-openssl-opt=no-nextprotoneg --add-module=../ngx_cache_purge-2.3 Here is my systemd nginx.service [Unit] Description=The NGINX HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/usr/local/nginx/logs/nginx.pid ExecStartPre=/usr/local/sbin/nginx -t -q -g 'daemon on; master_process on;' ExecStart=/usr/local/sbin/nginx -g 'daemon on; master_process on;' ExecReload=/usr/local/sbin/nginx -g 'daemon on; master_process on;' -s reload ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target I've got a workaround, which i spawn a 2nd process and wait then kill the old, but what is the deal? ExecReload=/bin/bash -c "/bin/kill -USR2 $MAINPID && /bin/sleep 5 && /bin/kill -WINCH $MAINPID && /bin/sleep 5 && /bin/kill -QUIT $MAINPID" I appreciate the help! Thanks. Zach _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.evonosky at gmail.com Sat May 23 01:31:46 2020 From: alex.evonosky at gmail.com (Alex Evonosky) Date: Fri, 22 May 2020 21:31:46 -0400 Subject: Quick question on NGINX cache Message-ID: This should be pretty simple as I really cannot find a good answer on: Running Wordpress with the default permalinks (?page_id=xxx) On NGINX conf, I tried: location / { try_files $uri $uri/ /$args /index.php?$args; } And the main page caches OK, but any page the resides on the "?page_id" is not getting cached. 
Is there more to the "try_files" that needs applied for caching of these permalinks? Thank You, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From themadbeaker at gmail.com Sat May 23 12:42:40 2020 From: themadbeaker at gmail.com (J.R.) Date: Sat, 23 May 2020 07:42:40 -0500 Subject: Quick question on NGINX cache Message-ID: > And the main page caches OK, but any page the resides on the "?page_id" is > not getting cached. Is there more to the "try_files" that needs applied > for caching of these permalinks? Can you be more specific? Which "cache"? Browser cache? Nginx content cache? try_files has nothing to do with caching... Either way, you need to check your headers to ensure that they allow caching for said pages. Also if any cookies are being sent then nginx won't cache the page. From alex.evonosky at gmail.com Sat May 23 20:18:33 2020 From: alex.evonosky at gmail.com (Alex Evonosky) Date: Sat, 23 May 2020 16:18:33 -0400 Subject: Quick question on NGINX cache In-Reply-To: References: Message-ID: "Can you be more specific? Which "cache"? Browser cache? Nginx content cache? try_files has nothing to do with caching..." Nginx content cache "Either way, you need to check your headers to ensure that they allow caching for said pages. Also if any cookies are being sent then nginx won't cache the page." I looked at the headers using CURL.. The issue seems this: The request hits NGINX and the backend server(s) for Wordpress are cached just fine from just the FQDN --- example.com however, if I try to go to say, example.com/?page_id=1234, the headers do not show NGINX anymore, as only the servers for Wordpress show up; Almost like a cache punch-hole. ===== proxy.conf ==== proxy_cache_path /tmp/cache keys_zone=my_cache:10m max_size=10m inactive=60m; #proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; add_header X-Cache-Status $upstream_cache_status; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; ==== nginx.conf ==== http { upstream example.com { least_conn; server 10.10.10.138:8999; server 10.10.10.84:8999; } server { listen 82; location / { try_files $uri $uri/ /$args /index.php?$args; proxy_cache my_cache; proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504; proxy_cache_background_update on; proxy_pass http://example.com; proxy_cache_valid any 60m; proxy_cache_methods GET HEAD POST; proxy_http_version 1.1; proxy_set_header Connection keep-alive; proxy_ignore_headers Cache-Control Expires Set-Cookie; } } sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; gzip on; gzip_disable "msie6"; # include /etc/nginx/conf.d/*.conf; # include /etc/nginx/sites-enabled/*; include /etc/nginx/proxy.conf; } Thank you, Alex On Sat, May 23, 2020 at 8:43 AM J.R. wrote: > > And the main page caches OK, but any page the resides on the "?page_id" > is > > not getting cached. Is there more to the "try_files" that needs applied > > for caching of these permalinks? > > Can you be more specific? Which "cache"? Browser cache? Nginx content > cache? try_files has nothing to do with caching... > > Either way, you need to check your headers to ensure that they allow > caching for said pages. Also if any cookies are being sent then nginx > won't cache the page. 
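The fix Alex describes later in the thread amounts to neutralising the upstream anti-caching headers rather than changing try_files; a minimal sketch of the relevant location-level lines, assuming the same my_cache zone as in the config above:

proxy_cache my_cache;
proxy_ignore_headers Cache-Control Expires Set-Cookie;   # cache even when WordPress sends no-cache or Set-Cookie
proxy_hide_header Cache-Control;                          # keep the upstream no-cache header out of the client response
proxy_hide_header Set-Cookie;                             # only safe if the proxied pages never need to set cookies

Whether hiding Set-Cookie is acceptable depends on the site; for logged-in WordPress traffic it usually is not.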
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.evonosky at gmail.com Sat May 23 21:04:01 2020 From: alex.evonosky at gmail.com (Alex Evonosky) Date: Sat, 23 May 2020 17:04:01 -0400 Subject: Quick question on NGINX cache In-Reply-To: References: Message-ID: Disregard, found the issue. thank you. On Sat, May 23, 2020 at 4:18 PM Alex Evonosky wrote: > "Can you be more specific? Which "cache"? Browser cache? Nginx content > cache? try_files has nothing to do with caching..." > > > Nginx content cache > > > "Either way, you need to check your headers to ensure that they allow > caching for said pages. Also if any cookies are being sent then nginx > won't cache the page." > > > I looked at the headers using CURL.. > > The issue seems this: > > > The request hits NGINX and the backend server(s) for Wordpress are cached > just fine from just the FQDN --- example.com > > however, if I try to go to say, example.com/?page_id=1234, the headers do > not show NGINX anymore, as only the servers for Wordpress show up; Almost > like a > cache punch-hole. > > > ===== proxy.conf ==== > > proxy_cache_path /tmp/cache keys_zone=my_cache:10m max_size=10m > inactive=60m; > > #proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > add_header X-Cache-Status $upstream_cache_status; > client_max_body_size 10m; > client_body_buffer_size 128k; > proxy_connect_timeout 90; > proxy_send_timeout 90; > proxy_read_timeout 90; > proxy_buffers 32 4k; > > > > ==== nginx.conf ==== > > http { > upstream example.com { > least_conn; > server 10.10.10.138:8999; > server 10.10.10.84:8999; > } > > server { > listen 82; > location / { > try_files $uri $uri/ /$args /index.php?$args; > proxy_cache my_cache; > proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504; > proxy_cache_background_update on; > proxy_pass http://example.com; > proxy_cache_valid any 60m; > proxy_cache_methods GET HEAD POST; > proxy_http_version 1.1; > proxy_set_header Connection keep-alive; > proxy_ignore_headers Cache-Control Expires Set-Cookie; > } > } > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > > gzip on; > gzip_disable "msie6"; > > # include /etc/nginx/conf.d/*.conf; > # include /etc/nginx/sites-enabled/*; > include /etc/nginx/proxy.conf; > > } > > > > > > Thank you, > Alex > > > > > On Sat, May 23, 2020 at 8:43 AM J.R. wrote: > >> > And the main page caches OK, but any page the resides on the "?page_id" >> is >> > not getting cached. Is there more to the "try_files" that needs applied >> > for caching of these permalinks? >> >> Can you be more specific? Which "cache"? Browser cache? Nginx content >> cache? try_files has nothing to do with caching... >> >> Either way, you need to check your headers to ensure that they allow >> caching for said pages. Also if any cookies are being sent then nginx >> won't cache the page. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From community at thoughtmaybe.com Sat May 23 21:17:22 2020 From: community at thoughtmaybe.com (Jore) Date: Sun, 24 May 2020 07:17:22 +1000 Subject: Quick question on NGINX cache In-Reply-To: References: Message-ID: <44f27579-a65a-ec80-d2f8-c6fcf1295974@thoughtmaybe.com> Hi Alex/all, How did you fix? I've got a very similar issue. nginx running Wordpress with the Hypercache plugin but only the homepage is cached, other pages "miss" according to page headers. Thanks, Jore On 24/5/20 7:04 am, Alex Evonosky wrote: > Disregard, found the issue. > > thank you. > > On Sat, May 23, 2020 at 4:18 PM Alex Evonosky > wrote: > > "Can you be more specific? Which "cache"? Browser cache? Nginx content > cache? try_files has nothing to do with caching..." > > > Nginx content cache > > > "Either way, you need to check your headers to ensure that they allow > caching for said pages. Also if any cookies are being sent then nginx > won't cache the page." > > > I looked at the headers using CURL..?? > > The issue seems this: > > > The request hits NGINX and the backend server(s) for Wordpress are > cached just fine from just the FQDN --- example.com > > > however, if I try to go to say, example.com/?page_id=1234 > , the headers do not show NGINX > anymore, as only the servers for Wordpress show up; Almost like a? > cache punch-hole. > > > ===== proxy.conf ==== > > proxy_cache_path /tmp/cache keys_zone=my_cache:10m max_size=10m > inactive=60m; > > #proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > add_header X-Cache-Status $upstream_cache_status; > client_max_body_size 10m; > client_body_buffer_size 128k; > proxy_connect_timeout 90; > proxy_send_timeout 90; > proxy_read_timeout 90; > proxy_buffers 32 4k; > > > > ==== nginx.conf ==== > > http { > ? ? ? ? upstream example.com { > ? ? ? ? least_conn; > ? ? ? ? server 10.10.10.138:8999 ; > ? ? ? ? server 10.10.10.84:8999 ; > ? ? ? ? } > > server { > listen 82; > location / { > try_files $uri $uri/ /$args /index.php?$args; > proxy_cache my_cache; > proxy_cache_use_stale error timeout http_500 http_502 http_503 > http_504; > proxy_cache_background_update on; > proxy_pass http://example.com; > proxy_cache_valid any 60m; > proxy_cache_methods GET HEAD POST; > proxy_http_version 1.1; > proxy_set_header Connection keep-alive; > proxy_ignore_headers Cache-Control Expires Set-Cookie; > ? ? ? } > } > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > > gzip on; > gzip_disable "msie6"; > > # include /etc/nginx/conf.d/*.conf; > # include /etc/nginx/sites-enabled/*; > include /etc/nginx/proxy.conf; > > } > > > > > > Thank you, > Alex > > > > > On Sat, May 23, 2020 at 8:43 AM J.R. > wrote: > > > And the main page caches OK, but any page the resides on the > "?page_id" is > > not getting cached.? Is there more to the "try_files" that > needs applied > > for caching of these permalinks? > > Can you be more specific? Which "cache"? Browser cache? Nginx > content > cache? try_files has nothing to do with caching... > > Either way, you need to check your headers to ensure that they > allow > caching for said pages. Also if any cookies are being sent > then nginx > won't cache the page. 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From 7149144120 at txt.att.net Sat May 23 21:25:14 2020 From: 7149144120 at txt.att.net (7149144120 at txt.att.net) Date: Sat, 23 May 2020 21:25:14 -0000 Subject: Quick question on NGINX cache In-Reply-To: CAFZdCuo_ddT3wVR2hcpN9aiM8s++6qrWNaUGE+AaRZvMYgei2A@mail.gmail.com Message-ID: Iol -----Original Message----- From: Sent: Sat, 23 May 2020 16:18:33 -0400 To: 7149144120 at txt.att.net Subject: Re: Quick question on NGINX cache >"Can you be more specific? Which "cache"? Browser cache? Nginx content >cache? try_files has nothing to do with caching..." > > >Nginx content cache > > >"Either way, you need to ================================================================== This mobile text message is brought to you by AT&T From alex.evonosky at gmail.com Sat May 23 22:56:52 2020 From: alex.evonosky at gmail.com (Alex Evonosky) Date: Sat, 23 May 2020 18:56:52 -0400 Subject: Quick question on NGINX cache In-Reply-To: <44f27579-a65a-ec80-d2f8-c6fcf1295974@thoughtmaybe.com> References: <44f27579-a65a-ec80-d2f8-c6fcf1295974@thoughtmaybe.com> Message-ID: Jore- I applied the proxy_hide_header for no-cache headers to NGINX can process it and cache it. On Sat, May 23, 2020 at 5:17 PM Jore wrote: > Hi Alex/all, > > How did you fix? > > I've got a very similar issue. > > nginx running Wordpress with the Hypercache plugin but only the homepage > is cached, other pages "miss" according to page headers. > > Thanks, > Jore > > > On 24/5/20 7:04 am, Alex Evonosky wrote: > > Disregard, found the issue. > > thank you. > > On Sat, May 23, 2020 at 4:18 PM Alex Evonosky > wrote: > >> "Can you be more specific? Which "cache"? Browser cache? Nginx content >> cache? try_files has nothing to do with caching..." >> >> >> Nginx content cache >> >> >> "Either way, you need to check your headers to ensure that they allow >> caching for said pages. Also if any cookies are being sent then nginx >> won't cache the page." >> >> >> I looked at the headers using CURL.. >> >> The issue seems this: >> >> >> The request hits NGINX and the backend server(s) for Wordpress are cached >> just fine from just the FQDN --- example.com >> >> however, if I try to go to say, example.com/?page_id=1234, the headers >> do not show NGINX anymore, as only the servers for Wordpress show up; >> Almost like a >> cache punch-hole. 
>> >> >> ===== proxy.conf ==== >> >> proxy_cache_path /tmp/cache keys_zone=my_cache:10m max_size=10m >> inactive=60m; >> >> #proxy_redirect off; >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> add_header X-Cache-Status $upstream_cache_status; >> client_max_body_size 10m; >> client_body_buffer_size 128k; >> proxy_connect_timeout 90; >> proxy_send_timeout 90; >> proxy_read_timeout 90; >> proxy_buffers 32 4k; >> >> >> >> ==== nginx.conf ==== >> >> http { >> upstream example.com { >> least_conn; >> server 10.10.10.138:8999; >> server 10.10.10.84:8999; >> } >> >> server { >> listen 82; >> location / { >> try_files $uri $uri/ /$args /index.php?$args; >> proxy_cache my_cache; >> proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504; >> proxy_cache_background_update on; >> proxy_pass http://example.com; >> proxy_cache_valid any 60m; >> proxy_cache_methods GET HEAD POST; >> proxy_http_version 1.1; >> proxy_set_header Connection keep-alive; >> proxy_ignore_headers Cache-Control Expires Set-Cookie; >> } >> } >> >> sendfile on; >> tcp_nopush on; >> tcp_nodelay on; >> keepalive_timeout 65; >> types_hash_max_size 2048; >> >> gzip on; >> gzip_disable "msie6"; >> >> # include /etc/nginx/conf.d/*.conf; >> # include /etc/nginx/sites-enabled/*; >> include /etc/nginx/proxy.conf; >> >> } >> >> >> >> >> >> Thank you, >> Alex >> >> >> >> >> On Sat, May 23, 2020 at 8:43 AM J.R. wrote: >> >>> > And the main page caches OK, but any page the resides on the >>> "?page_id" is >>> > not getting cached. Is there more to the "try_files" that needs >>> applied >>> > for caching of these permalinks? >>> >>> Can you be more specific? Which "cache"? Browser cache? Nginx content >>> cache? try_files has nothing to do with caching... >>> >>> Either way, you need to check your headers to ensure that they allow >>> caching for said pages. Also if any cookies are being sent then nginx >>> won't cache the page. >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun May 24 16:20:27 2020 From: nginx-forum at forum.nginx.org (rodrigobuch) Date: Sun, 24 May 2020 12:20:27 -0400 Subject: [error] 18#18: *72 upstream timed out (110: Connection timed out) Message-ID: <615cef02a989967ca749afa65213f5d4.NginxMailingListEnglish@forum.nginx.org> I have received this error whenever I try to generate large reports with my App in PHP. It does not end the generation and ends the connection by not downloading it. 
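One thing worth checking with long-running report generation: in the configuration quoted below, the http-level proxy_read_timeout is 300000, but the location block in location.conf overrides it with 60s, and the more specific value wins for that vhost. A minimal sketch of the usual adjustment (the 300s figure is only an example):

location / {
    proxy_pass http://prd_exemple_teste;
    proxy_read_timeout 300s;   # give the PHP backend time to build the report
    proxy_send_timeout 300s;
}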
Nginx.conf user nginx; worker_processes auto; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; events { worker_connections 2048; } http { server_names_hash_max_size 512; server_names_hash_bucket_size 128; include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; # access_log off; sendfile on; #tcp_nopush on; client_body_buffer_size 800M; client_max_body_size 400M; keepalive_timeout 120; keepalive_requests 2048; #gzip on; # Enable Cache Support # proxy_cache_path /data/nginx/cache/app levels=1:2 keys_zone=STATIC:10m max_size=1g; proxy_cache_path /data/nginx/cache/app levels=1:2 keys_zone=app:10m max_size=2g inactive=60m use_temp_path=off; real_ip_header X-Forwarded-For; real_ip_recursive on; proxy_buffers 16 16k; proxy_buffer_size 16k; proxy_connect_timeout 300000; proxy_send_timeout 300000; proxy_read_timeout 300000; send_timeout 300000; include /etc/nginx/conf.d/*.conf; server { listen 8081; server_name 10.100.0.239 10.100.0.187 10.100.0.27 10.100.0.230; large_client_header_buffers 4 128k; location / { stub_status; } location /server_status { stub_status; } } } location.conf server { listen 80; server_name teste.exemple.com.br; gzip on; gzip_vary on; gzip_comp_level 4; gzip_types text/plain text/css text/javascript application/x-javascript text/xml application/xml; gzip_min_length 1400; server_tokens off; add_header X-Content-Type-Options nosniff; proxy_hide_header X-Powered-By; client_header_timeout 5s; client_body_timeout 5s; location / { proxy_pass http://prd_exemple_teste; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_http_version 1.1; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_connect_timeout 60s; proxy_send_timeout 60s; proxy_read_timeout 60s; send_timeout 60s; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288113,288113#msg-288113 From community at thoughtmaybe.com Sun May 24 19:07:52 2020 From: community at thoughtmaybe.com (Jore) Date: Mon, 25 May 2020 05:07:52 +1000 Subject: Quick question on NGINX cache In-Reply-To: References: <44f27579-a65a-ec80-d2f8-c6fcf1295974@thoughtmaybe.com> Message-ID: <4b216683-84e5-cb74-9d03-cd48c0afe48b@thoughtmaybe.com> Hi there, Thanks for that. Could you provide an example conf by any chance please, so I can get my head around that? Thanks! Jore On 24/5/20 8:56 am, Alex Evonosky wrote: > Jore- > > I applied the proxy_hide_header for no-cache headers to NGINX can > process it and cache it. > > On Sat, May 23, 2020 at 5:17 PM Jore > wrote: > > Hi Alex/all, > > How did you fix? > > I've got a very similar issue. > > nginx running Wordpress with the Hypercache plugin but only the > homepage is cached, other pages "miss" according to page headers. > > Thanks, > Jore > > > On 24/5/20 7:04 am, Alex Evonosky wrote: >> Disregard, found the issue. >> >> thank you. >> >> On Sat, May 23, 2020 at 4:18 PM Alex Evonosky >> > wrote: >> >> "Can you be more specific? Which "cache"? Browser cache? >> Nginx content >> cache? try_files has nothing to do with caching..." >> >> >> Nginx content cache >> >> >> "Either way, you need to check your headers to ensure that >> they allow >> caching for said pages. Also if any cookies are being sent >> then nginx >> won't cache the page." 
>> >> >> I looked at the headers using CURL..?? >> >> The issue seems this: >> >> >> The request hits NGINX and the backend server(s) for >> Wordpress are cached just fine from just the FQDN --- >> example.com >> >> however, if I try to go to say, example.com/?page_id=1234 >> , the headers do not show >> NGINX anymore, as only the servers for Wordpress show up; >> Almost like a? >> cache punch-hole. >> >> >> ===== proxy.conf ==== >> >> proxy_cache_path /tmp/cache keys_zone=my_cache:10m >> max_size=10m inactive=60m; >> >> #proxy_redirect off; >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> add_header X-Cache-Status $upstream_cache_status; >> client_max_body_size 10m; >> client_body_buffer_size 128k; >> proxy_connect_timeout 90; >> proxy_send_timeout 90; >> proxy_read_timeout 90; >> proxy_buffers 32 4k; >> >> >> >> ==== nginx.conf ==== >> >> http { >> ? ? ? ? upstream example.com { >> ? ? ? ? least_conn; >> ? ? ? ? server 10.10.10.138:8999 ; >> ? ? ? ? server 10.10.10.84:8999 ; >> ? ? ? ? } >> >> server { >> listen 82; >> location / { >> try_files $uri $uri/ /$args /index.php?$args; >> proxy_cache my_cache; >> proxy_cache_use_stale error timeout http_500 http_502 >> http_503 http_504; >> proxy_cache_background_update on; >> proxy_pass http://example.com; >> proxy_cache_valid any 60m; >> proxy_cache_methods GET HEAD POST; >> proxy_http_version 1.1; >> proxy_set_header Connection keep-alive; >> proxy_ignore_headers Cache-Control Expires Set-Cookie; >> ? ? ? } >> } >> >> sendfile on; >> tcp_nopush on; >> tcp_nodelay on; >> keepalive_timeout 65; >> types_hash_max_size 2048; >> >> gzip on; >> gzip_disable "msie6"; >> >> # include /etc/nginx/conf.d/*.conf; >> # include /etc/nginx/sites-enabled/*; >> include /etc/nginx/proxy.conf; >> >> } >> >> >> >> >> >> Thank you, >> Alex >> >> >> >> >> On Sat, May 23, 2020 at 8:43 AM J.R. > > wrote: >> >> > And the main page caches OK, but any page the resides >> on the "?page_id" is >> > not getting cached.? Is there more to the "try_files" >> that needs applied >> > for caching of these permalinks? >> >> Can you be more specific? Which "cache"? Browser cache? >> Nginx content >> cache? try_files has nothing to do with caching... >> >> Either way, you need to check your headers to ensure that >> they allow >> caching for said pages. Also if any cookies are being >> sent then nginx >> won't cache the page. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From qusaialbreazet1995 at gmail.com Mon May 25 01:55:51 2020 From: qusaialbreazet1995 at gmail.com (=?UTF-8?B?2LnZgNqH2YDal9q82YDZsSDavNmA2qPZgNm82YDaltqo26bbkg==?=) Date: Mon, 25 May 2020 04:55:51 +0300 Subject: No subject Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From awasthi.dvk at gmail.com Mon May 25 10:24:07 2020 From: awasthi.dvk at gmail.com (Devika Awasthi) Date: Mon, 25 May 2020 15:54:07 +0530 Subject: nginx Digest, Vol 127, Issue 25 In-Reply-To: References: Message-ID: can I be unsubscribed from this list please? ?On Mon, May 25, 2020 at 7:27 AM ????????? ???????????? < qusaialbreazet1995 at gmail.com> wrote:? > > > ?? ?????? 23 ???? 2020 3:00 ? ???: > >> Send nginx mailing list submissions to >> nginx at nginx.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://mailman.nginx.org/mailman/listinfo/nginx >> or, via email, send a message with subject or body 'help' to >> nginx-request at nginx.org >> >> You can reach the person managing the list at >> nginx-owner at nginx.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of nginx digest..." >> >> >> Today's Topics: >> >> 1. RE: nginx 1.18.0 does not reload on ubuntu 18.04 (Zach Mitchell) >> 2. Quick question on NGINX cache (Alex Evonosky) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Fri, 22 May 2020 17:50:06 +0000 >> From: Zach Mitchell >> To: "nginx at nginx.org" >> Subject: RE: nginx 1.18.0 does not reload on ubuntu 18.04 >> Message-ID: <82f864a533724b8f81a1a0bd438f29b8 at syr.edu> >> Content-Type: text/plain; charset="utf-8" >> >> Finally figured it out. I enabled debug mode on the error log. It turns >> out setting the worker_rlimit_nofile in the nginx.conf was not working. On >> nginx startup it would say ?getrlimit(RLIMIT_NOFILE): 1024:4096". I had >> already checked the security limits and they were set? I confirmed that the >> workers were using the worker_rlimit_nofile correctly, but the master was >> not able to reload correctly. >> >> Solution: >> Add LimitNOFILE to the system service, on the next nginx start up the >> debug logs show the correct nolimt! Reloading now works! Hurray! >> >> [Service] >> LimitNOFILE=60000 >> >> >> Zach >> >> From: nginx On Behalf Of Zach Mitchell >> Sent: Friday, May 22, 2020 11:49 AM >> To: nginx at nginx.org >> Subject: RE: nginx 1.18.0 does not reload on ubuntu 18.04 >> >> I?ve also tried that ?ExecReload=/bin/kill -s HUP $MAINPID? which doesn?t >> work either. >> >> I?m testing by adding a new location rule and then reloading and it never >> gets picked up, I have to restart the nginx process and then it finally >> gets the new config. >> >> Zach >> >> From: nginx > On >> Behalf Of Moshe Katz >> Sent: Friday, May 22, 2020 11:31 AM >> To: nginx at nginx.org >> Subject: Re: nginx 1.18.0 does not reload on ubuntu 18.04 >> >> >> I installed nginx on Ubuntu 18.04 from the nginx official repository, and >> the provided systemd service file is much simpler than yours. It looks like >> this: >> >> ``` >> [Unit] >> Description=nginx - high performance web server >> Documentation=http://nginx.org/en/docs/ >> After=network-online.target remote-fs.target nss-lookup.target >> Wants=network-online.target >> >> [Service] >> Type=forking >> PIDFile=/var/run/nginx.pid >> ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf >> ExecReload=/bin/kill -s HUP $MAINPID >> ExecStop=/bin/kill -s TERM $MAINPID >> >> [Install] >> WantedBy=multi-user.target >> ``` >> >> You should see if something more like that works for you. 
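The LimitNOFILE fix from earlier in this digest can also be applied without editing the packaged unit file, by way of a systemd drop-in. A minimal sketch, using the standard drop-in path and the 60000 value from Zach's message (adjust both to your install):

    # /etc/systemd/system/nginx.service.d/limits.conf
    [Service]
    LimitNOFILE=60000

    # pick up the change and verify it took effect:
    # systemctl daemon-reload
    # systemctl restart nginx
    # systemctl show nginx -p LimitNOFILE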
>> >> >> On Fri, May 22, 2020 at 10:59 AM Zach Mitchell > zlmitche at syr.edu>> wrote: >> I'm using a nginx 1.18.0 and 1.16.1 and when i perform a systemctl >> restart nginx, it does not actually reload the configs. nginx -s reload >> does not work either. Am i missing a configure flag that allows this to >> work properly? >> >> Here is my nginx -V >> >> built by gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) >> built with OpenSSL 1.1.1g 21 Apr 2020 >> TLS SNI support enabled >> configure arguments: --prefix=/usr/local/nginx >> --modules-path=/usr/lib/nginx/modules >> --http-client-body-temp-path=/var/cache/nginx/client_temp >> --http-proxy-temp-path=/var/cache/nginx/proxy_temp >> --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp >> --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp >> --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-threads >> --with-file-aio --with-http_ssl_module --with-http_v2_module >> --with-http_stub_status_module --with-http_gunzip_module >> --with-http_gzip_static_module --with-http_stub_status_module >> --with-http_realip_module --with-compat --with-pcre=../pcre-8.44 >> --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-openssl=../openssl-1.1.1g >> --with-openssl-opt=no-nextprotoneg --add-module=../ngx_cache_purge-2.3 >> >> Here is my systemd nginx.service >> >> [Unit] >> Description=The NGINX HTTP and reverse proxy server >> After=syslog.target network.target remote-fs.target nss-lookup.target >> >> [Service] >> Type=forking >> PIDFile=/usr/local/nginx/logs/nginx.pid >> ExecStartPre=/usr/local/sbin/nginx -t -q -g 'daemon on; master_process >> on;' >> ExecStart=/usr/local/sbin/nginx -g 'daemon on; master_process on;' >> ExecReload=/usr/local/sbin/nginx -g 'daemon on; master_process on;' -s >> reload >> ExecStop=/bin/kill -s QUIT $MAINPID >> PrivateTmp=true >> >> [Install] >> WantedBy=multi-user.target >> >> I've got a workaround, which i spawn a 2nd process and wait then kill the >> old, but what is the deal? >> >> ExecReload=/bin/bash -c "/bin/kill -USR2 $MAINPID && /bin/sleep 5 && >> /bin/kill -WINCH $MAINPID && /bin/sleep 5 && /bin/kill -QUIT $MAINPID" >> >> I appreciate the help! Thanks. >> >> Zach >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: < >> http://mailman.nginx.org/pipermail/nginx/attachments/20200522/e5652a44/attachment-0001.htm >> > >> >> ------------------------------ >> >> Message: 2 >> Date: Fri, 22 May 2020 21:31:46 -0400 >> From: Alex Evonosky >> To: nginx at nginx.org >> Subject: Quick question on NGINX cache >> Message-ID: >> < >> CAFZdCup_ADzqkwuC5am_tr2U9cYt20L-oqigoqXY6W57U9_eCw at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> This should be pretty simple as I really cannot find a good answer on: >> >> >> Running Wordpress with the default permalinks (?page_id=xxx) >> >> On NGINX conf, I tried: >> >> location / { >> try_files $uri $uri/ /$args /index.php?$args; >> } >> >> And the main page caches OK, but any page the resides on the "?page_id" is >> not getting cached. Is there more to the "try_files" that needs applied >> for caching of these permalinks? >> >> >> Thank You, >> Alex >> -------------- next part -------------- >> An HTML attachment was scrubbed... 
>> URL: < >> http://mailman.nginx.org/pipermail/nginx/attachments/20200522/73627236/attachment-0001.htm >> > >> >> ------------------------------ >> >> Subject: Digest Footer >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> ------------------------------ >> >> End of nginx Digest, Vol 127, Issue 25 >> ************************************** >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 26 10:33:35 2020 From: nginx-forum at forum.nginx.org (svoop) Date: Tue, 26 May 2020 06:33:35 -0400 Subject: Compilation with Passenger support hangs on "adding module" Message-ID: <54cf7f2a69f3a40ef6e395601bca62c4.NginxMailingListEnglish@forum.nginx.org> Hi I'm using Nginx on a Gentoo Linux box to serve Ruby apps with Passenger for years now and up to this point, never any real trouble compiling Nginx. Today, however, trying to upgrade Nginx from 1.16.1 to 1.17.10, the compilation hangs early when adding the Passenger module. The version of Passenger (6.0.4) hasn't changed in a longer while, however, I've recently updated from gcc-9.2 to gcc-9.3. Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-pc-linux-gnu/9.3.0/lto-wrapper Target: x86_64-pc-linux-gnu Configured with: /var/tmp/portage/sys-devel/gcc-9.3.0/work/gcc-9.3.0/configure --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu --prefix=/usr --bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/9.3.0 --includedir=/usr/lib/gcc/x86_64-pc-linux-gnu/9.3.0/include --datadir=/usr/share/gcc-data/x86_64-pc-linux-gnu/9.3.0 --mandir=/usr/share/gcc-data/x86_64-pc-linux-gnu/9.3.0/man --infodir=/usr/share/gcc-data/x86_64-pc-linux-gnu/9.3.0/info --with-gxx-include-dir=/usr/lib/gcc/x86_64-pc-linux-gnu/9.3.0/include/g++-v9 --with-python-dir=/share/gcc-data/x86_64-pc-linux-gnu/9.3.0/python --enable-languages=c,c++ --enable-obsolete --enable-secureplt --disable-werror --with-system-zlib --enable-nls --without-included-gettext --enable-checking=release --with-bugurl=https://bugs.gentoo.org/ --with-pkgversion='Gentoo Hardened 9.3.0 p2' --enable-esp --enable-libstdcxx-time --disable-libstdcxx-pch --enable-shared --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu --enable-multilib --with-multilib-list=m32,m64 --disable-altivec --disable-fixed-point --enable-targets=all --enable-libgomp --disable-libmudflap --disable-libssp --disable-libada --disable-systemtap --enable-vtable-verify --disable-libquadmath --enable-lto --without-isl --enable-default-pie --enable-default-ssp Thread model: posix gcc version 9.3.0 (Gentoo Hardened 9.3.0 p2) Here's the build output: >>> Configuring source in /var/tmp/portage/www-servers/nginx-1.17.10-r1/work/nginx-1.17.10 ... checking for OS + Linux 5.4.38-gentoo x86_64 checking for C compiler ... found + using GNU C compiler checking for --with-ld-opt="-L/usr/lib64" ... found checking for -Wl,-E switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for gcc builtin 64 bit byteswap ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... 
found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for EPOLLRDHUP ... found checking for EPOLLEXCLUSIVE ... found checking for O_PATH ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for prctl(PR_SET_KEEPCAPS) ... found checking for capabilities ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for sched_setaffinity() ... found checking for SO_SETFIB ... not found checking for SO_REUSEPORT ... found checking for SO_ACCEPTFILTER ... not found checking for SO_BINDANY ... not found checking for IP_TRANSPARENT ... found checking for IP_BINDANY ... not found checking for IP_BIND_ADDRESS_NO_PORT ... found checking for IP_RECVDSTADDR ... not found checking for IP_SENDSRCADDR ... not found checking for IP_PKTINFO ... found checking for IPV6_RECVPKTINFO ... found checking for TCP_DEFER_ACCEPT ... found checking for TCP_KEEPIDLE ... found checking for TCP_FASTOPEN ... found checking for TCP_INFO ... found checking for accept4() ... found checking for eventfd() ... found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint32_t ... found checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system byte ordering ... little endian checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for AF_INET6 ... found checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for pwritev() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for clock_gettime(CLOCK_MONOTONIC) ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for ioctl(FIONREAD) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for sysconf(_SC_NPROCESSORS_ONLN) ... found checking for sysconf(_SC_LEVEL1_DCACHE_LINESIZE) ... found checking for openat(), fstatat() ... found checking for getaddrinfo() ... 
found configuring additional modules adding module in /usr/local/lib/ruby/gems/2.6.0/gems/passenger-6.0.4/src/nginx_module Before diving into the whole toolchain, environment, logs dance: Anything I can do to get more output than just "adding module"? Thanks for your help! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288136,288136#msg-288136 From nginx-forum at forum.nginx.org Tue May 26 10:41:55 2020 From: nginx-forum at forum.nginx.org (NaviTrack) Date: Tue, 26 May 2020 06:41:55 -0400 Subject: Nginx returns HTTP Code 500 for large request Message-ID: <75cade37d93556d5d6150fc529f53b3a.NginxMailingListEnglish@forum.nginx.org> Hi all, I've faced with the next issue and need some help. When I send a large request with json content (the request size is above 10kB), Nginx returns "500 Internal Server Error". But, when I send the same request with less size, it works fine. My steps: 1. I enabled "debug" logging for error.log, but I don't see any errors. 2. Also, I added '$upstream_response_length $upstream_response_time $upstream_status $request_body' parameters for logging for access.log. Once "500 Internal Server Error" error happens, I can see these parameters are empty. This is an example of my log in access.log: "POST /report/exportreport HTTP/1.1" 500 186 9216 "-" - - - - "PostmanRuntime/7.25.0" "-" But I can see these parameters for other successful requests. 3. Also, I tried to use 'client_max_body_size' parameters in config file, but it doesn't work for me. It seems that Nginx trims large request. Please, could any help me? Thank you in advance. Version of Nginx: 1.14.1 Config Nginx: user centos; worker_processes auto; error_log /var/log/nginx/error.log debug; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent $request_length "$http_referer" ' '$upstream_response_length $upstream_response_time $upstream_status ' '$request_body ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; access_log on; error_log on; client_max_body_size 1m; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # don't send the nginx version number in error pages and Server header server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; # enable session resumption to improve https performance # http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; ## # Gzip Settings ## gzip on; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. 
include /etc/nginx/conf.d/*.conf; # frontnend server { if ($request_method !~ ^(GET|POST)$ ) { return 405; } server_name www.fcs.navitrack.com.ua fcs.navitrack.com.ua; listen 443 ssl; listen [::]:443 ssl; root /home/centos/navitrack/navitrack-fcs/frontend; index index.html; ssl_certificate "/etc/letsencrypt/live/www.fcs.navitrack.com.ua/fullchain.pem"; ssl_certificate_key "/etc/letsencrypt/live/www.fcs.navitrack.com.ua/privkey.pem"; ssl_trusted_certificate "/etc/letsencrypt/live/www.fcs.navitrack.com.ua/fullchain.pem"; client_max_body_size 1m; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; location / { add_header 'X-XSS-Protection' '1; mode=block' always; add_header 'X-Content-Type-Option' 'nosniff' always; add_header 'X-Frame-Options' 'SAMEORIGIN' always; add_header 'Content-Security-Policy' "default-src 'self'; script-src 'self' https://maps.googleapis.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' https://maps.googleapis.com https://maps.gstatic.com data:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.fcs.navitrack.com.ua https://maps.googleapis.com"; } location /login { alias /home/centos/navitrack/navitrack-fcs/frontend; add_header 'X-XSS-Protection' '1; mode=block' always; add_header 'X-Content-Type-Option' 'nosniff' always; add_header 'X-Frame-Options' 'SAMEORIGIN' always; add_header 'Content-Security-Policy' "default-src 'self'; script-src 'self' https://maps.googleapis.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' https://maps.googleapis.com https://maps.gstatic.com data:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.fcs.navitrack.com.ua https://maps.googleapis.com"; } error_page 404 /404.html; location = /40x.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } # backend server { if ($request_method !~ ^(GET|POST|OPTIONS)$ ) { return 405; } server_name api.fcs.navitrack.com.ua; listen 443 ssl; listen [::]:443 ssl; ssl_certificate "/etc/letsencrypt/live/api.fcs.navitrack.com.ua/fullchain.pem"; ssl_certificate_key "/etc/letsencrypt/live/api.fcs.navitrack.com.ua/privkey.pem"; ssl_trusted_certificate "/etc/letsencrypt/live/api.fcs.navitrack.com.ua/fullchain.pem"; client_max_body_size 1m; location / { #add CORS if ($request_method = 'OPTIONS') { add_header 'Access-Control-Allow-Origin' 'https://fcs.navitrack.com.ua' always; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always; add_header 'Access-Control-Max-Age' 86400 always; add_header 'Content-Type' 'text/plain; charset=utf-8' always; add_header 'Content-Length' 0 always; return 204; } if ($request_method = 'GET') { add_header 'Access-Control-Allow-Origin' 'https://fcs.navitrack.com.ua' always; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always; } if ($request_method = 'POST') { add_header 'Access-Control-Allow-Origin' 'https://fcs.navitrack.com.ua' always; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always; add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always; } proxy_pass http://localhost:18745; 
proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection keep-alive; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_buffer_size 20k; proxy_buffers 16 256k; proxy_busy_buffers_size 512k; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288137,288137#msg-288137 From mdounin at mdounin.ru Tue May 26 12:13:58 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 May 2020 15:13:58 +0300 Subject: Compilation with Passenger support hangs on "adding module" In-Reply-To: <54cf7f2a69f3a40ef6e395601bca62c4.NginxMailingListEnglish@forum.nginx.org> References: <54cf7f2a69f3a40ef6e395601bca62c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200526121358.GF12747@mdounin.ru> Hello! On Tue, May 26, 2020 at 06:33:35AM -0400, svoop wrote: > I'm using Nginx on a Gentoo Linux box to serve Ruby apps with Passenger for > years now and up to this point, never any real trouble compiling Nginx. > > Today, however, trying to upgrade Nginx from 1.16.1 to 1.17.10, the > compilation hangs early when adding the Passenger module. The version of > Passenger (6.0.4) hasn't changed in a longer while, however, I've recently > updated from gcc-9.2 to gcc-9.3. [...] > Here's the build output: > > >>> Configuring source in > /var/tmp/portage/www-servers/nginx-1.17.10-r1/work/nginx-1.17.10 ... > checking for OS > + Linux 5.4.38-gentoo x86_64 > checking for C compiler ... found > + using GNU C compiler [...] > checking for getaddrinfo() ... found > configuring additional modules > adding module in > /usr/local/lib/ruby/gems/2.6.0/gems/passenger-6.0.4/src/nginx_module > > Before diving into the whole toolchain, environment, logs dance: Anything I > can do to get more output than just "adding module"? Following the "adding module ..." line, nginx configure calls the "config" script from the module directory. And since there is no further output, it hangs somewhere in the config script of the passenger module. As such, there is nothing to be done on nginx side here, you should dig into passenger's config. Or you may want to upgrade passenger instead, to see if it helps. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue May 26 15:09:01 2020 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 May 2020 18:09:01 +0300 Subject: nginx-1.19.0 Message-ID: <20200526150901.GH12747@mdounin.ru> Changes with nginx 1.19.0 26 May 2020 *) Feature: client certificate validation with OCSP. *) Bugfix: "upstream sent frame for closed stream" errors might occur when working with gRPC backends. *) Bugfix: OCSP stapling might not work if the "resolver" directive was not specified. *) Bugfix: connections with incorrect HTTP/2 preface were not logged. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue May 26 16:14:46 2020 From: francis at daoine.org (Francis Daly) Date: Tue, 26 May 2020 17:14:46 +0100 Subject: Conditionally removing a proxy header In-Reply-To: <5e863f245763848bc7f2ec80085a81bd.NginxMailingListEnglish@forum.nginx.org> References: <20200514220821.GH20939@daoine.org> <5e863f245763848bc7f2ec80085a81bd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200526161446.GP20939@daoine.org> On Thu, May 21, 2020 at 04:38:18PM -0400, eckern wrote: Hi there, > Thanks so much Francis, yes that seems to be have worked. 
Great that you have a config that works for you; and thanks for sharing the confirmed config with the list -- that will almost certainly help the next person with the same issue :-) > application is accessed outside our domain, it doesn't try to negotiate > which would pop up the Windows authentication prompt and would never work > anyways, Good point -- I was concerned that the browser might not like a 401 response with no WWW-Authenticate header; but it sounds like it works well enough as-is. Cheers, f -- Francis Daly francis at daoine.org From kworthington at gmail.com Tue May 26 19:21:56 2020 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 26 May 2020 15:21:56 -0400 Subject: [nginx-announce] nginx-1.19.0 In-Reply-To: <20200526150908.GI12747@mdounin.ru> References: <20200526150908.GI12747@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.19.0 for Windows https:// kevinworthington.com/nginxwin1190 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington On Tue, May 26, 2020 at 11:09 AM Maxim Dounin wrote: > Changes with nginx 1.19.0 26 May > 2020 > > *) Feature: client certificate validation with OCSP. > > *) Bugfix: "upstream sent frame for closed stream" errors might occur > when working with gRPC backends. > > *) Bugfix: OCSP stapling might not work if the "resolver" directive was > not specified. > > *) Bugfix: connections with incorrect HTTP/2 preface were not logged. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 26 20:12:17 2020 From: nginx-forum at forum.nginx.org (svoop) Date: Tue, 26 May 2020 16:12:17 -0400 Subject: Compilation with Passenger support hangs on "adding module" In-Reply-To: <20200526121358.GF12747@mdounin.ru> References: <20200526121358.GF12747@mdounin.ru> Message-ID: <1b0bfd0ac5a1a92da487170445cfb719.NginxMailingListEnglish@forum.nginx.org> > Following the "adding module ..." line, nginx configure calls the > "config" script from the module directory. And since there is no > further output, it hangs somewhere in the config script of the > passenger module. You're right, I spend a good part of the day to track this down and figured out that the Passenger config script hangs whenever it invokes Ruby. And this happens only if Ruby is using hardened jemalloc. 
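One quick way to narrow this down further is to take nginx's configure out of the picture and check whether a trivial Ruby invocation returns at all in the same environment. This is only a generic sketch — the plain `ruby` on PATH is an assumption, substitute the interpreter Passenger's config script actually calls:

    # a no-op Ruby call, killed after 10 seconds if it stalls
    timeout 10 ruby -e 'puts RUBY_DESCRIPTION'; echo "exit: $?"
    # exit status 124 means timeout fired, i.e. Ruby itself hung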
Seems to be related to this problem: https://bugs.gentoo.org/617518 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288136,288158#msg-288158 From brucek at gmail.com Tue May 26 23:06:46 2020 From: brucek at gmail.com (Bruce Klein) Date: Tue, 26 May 2020 13:06:46 -1000 Subject: nginx update to 1.18.0 broke my wsl ubuntu 16.04 set up In-Reply-To: References: <20200511192101.GI20357@mdounin.ru> <20200512055359.GJ20357@mdounin.ru> Message-ID: Following up in case anyone with this same issue discovers this thread via search: My problems went away once I added these two new directives to nginx.conf: fastcgi_buffer_size 1024k; fastcgi_buffers 4 2048k; I am not recommending this config for any production use or even any new development setups. For myself, I'm happy to have this crutch so I can eke out a little more life from my existing WSL 1 setup prior to the imminent release of WSL 2. On Tue, May 12, 2020 at 8:13 AM Bruce Klein wrote: > Thanks Maxim! I don't know if I'll be able to fix it with that but I'll > sure learn a lot trying. I appreciate all the pointers on where to look. > > Best, > Bruce > > On Mon, May 11, 2020 at 7:54 PM Maxim Dounin wrote: > >> Hello! >> >> On Mon, May 11, 2020 at 09:38:12AM -1000, Bruce Klein wrote: >> >> > Hi Maxim, >> > >> > Thank you the reply, which I appreciate very much. I fully agree in >> spirit. >> > >> > In practice, the issue of previous versions not working on WSL is a >> > long-standing bug vs WSL that people far more expert than me on unix >> > internals, WSL, nginx, and fpm have not yet solved for two years plus, >> > other than everyone being told to disable fastcgi_buffering. (If you're >> > interested, there's plenty of history in various WSL bug reports to read >> > through.) >> > >> > No doubt the root cause here is a flaw in WSL. That's not on the nginx >> team >> > to fix. >> > >> > That said, as a practical matter, the once easily available workaround >> is >> > now gone. I'd like to understand what changed in 1.18 and if there is an >> > easy adaptation to it, as that seems the path of least resistance. >> >> To find out how to adapt a workaround - first you'll have to find >> out why the workaround used to work. That is, what is the bug in >> WSL we are trying to work around. >> >> Also note that it might not be a good idea to use things which >> depend on unexplained workarounds for flaws not fixed for years. >> As long as there is no explanation why the workaround work, this >> usually means that it can stop working unexpectedly and/or won't >> work in some edge cases. >> >> > For what it's worth, the issue generates no logging in either the nginx >> > error logs, access logs, or php7.1-fpm logs. It's impact is visible >> only on >> > the web client side, where the user sees it as a partially received page >> > and the net::ERR_INCOMPLETE_CHUNKED_ENCODING is available from the >> browser >> > developer tools once the browser has timed out on waiting for the rest >> of >> > the page. >> >> So, the problem is that transfer stalls at some point, correct? >> This looks like an issue with sockets handling, and some things to >> try include: >> >> 1. Check the debug log to find out where things stall from nginx >> point of view. >> >> 2. Try different event methods, such as "select" and "poll" >> (http://nginx.org/r/use). Note that this might require you to >> compile nginx yourself. >> >> 3. Play with socket-related options, such as tcp_nodelay >> (http://nginx.org/r/tcp_nodelay) and tcp_nopush >> (http://nginx.org/r/tcp_nopush). 
Unlikely to help though. >> >> 4. Play with TCP buffers ("listen ... sndbuf=...", assuming it >> stalls somewhere while sending to the client) to see if it helps. >> Likely a buffer larger than the response size should help. >> >> 5. Play with "fastcgi_max_temp_file_size 0;" and/or "sendfile >> on/off". >> >> As long as playing with buffering used to help somehow, this >> suggests that there is a problem with event reporting in the epoll >> emulation layer. I don't think that this is something that can be >> fixed on nginx side, and any workarounds, including >> "fastcgi_buffering off", are likely to fail in some edge cases. >> The working solution might be to use other event methods though, >> such as "select" or "poll", see above. Or to make sure that >> socket buffers are large enough to avoid blocking. >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 27 07:21:04 2020 From: nginx-forum at forum.nginx.org (proxyuser) Date: Wed, 27 May 2020 03:21:04 -0400 Subject: Problems with PROXY_PROTOCOL in ngx_stream In-Reply-To: <0fdbda5607cab0d9de6f3f131f1cf33e.NginxMailingListEnglish@forum.nginx.org> References: <0fdbda5607cab0d9de6f3f131f1cf33e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7d07243bdbe5d58e3f8bb254f80752e2.NginxMailingListEnglish@forum.nginx.org> Azure added its own extension to proxy protocol v2 PP2_TYPE_AZURE (0xEE) - https://docs.microsoft.com/en-us/azure/private-link/private-link-service-overview#getting-connection-information-using-tcp-proxy-v2 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,285581,288162#msg-288162 From mahmood.nt at gmail.com Wed May 27 11:44:33 2020 From: mahmood.nt at gmail.com (Mahmood Naderan) Date: Wed, 27 May 2020 16:14:33 +0430 Subject: Using custom glibc for compilation Message-ID: Hi I want to install nginx with a custom glibc version. I have installed that from source and for nginx, I tried $ GLIBCDIR=/opt/glibc-2.23-install $ ./configure --prefix=/home/mahmood/kernel-4.19.119/glibc223/install --with-ld-opt="-Wl,--emit-relocs" $ make && make install However, when I check, no related files in /opt/glibc-2.23-instal are used. $ ldd install/sbin/nginx linux-vdso.so.1 => (0x00007ffc07d43000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f0a8d2f3000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0a8d0d6000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f0a8ce9e000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f0a8cc2e000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f0a8ca14000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0a8c64a000) /lib64/ld-linux-x86-64.so.2 (0x00007f0a8d4f7000) Any idea about that? Regards, Mahmood -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 27 13:32:52 2020 From: nginx-forum at forum.nginx.org (petecooper) Date: Wed, 27 May 2020 09:32:52 -0400 Subject: PHP handling where URI contains a path, index is in root Message-ID: Hello. I run a PHP + MySQL content management system on Nginx (1.19.0 at time of writing) and an issue has arisen with the way I'm handing PHP files in some situations. The issue appears to manifest with queries when they are prepended by a path, where a `?` is prepended. 
If the queries exist in the root location, they work as expected. Take the two following URIs, note the second has a `path` set, but this does not exist on the filesystem. The CMS might, for example, have a path set to 'articles' or 'blog'.: http://subdomain.example.com/?a=b&d=g (No `path`) http://subdomain.example.com/path/?a=b&d=g (With `path`) Running $_GET array gives different results: = No `path` = array ( 'a' => 'b', 'd' => 'g', ) = With `path` = array ( '?a' => 'b', 'd' => 'g', ) Note the first key in the 'With `path`' example is wrongly prepended with `?`. My Nginx config appears to have been running fine for some time, but my instinct says there's either a `location` regex that I'm missing, or something else I've overlooked. I am, unfortunately, not smart enough to know what I'm doing wrong. I have included all my `location` blocks for this `server` so as not to trigger a conflict from another `location` block, the most relevant two are the last and second-to-last in the list. location ^~ /.well-known/ { allow all; default_type "text/plain"; root /var/www/sites/example.com/subdomain/_well-known/; try_files $uri $uri/ =404; } location /favicon.ico { access_log off; log_not_found off; } location /robots.txt { access_log off; log_not_found off; } location ~ /\. { deny all; } location ~ \.svg$ { #redeclare `add_header` from parent, with modified `style-src` for SVG set $csp_svg_1f173340 ''; set $csp_svg_1f173340 '${csp_svg_1f173340}default-src \'none\';'; set $csp_svg_1f173340 '${csp_svg_1f173340}frame-ancestors \'self\';'; set $csp_svg_1f173340 '${csp_svg_1f173340}style-src \'self\' \'unsafe-inline\';'; add_header Content-Security-Policy $csp_svg_1f173340; add_header Referrer-Policy strict-origin; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"; add_header X-Content-Type-Options nosniff; add_header X-Frame-Options SAMEORIGIN; add_header X-XSS-Protection "1; mode=block"; } location / { index index.html index.php; limit_except GET HEAD POST { deny all; } try_files $uri $uri/ /index.php?$is_args$args; } location ~ ^.+\.php(?:/.*)?$ { fastcgi_hide_header "X-Powered-By"; fastcgi_keep_conn on; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php/php-fpm74.sock; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; try_files $uri =404; } I would be grateful if you're able to have a look and see what I might be doing wrong. Any recommendations for further reading, or pointers to a 'better' way of handling PHP in this situation are very gratefully received. Thank you in advance, and best wishes to you. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288165,288165#msg-288165 From moshe at ymkatz.net Wed May 27 13:38:48 2020 From: moshe at ymkatz.net (Moshe Katz) Date: Wed, 27 May 2020 09:38:48 -0400 Subject: PHP handling where URI contains a path, index is in root In-Reply-To: References: Message-ID: Your problem is that you are adding an extra question mark. >From the docs: > $is_args > ??? if a request line has arguments, or an empty string otherwise Take the extra question mark out of your try_files line. It should look like this: try_files $uri $uri/ /index.php$is_args$args; Moshe On Wed, May 27, 2020 at 9:33 AM petecooper wrote: > Hello. > > I run a PHP + MySQL content management system on Nginx (1.19.0 at time of > writing) and an issue has arisen with the way I'm handing PHP files in some > situations. 
> > The issue appears to manifest with queries when they are prepended by a > path, where a `?` is prepended. If the queries exist in the root location, > they work as expected. > > Take the two following URIs, note the second has a `path` set, but this > does > not exist on the filesystem. The CMS might, for example, have a path set to > 'articles' or 'blog'.: > > http://subdomain.example.com/?a=b&d=g (No `path`) > http://subdomain.example.com/path/?a=b&d=g (With `path`) > > Running $_GET array gives different results: > > = No `path` = > > array ( > 'a' => 'b', > 'd' => 'g', > ) > > = With `path` = > > array ( > '?a' => 'b', > 'd' => 'g', > ) > > Note the first key in the 'With `path`' example is wrongly prepended with > `?`. > > My Nginx config appears to have been running fine for some time, but my > instinct says there's either a `location` regex that I'm missing, or > something else I've overlooked. I am, unfortunately, not smart enough to > know what I'm doing wrong. > > I have included all my `location` blocks for this `server` so as not to > trigger a conflict from another `location` block, the most relevant two are > the last and second-to-last in the list. > > location ^~ /.well-known/ { > allow all; > default_type "text/plain"; > root /var/www/sites/example.com/subdomain/_well-known/; > try_files $uri $uri/ =404; > } > location /favicon.ico { > access_log off; > log_not_found off; > } > location /robots.txt { > access_log off; > log_not_found off; > } > location ~ /\. { > deny all; > } > location ~ \.svg$ { > #redeclare `add_header` from parent, with modified `style-src` for > SVG > set $csp_svg_1f173340 ''; > set $csp_svg_1f173340 '${csp_svg_1f173340}default-src \'none\';'; > set $csp_svg_1f173340 '${csp_svg_1f173340}frame-ancestors > \'self\';'; > set $csp_svg_1f173340 '${csp_svg_1f173340}style-src \'self\' > \'unsafe-inline\';'; > add_header Content-Security-Policy $csp_svg_1f173340; > add_header Referrer-Policy strict-origin; > add_header Strict-Transport-Security "max-age=31536000; > includeSubDomains; preload"; > add_header X-Content-Type-Options nosniff; > add_header X-Frame-Options SAMEORIGIN; > add_header X-XSS-Protection "1; mode=block"; > } > location / { > index index.html index.php; > limit_except GET HEAD POST { > deny all; > } > try_files $uri $uri/ /index.php?$is_args$args; > } > location ~ ^.+\.php(?:/.*)?$ { > fastcgi_hide_header "X-Powered-By"; > fastcgi_keep_conn on; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass unix:/var/run/php/php-fpm74.sock; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > include fastcgi_params; > try_files $uri =404; > } > > I would be grateful if you're able to have a look and see what I might be > doing wrong. Any recommendations for further reading, or pointers to a > 'better' way of handling PHP in this situation are very gratefully > received. > > Thank you in advance, and best wishes to you. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,288165,288165#msg-288165 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed May 27 13:43:48 2020 From: nginx-forum at forum.nginx.org (petecooper) Date: Wed, 27 May 2020 09:43:48 -0400 Subject: PHP handling where URI contains a path, index is in root In-Reply-To: References: Message-ID: <768173f01286f99937978eabeb47825d.NginxMailingListEnglish@forum.nginx.org> Moshe Katz Wrote: ------------------------------------------------------- > Your problem is that you are adding an extra question mark. > > From the docs: > > > $is_args > > ??? if a request line has arguments, or an empty string otherwise > > > Take the extra question mark out of your try_files line. It should > look > like this: > > try_files $uri $uri/ /index.php$is_args$args; Perfect. That was it! Solved. Thank you very much for your time and assistance, I am most grateful. Best wishes to you. Pete > On Wed, May 27, 2020 at 9:33 AM petecooper > > wrote: > > > Hello. > > > > I run a PHP + MySQL content management system on Nginx (1.19.0 at > time of > > writing) and an issue has arisen with the way I'm handing PHP files > in some > > situations. > > > > The issue appears to manifest with queries when they are prepended > by a > > path, where a `?` is prepended. If the queries exist in the root > location, > > they work as expected. > > > > Take the two following URIs, note the second has a `path` set, but > this > > does > > not exist on the filesystem. The CMS might, for example, have a path > set to > > 'articles' or 'blog'.: > > > > http://subdomain.example.com/?a=b&d=g (No `path`) > > http://subdomain.example.com/path/?a=b&d=g (With `path`) > > > > Running $_GET array gives different results: > > > > = No `path` = > > > > array ( > > 'a' => 'b', > > 'd' => 'g', > > ) > > > > = With `path` = > > > > array ( > > '?a' => 'b', > > 'd' => 'g', > > ) > > > > Note the first key in the 'With `path`' example is wrongly prepended > with > > `?`. > > > > My Nginx config appears to have been running fine for some time, but > my > > instinct says there's either a `location` regex that I'm missing, or > > something else I've overlooked. I am, unfortunately, not smart > enough to > > know what I'm doing wrong. > > > > I have included all my `location` blocks for this `server` so as not > to > > trigger a conflict from another `location` block, the most relevant > two are > > the last and second-to-last in the list. > > > > location ^~ /.well-known/ { > > allow all; > > default_type "text/plain"; > > root /var/www/sites/example.com/subdomain/_well-known/; > > try_files $uri $uri/ =404; > > } > > location /favicon.ico { > > access_log off; > > log_not_found off; > > } > > location /robots.txt { > > access_log off; > > log_not_found off; > > } > > location ~ /\. 
{ > > deny all; > > } > > location ~ \.svg$ { > > #redeclare `add_header` from parent, with modified > `style-src` for > > SVG > > set $csp_svg_1f173340 ''; > > set $csp_svg_1f173340 '${csp_svg_1f173340}default-src > \'none\';'; > > set $csp_svg_1f173340 '${csp_svg_1f173340}frame-ancestors > > \'self\';'; > > set $csp_svg_1f173340 '${csp_svg_1f173340}style-src \'self\' > > \'unsafe-inline\';'; > > add_header Content-Security-Policy $csp_svg_1f173340; > > add_header Referrer-Policy strict-origin; > > add_header Strict-Transport-Security "max-age=31536000; > > includeSubDomains; preload"; > > add_header X-Content-Type-Options nosniff; > > add_header X-Frame-Options SAMEORIGIN; > > add_header X-XSS-Protection "1; mode=block"; > > } > > location / { > > index index.html index.php; > > limit_except GET HEAD POST { > > deny all; > > } > > try_files $uri $uri/ /index.php?$is_args$args; > > } > > location ~ ^.+\.php(?:/.*)?$ { > > fastcgi_hide_header "X-Powered-By"; > > fastcgi_keep_conn on; > > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > > fastcgi_pass unix:/var/run/php/php-fpm74.sock; > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > > include fastcgi_params; > > try_files $uri =404; > > } > > > > I would be grateful if you're able to have a look and see what I > might be > > doing wrong. Any recommendations for further reading, or pointers to > a > > 'better' way of handling PHP in this situation are very gratefully > > received. > > > > Thank you in advance, and best wishes to you. > > > > Posted at Nginx Forum: > > https://forum.nginx.org/read.php?2,288165,288165#msg-288165 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288165,288170#msg-288170 From community at thoughtmaybe.com Thu May 28 03:20:04 2020 From: community at thoughtmaybe.com (Jore) Date: Thu, 28 May 2020 13:20:04 +1000 Subject: Quick question on NGINX cache In-Reply-To: <4b216683-84e5-cb74-9d03-cd48c0afe48b@thoughtmaybe.com> References: <44f27579-a65a-ec80-d2f8-c6fcf1295974@thoughtmaybe.com> <4b216683-84e5-cb74-9d03-cd48c0afe48b@thoughtmaybe.com> Message-ID: <40e8fe4a-bdab-a308-5564-d7b4baf561c6@thoughtmaybe.com> Hi everyone, Just chasing up below, if anyone has any suggestions? So to recap, the situation is, I have nginx running Wordpress with the Hypercache plugin but only the homepage is cached, other pages "miss" according to page headers. Here is contents of the sites-enabled conf in question: server { ??????? listen 80; ??????? listen 443 ssl http2; ??????? server_name NAMEOFSITECHANGED.COM; ??????? ssl_certificate /etc/nginx/ssl/NAMEOFSITECHANGED.COM.crt; ??????? ssl_certificate_key /etc/nginx/ssl/NAMEOFSITECHANGED.COM.key; ??????? keepalive_timeout 70; ??????? #enables all versions of TLS, but not SSLv2 or 3 which are crap and now deprecated. ??????? ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ??????? #disable weak ciphers ??????? 
ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"; ??????? ssl_prefer_server_ciphers on; ??????? root /var/www/NAMEOFSITECHANGED.COM; ??????? access_log /var/log/nginx/NAMEOFSITECHANGED.COM_access.log; ??????? error_log /var/log/nginx/NAMEOFSITECHANGED.COM_error.log; ??????? # don't cache anything using 'post' such as a form on subscribe page ???????? if ($request_method = POST) { set $skip_cache 1; } ??????? # don't cache URLs containing the following elements ??????? if ($request_uri ~* "/wp-admin/|wp-.*.php|index.php|preview=true") { set $skip_cache 1; } ??????? # don't use the cache for logged in users ??????? if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in|nCacheBypass") { set $skip_cache 1; } ??????? #main ??????? location / { ??????????? index index.htm index.php; ??????????? try_files $uri $uri/ /index.php?$args; ??????????? error_page 404 = /404; ??????????? #handle old permalink URLs and rewrite ??????????? #rewrite ^/?video/(.*) /$1 permanent; ??????? } ??????? #pass the PHP scripts to FastCGI ??????? location ~ \.php$ { ??????????? include snippets/fastcgi-php.conf; ??????????? include fastcgi_params; ?????????? ??????????? fastcgi_pass unix:/run/php/php7.3-fpm.sock; ??????????? fastcgi_split_path_info ^(.+\.php)(/.+)$; ??????????? fastcgi_cache_bypass $skip_cache; ??????????? fastcgi_cache WORDPRESS; ??????????? fastcgi_cache_valid? 60m; ??????????? error_page 404 = /404; ??????? } ??????? #no log on static files and expires header is set to maximum age ??????? location ~* ^.+\.(css|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|js|gif|png|ico)$ { ??????????? access_log off; ??????????? log_not_found off; ??????????? expires max; ??????????? add_header Pragma public; ??????????? add_header Cache-Control "public, must-revalidate, proxy-revalidate"; ?????? } } And contents of /etc/nginx/nginx.conf: user www-data; worker_processes auto; pid /run/nginx.pid; events { ??? worker_connections 8192; ??? # multi_accept on; } http { ??? ## ??? # Basic Settings ??? ## ??????? #jore - extend times for SSL handshake etc to try keep server load low ??????? ssl_session_cache?? shared:SSL:10m; ??????? ssl_session_timeout 10m; ??? #jore - turn off server version and headers ??? server_tokens off; ??? sendfile on; ??? tcp_nopush on; ??? tcp_nodelay on; ??? keepalive_timeout 65; ??? types_hash_max_size 2048; ??? client_max_body_size 128M; ??? # server_tokens off; ??? # server_names_hash_bucket_size 64; ??? # server_name_in_redirect off; ??? include /etc/nginx/mime.types; ??? default_type application/octet-stream; ??? ## ??? # SSL Settings ??? ## ??? ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ??? ssl_prefer_server_ciphers on; ??? ## ??? # Logging Settings ??? ## ??? # Comment out to set individual site log locations in their own conf ??? #access_log /var/log/nginx/access.log; ??? #error_log /var/log/nginx/error.log; ??? #jore - change the format of nginx http logs a little to suit awstats ??? log_format main ??? '$remote_addr - $remote_user [$time_local] "$request" ' ??? 
??????????????????? '$status $body_bytes_sent "$http_referer" ' ??? ??????????????????? '"$http_user_agent" "$http_x_forwarded_for"'; ??? ## ??? # Gzip Settings ??? ## ??? gzip on; ??? gzip_disable "msie6"; ??? # gzip_vary on; ??? # gzip_proxied any; ??? # gzip_comp_level 6; ??? # gzip_buffers 16 8k; ??? # gzip_http_version 1.1; ??? # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ??? ## ??? # Virtual Host Configs ??? ## ??? #jore - set up settings for memcached ??????? fastcgi_cache_path /var/www/memcached levels=1:2 keys_zone=WORDPRESS:100m inactive=1440m; ??????? fastcgi_cache_key "$scheme$request_method$host$request_uri";? ??????? fastcgi_cache_use_stale error timeout invalid_header http_500; ??????? fastcgi_ignore_headers Cache-Control Expires Set-Cookie; ??????? add_header X-Cached $upstream_cache_status; ??? include /etc/nginx/conf.d/*.conf; ??? include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d is empty. Any ideas as to what I've messed up? Thanks! Jore On 25/5/20 5:07 am, Jore wrote: > > Hi there, > > Thanks for that. > > Could you provide an example conf by any chance please, so I can get > my head around that? > > Thanks! > Jore > > > On 24/5/20 8:56 am, Alex Evonosky wrote: >> Jore- >> >> I applied the proxy_hide_header for no-cache headers to NGINX can >> process it and cache it. >> >> On Sat, May 23, 2020 at 5:17 PM Jore > > wrote: >> >> Hi Alex/all, >> >> How did you fix? >> >> I've got a very similar issue. >> >> nginx running Wordpress with the Hypercache plugin but only the >> homepage is cached, other pages "miss" according to page headers. >> >> Thanks, >> Jore >> >> >> On 24/5/20 7:04 am, Alex Evonosky wrote: >>> Disregard, found the issue. >>> >>> thank you. >>> >>> On Sat, May 23, 2020 at 4:18 PM Alex Evonosky >>> > wrote: >>> >>> "Can you be more specific? Which "cache"? Browser cache? >>> Nginx content >>> cache? try_files has nothing to do with caching..." >>> >>> >>> Nginx content cache >>> >>> >>> "Either way, you need to check your headers to ensure that >>> they allow >>> caching for said pages. Also if any cookies are being sent >>> then nginx >>> won't cache the page." >>> >>> >>> I looked at the headers using CURL..?? >>> >>> The issue seems this: >>> >>> >>> The request hits NGINX and the backend server(s) for >>> Wordpress are cached just fine from just the FQDN --- >>> example.com >>> >>> however, if I try to go to say, example.com/?page_id=1234 >>> , the headers do not show >>> NGINX anymore, as only the servers for Wordpress show up; >>> Almost like a? >>> cache punch-hole. >>> >>> >>> ===== proxy.conf ==== >>> >>> proxy_cache_path /tmp/cache keys_zone=my_cache:10m >>> max_size=10m inactive=60m; >>> >>> #proxy_redirect off; >>> proxy_set_header Host $host; >>> proxy_set_header X-Real-IP $remote_addr; >>> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >>> add_header X-Cache-Status $upstream_cache_status; >>> client_max_body_size 10m; >>> client_body_buffer_size 128k; >>> proxy_connect_timeout 90; >>> proxy_send_timeout 90; >>> proxy_read_timeout 90; >>> proxy_buffers 32 4k; >>> >>> >>> >>> ==== nginx.conf ==== >>> >>> http { >>> ? ? ? ? upstream example.com { >>> ? ? ? ? least_conn; >>> ? ? ? ? server 10.10.10.138:8999 ; >>> ? ? ? ? server 10.10.10.84:8999 ; >>> ? ? ? ? 
} >>> >>> server { >>> listen 82; >>> location / { >>> try_files $uri $uri/ /$args /index.php?$args; >>> proxy_cache my_cache; >>> proxy_cache_use_stale error timeout http_500 http_502 >>> http_503 http_504; >>> proxy_cache_background_update on; >>> proxy_pass http://example.com; >>> proxy_cache_valid any 60m; >>> proxy_cache_methods GET HEAD POST; >>> proxy_http_version 1.1; >>> proxy_set_header Connection keep-alive; >>> proxy_ignore_headers Cache-Control Expires Set-Cookie; >>> ? ? ? } >>> } >>> >>> sendfile on; >>> tcp_nopush on; >>> tcp_nodelay on; >>> keepalive_timeout 65; >>> types_hash_max_size 2048; >>> >>> gzip on; >>> gzip_disable "msie6"; >>> >>> # include /etc/nginx/conf.d/*.conf; >>> # include /etc/nginx/sites-enabled/*; >>> include /etc/nginx/proxy.conf; >>> >>> } >>> >>> >>> >>> >>> >>> Thank you, >>> Alex >>> >>> >>> >>> >>> On Sat, May 23, 2020 at 8:43 AM J.R. >> > wrote: >>> >>> > And the main page caches OK, but any page the resides >>> on the "?page_id" is >>> > not getting cached.? Is there more to the "try_files" >>> that needs applied >>> > for caching of these permalinks? >>> >>> Can you be more specific? Which "cache"? Browser cache? >>> Nginx content >>> cache? try_files has nothing to do with caching... >>> >>> Either way, you need to check your headers to ensure >>> that they allow >>> caching for said pages. Also if any cookies are being >>> sent then nginx >>> won't cache the page. >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu May 28 07:20:34 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 28 May 2020 12:50:34 +0530 Subject: =?UTF-8?Q?Access_to_XMLHttpRequest_has_been_blocked_by_CORS_policy=3A_No_?= =?UTF-8?Q?=E2=80=98Access-Control-Allow-Origin=E2=80=99_header_is_present?= =?UTF-8?Q?_on_the_requested_resource=2E?= Message-ID: Hi, I am running Nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core) and have hosted react.js javascript 16.13.1 and Drupal CMS Framework 8.7.8. https://tmobilereactdrupal.mydomain.com (react.js javascript 16.13.1) which in turn talks to https://tmobilereactdrupal.mydomain.com:8080 (drupal framework version 8.7.8). Both react js on port 443 (frontend) and drupal cms on port 8080 (backend) are running on the same Nginx webserver. When I hit https://tmobilereactdrupal.mydomain.com (react.js framework frontend on the port 443) connects to https://tmobilereactdrupal.mydomain.com:8080 (Drupal CMS 8.7.8 backend on the port 8080) I am encountering the below error in Developer tools console which is a plugin in the browser. Access to XMLHttpRequest at ? > https://tmobilereactdrupal.mydomain.com:8080/oauth/token? from origin ? > https://tmobilereactdrupal.mydomain.com? 
has been blocked by CORS policy: > No ?Access-Control-Allow-Origin? header is present on the requested > resource. I have attached both the Nginx config file for the react.js framework (frontend) and Drupal CMS 8.7.8 backend. Any help will be highly appreciated. I have added add_header 'Access-Control-Allow-Origin' '*' always; in nginx.conf and is not honoring the settings. I look forward to hearing from you. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reactjsnginx.conf Type: application/octet-stream Size: 2720 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: drupalcmsnginx.conf Type: application/octet-stream Size: 2017 bytes Desc: not available URL: From praveenssit at gmail.com Thu May 28 12:21:53 2020 From: praveenssit at gmail.com (Praveen Kumar K S) Date: Thu, 28 May 2020 17:51:53 +0530 Subject: Switch between upstream server Message-ID: Hello, Is it possible to have multiple upstream servers defined in nginx configuration. But still the request should be sent to only one upstream server. Lets say, I have two upstream servers A and B. Nginx should proxy request to A by default. But when A is not available, it should send request to B. When A come back, it should send requests to A. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu May 28 12:26:07 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 28 May 2020 13:26:07 +0100 Subject: Quick question on NGINX cache In-Reply-To: <40e8fe4a-bdab-a308-5564-d7b4baf561c6@thoughtmaybe.com> References: <44f27579-a65a-ec80-d2f8-c6fcf1295974@thoughtmaybe.com> <4b216683-84e5-cb74-9d03-cd48c0afe48b@thoughtmaybe.com> <40e8fe4a-bdab-a308-5564-d7b4baf561c6@thoughtmaybe.com> Message-ID: <20200528122607.GQ20939@daoine.org> On Thu, May 28, 2020 at 01:20:04PM +1000, Jore wrote: Hi there, > Just chasing up below, if anyone has any suggestions? What request do you make? What response do you get? What response do you want instead? > So to recap, the situation is, I have nginx running Wordpress with the > Hypercache plugin but only the homepage is cached, other pages "miss" > according to page headers. Can you show the request headers going from your browser to nginx? It looks like you set $skip_cache if certain Cookie values are included in the request. Or if the request includes "index.php". If that does not show the problem -- can you show the response headers coming from upstream to nginx? That might indicate if the pages are not being cached by nginx. Cheers, f -- Francis Daly francis at daoine.org From martin.grigorov at gmail.com Thu May 28 12:38:06 2020 From: martin.grigorov at gmail.com (Martin Grigorov) Date: Thu, 28 May 2020 15:38:06 +0300 Subject: Switch between upstream server In-Reply-To: References: Message-ID: Hi, On Thu, May 28, 2020 at 3:22 PM Praveen Kumar K S wrote: > Hello, > > Is it possible to have multiple upstream servers defined in nginx > configuration. But still the request should be sent to only one upstream > server. > Lets say, I have two upstream servers A and B. Nginx should proxy request > to A by default. > But when A is not available, it should send request to B. > When A come back, it should send requests to A. 
> I think you can achieve this by using a very high value of 'weight' - https://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu May 28 12:41:39 2020 From: francis at daoine.org (Francis Daly) Date: Thu, 28 May 2020 13:41:39 +0100 Subject: =?UTF-8?Q?Re=3A_Access_to_XMLHttpRequest_has_been_blocked_by_CORS_policy?= =?UTF-8?Q?=3A_No_=E2=80=98Access-Control-Allow-Origin=E2=80=99_header_is_?= =?UTF-8?Q?present_on_the_requested_resource=2E?= In-Reply-To: References: Message-ID: <20200528124139.GR20939@daoine.org> On Thu, May 28, 2020 at 12:50:34PM +0530, Kaushal Shriyan wrote: Hi there, > Access to XMLHttpRequest at ? > > https://tmobilereactdrupal.mydomain.com:8080/oauth/token? from origin ? > > https://tmobilereactdrupal.mydomain.com? has been blocked by CORS policy: > > No ?Access-Control-Allow-Origin? header is present on the requested > > resource. In your "drupal" nginx config, if the request is handled in the "php" location, there is no Access-Control-Allow-Origin header added. You might want the "add_header" line there instead. Good luck with it, f -- Francis Daly francis at daoine.org From samuelheno at icloud.com Thu May 28 12:43:39 2020 From: samuelheno at icloud.com (Sam Henaghan) Date: Thu, 28 May 2020 13:43:39 +0100 Subject: =?UTF-8?Q?Re=3A_Access_to_XMLHttpRequest_has_been_blocked_by_CORS_policy?= =?UTF-8?Q?=3A_No_=E2=80=98Access-Control-Allow-Origin=E2=80=99_header_is_?= =?UTF-8?Q?present_on_the_requested_resource=2E?= In-Reply-To: <20200528124139.GR20939@daoine.org> References: <20200528124139.GR20939@daoine.org> Message-ID: <12AC6868-0884-4DFD-8A7B-050DC27B0BD5@icloud.com> Hi this wasn?t me whatever I have been added or entered too please delete any information u have about me and any account in my name someone has hacked or made something of me Sent from my iPhone > On 28 May 2020, at 1:41 pm, Francis Daly wrote: > > ?On Thu, May 28, 2020 at 12:50:34PM +0530, Kaushal Shriyan wrote: > > Hi there, > >> Access to XMLHttpRequest at ? >>> https://tmobilereactdrupal.mydomain.com:8080/oauth/token? from origin ? >>> https://tmobilereactdrupal.mydomain.com? has been blocked by CORS policy: >>> No ?Access-Control-Allow-Origin? header is present on the requested >>> resource. > > In your "drupal" nginx config, if the request is handled in the "php" > location, there is no Access-Control-Allow-Origin header added. > > You might want the "add_header" line there instead. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From samuelheno at icloud.com Thu May 28 12:44:21 2020 From: samuelheno at icloud.com (Sam Henaghan) Date: Thu, 28 May 2020 13:44:21 +0100 Subject: Switch between upstream server In-Reply-To: References: Message-ID: <89EB3DF7-35B4-4DD6-B44C-6AB7E3E9C326@icloud.com> Please stop emailing me and delete anything or any information you have about me or even accounts please this wasn?t me whatever has been made so please delete Sent from my iPhone > On 28 May 2020, at 1:22 pm, Praveen Kumar K S wrote: > > ? > Hello, > > Is it possible to have multiple upstream servers defined in nginx configuration. 
But still the request should be sent to only one upstream server. > Lets say, I have two upstream servers A and B. Nginx should proxy request to A by default. > But when A is not available, it should send request to B. > When A come back, it should send requests to A. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Thu May 28 12:45:04 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 28 May 2020 15:45:04 +0300 Subject: Switch between upstream server In-Reply-To: References: Message-ID: <000f01d634ed$cf4a12c0$6dde3840$@roze.lv> > But when A is not available, it should send request to B. > When A come back, it should send requests to A. You can add 'backup' for the B server and it will be used only when all others (A) are down: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server rr From nginx-forum at forum.nginx.org Thu May 28 13:06:06 2020 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 28 May 2020 09:06:06 -0400 Subject: Switch between upstream server In-Reply-To: <89EB3DF7-35B4-4DD6-B44C-6AB7E3E9C326@icloud.com> References: <89EB3DF7-35B4-4DD6-B44C-6AB7E3E9C326@icloud.com> Message-ID: <43549c793b5a529514de8db7bd2aa30f.NginxMailingListEnglish@forum.nginx.org> Sam Henaghan Wrote: ------------------------------------------------------- > Please stop emailing me and delete anything or any information you > have about me or even accounts please this wasn?t me whatever has been > made so please delete You have joined a public mailing list with the email address you are receiving these posts on, weather you or someone else done this you need to un-subscribe. See: http://mailman.nginx.org/mailman/listinfo Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288186,288193#msg-288193 From samuelheno at icloud.com Thu May 28 13:52:21 2020 From: samuelheno at icloud.com (Sam Henaghan) Date: Thu, 28 May 2020 14:52:21 +0100 Subject: Switch between upstream server In-Reply-To: <43549c793b5a529514de8db7bd2aa30f.NginxMailingListEnglish@forum.nginx.org> References: <43549c793b5a529514de8db7bd2aa30f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I can?t do so as when I go on it it says that 02 my phone company have blocked it and it?s not secure so please help me out if possible as I haven?t done it, I can?t possible do it I?ve clicked every link u av sent me aswell Sent from my iPhone > On 28 May 2020, at 2:06 pm, itpp2012 wrote: > > ?Sam Henaghan Wrote: > ------------------------------------------------------- >> Please stop emailing me and delete anything or any information you >> have about me or even accounts please this wasn?t me whatever has been >> made so please delete > > You have joined a public mailing list with the email address you are > receiving these posts on, weather you or someone else done this you need to > un-subscribe. 
> > See: http://mailman.nginx.org/mailman/listinfo > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288186,288193#msg-288193 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lagged at gmail.com Thu May 28 14:01:33 2020 From: lagged at gmail.com (Andrei) Date: Thu, 28 May 2020 17:01:33 +0300 Subject: Switch between upstream server In-Reply-To: References: <43549c793b5a529514de8db7bd2aa30f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Use the upstream backup option, not two active with weight differences On Thu, May 28, 2020 at 4:52 PM Sam Henaghan wrote: > I can?t do so as when I go on it it says that 02 my phone company have > blocked it and it?s not secure so please help me out if possible as I > haven?t done it, I can?t possible do it I?ve clicked every link u av sent > me aswell > > Sent from my iPhone > > > On 28 May 2020, at 2:06 pm, itpp2012 > wrote: > > > > ?Sam Henaghan Wrote: > > ------------------------------------------------------- > >> Please stop emailing me and delete anything or any information you > >> have about me or even accounts please this wasn?t me whatever has been > >> made so please delete > > > > You have joined a public mailing list with the email address you are > > receiving these posts on, weather you or someone else done this you need > to > > un-subscribe. > > > > See: http://mailman.nginx.org/mailman/listinfo > > > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,288186,288193#msg-288193 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuelheno at icloud.com Thu May 28 14:09:19 2020 From: samuelheno at icloud.com (Sam Henaghan) Date: Thu, 28 May 2020 15:09:19 +0100 Subject: Switch between upstream server In-Reply-To: References: Message-ID: I have tried all of them it took me to that page earlier I have been receiving emails for the last 2 days and there?s more than 10 of all different things that haven?t been me who?s set it up , the links you?ve sent me I have tried to log in with my usual email and and password I use and it?s not correct tried to click remind then it doesn?t work Sent from my iPhone > On 28 May 2020, at 3:02 pm, Andrei wrote: > > ? > Use the upstream backup option, not two active with weight differences > >> On Thu, May 28, 2020 at 4:52 PM Sam Henaghan wrote: >> I can?t do so as when I go on it it says that 02 my phone company have blocked it and it?s not secure so please help me out if possible as I haven?t done it, I can?t possible do it I?ve clicked every link u av sent me aswell >> >> Sent from my iPhone >> >> > On 28 May 2020, at 2:06 pm, itpp2012 wrote: >> > >> > ?Sam Henaghan Wrote: >> > ------------------------------------------------------- >> >> Please stop emailing me and delete anything or any information you >> >> have about me or even accounts please this wasn?t me whatever has been >> >> made so please delete >> > >> > You have joined a public mailing list with the email address you are >> > receiving these posts on, weather you or someone else done this you need to >> > un-subscribe. 
>> > >> > See: http://mailman.nginx.org/mailman/listinfo >> > >> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,288186,288193#msg-288193 >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kaushalshriyan at gmail.com Thu May 28 14:46:43 2020 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Thu, 28 May 2020 20:16:43 +0530 Subject: =?UTF-8?Q?Re=3A_Access_to_XMLHttpRequest_has_been_blocked_by_CORS_policy?= =?UTF-8?Q?=3A_No_=E2=80=98Access-Control-Allow-Origin=E2=80=99_header_is_?= =?UTF-8?Q?present_on_the_requested_resource=2E?= In-Reply-To: <20200528124139.GR20939@daoine.org> References: <20200528124139.GR20939@daoine.org> Message-ID: On Thu, May 28, 2020 at 6:11 PM Francis Daly wrote: > On Thu, May 28, 2020 at 12:50:34PM +0530, Kaushal Shriyan wrote: > > Hi there, > > > Access to XMLHttpRequest at ? > > > https://tmobilereactdrupal.mydomain.com:8080/oauth/token? from origin > ? > > > https://tmobilereactdrupal.mydomain.com? has been blocked by CORS > policy: > > > No ?Access-Control-Allow-Origin? header is present on the requested > > > resource. > > In your "drupal" nginx config, if the request is handled in the "php" > location, there is no Access-Control-Allow-Origin header added. > > You might want the "add_header" line there instead. > > Good luck with it, > > f > > Hi Francis I have added *add_header 'Access-Control-Allow-Origin' 'origin-list';* in the drupal Nginx config (/etc/nginx/conf.d/drupalbackend.conf) #cat /etc/nginx/conf.d/drupalbackend.conf > server { > listen 8080 default_server ssl; > #listen 80 default_server; > #listen [::]:80 default_server; > server_name _; > root /var/www/html/devportal-v2/developer_portal/web; > index index.php index.html index.htm; > ssl_certificate /etc/ssl/fullchain1.pem; ssl_certificate_key > /etc/ssl/privkey1.pem; > if ($scheme = http) { return 301 https://$server_name$request_uri; } > ssl_ciphers > ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384; > ssl_prefer_server_ciphers on; > ssl_dhparam /etc/ssl/dhparam.pem; > # HSTS (ngx_http_headers_module is required) (63072000 seconds) > add_header Strict-Transport-Security "max-age=63072000" always; > #OCSP stapling > ssl_stapling on; > ssl_stapling_verify on; > client_max_body_size 100M; > # Load configuration files for the default server block. 
> include /etc/nginx/default.d/*.conf; > > location / { > index index.php; > *add_header 'Access-Control-Allow-Origin' 'origin-list';* > # This is cool because no php is touched for static content > try_files $uri $uri/ @rewrite; > expires max; > } > location @rewrite { > * add_header 'Access-Control-Allow-Origin' 'origin-list';* > # Some modules enforce no slash (/) at the end of the URL > # Else this rewrite block wouldn't be needed (GlobalRedirect) > rewrite ^/(.*)$ /index.php?q=$1; > } > > ssl_certificate /etc/ssl/fullchain1.pem; ssl_certificate_key > /etc/ssl/privkey1.pem; > location ~ \.php$ { > #try_files $uri =404; > *add_header 'Access-Control-Allow-Origin' 'origin-list';* > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_pass unix:/run/php-fpm/www.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > } > error_page 404 /404.html; > location = /40x.html { > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > } > } [root at nginx]# nginx -t -c /etc/nginx/nginx.conf > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > [root at nginx]# I am still encountering the same issue. Access to XMLHttpRequest at ' > https://tmobilereactdrupal.mydomain.com:8080/oauth/token' from origin ' > https://tmobilereactdrupal.mydomain.com' has been blocked by CORS policy: > No 'Access-Control-Allow-Origin' header is present on the requested > resource. > POST https://tmobilereactdrupal.mydomain.com:8080/oauth/token > net::ERR_FAILED Please let me know if you need any additional information. I look forward to hearing from you. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu May 28 14:53:25 2020 From: r at roze.lv (Reinis Rozitis) Date: Thu, 28 May 2020 17:53:25 +0300 Subject: Switch between upstream server In-Reply-To: References: Message-ID: <002601d634ff$bd9e6eb0$38db4c10$@roze.lv> > the links you?ve sent me I have tried to log in with my usual email and and password I use and it?s not correct tried to click remind then it doesn?t work You can just send email to nginx-request at nginx.org with subject 'unsubscribe' (without quotes). It should remove you from list (it probably will send a confirmation mail you should just make a reply without changing anything). rr From vbart at nginx.com Thu May 28 22:15:51 2020 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 29 May 2020 01:15:51 +0300 Subject: Unit 1.18.0 release Message-ID: <9701759.nUPlyArG6x@vbart-laptop> Hi, I'm glad to announce a new release of NGINX Unit. This release includes a few internal routing improvements that simplify some configurations and a new isolation option for chrooting application processes called "rootfs". Changes with Unit 1.18.0 28 May 2020 *) Feature: the "rootfs" isolation option for changing root filesystem for an application. *) Feature: multiple "targets" in PHP applications. *) Feature: support for percent encoding in the "uri" and "arguments" matching options and in the "pass" option. Also, our official packages for the recently released Ubuntu 20.04 (Focal Fossa) are available now: - https://unit.nginx.org/installation/#ubuntu At least two of the features in this release deserve special attention. Changing The Root Filesystem ---------------------------- Security is our top priority, so let's look closer at the "rootfs" option first. 
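Before diving into the details, here is roughly what the new option looks like in an application's "isolation" object, next to the existing namespace switches (a minimal sketch: the path is just a placeholder, and the member names should be double-checked against the documentation linked below):

    "isolation": {
        "namespaces": {
            "mount": true
        },
        "rootfs": "/path/to/app/"
    }

The directory named in "rootfs" becomes the root filesystem that the application processes see.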
The coolest thing about it is that it's not just a simple chroot() system call as some may expect. It's not a secret that chroot() is not intended for security purposes, and there's plenty of ways for an attacker to get out of the chrooted directory (just check "man 2 chroot"). That's why on modern systems Unit can use pivot_root() with the "mount" namespace isolation enabled, which is way more secure and pretty similar to putting your application in an individual container. Also, our goal is to make any security option as easy to use as possible. In this case, Unit automatically tries to mount all the necessary language-specific dependencies inside a new root, so you won't need to care about them. Currently, this capability works for selected languages only, but the support will be extended in the next releases. For more information and examples of "rootfs" usage, check the documentation: - https://unit.nginx.org/configuration/#process-isolation Now to the second feature... Multiple PHP application "targets" ---------------------------------- The other major update in this release is called "targets", aiming to simplify configuration for many PHP applications. Perhaps, it is best illustrated by an example: WordPress. This is one of many applications that use two different addressing schemes: 1. Most user requests are handled by index.php regardless of the actual request URI. 2. Administration interface and some components rely on direct requests to specific .php scripts named in the URI. Earlier, users had to configure two Unit applications to handle this disparity: { "wp_index": { "type": "php", "user": "wp_user", "group": "wp_user", "root": "/path/to/wordpress/", "script": "index.php" }, "wp_direct": { "type": "php", "user": "wp_user", "group": "wp_user", "root": "/path/to/wordpress/" } } The first app directly executes the .php scripts named by the URI, whereas the second one passes all requests to index.php. Now, you can use "targets" instead: { "wp": { "type": "php", "user": "wp_user", "group": "wp_user", "targets": { "index": { "root": "/path/to/wordpress/", "script": "index.php" }, "direct": { "root": "/path/to/wordpress/" } } } } The complete example is available in our WordPress howto: - https://unit.nginx.org/howto/wordpress/ You can configure as many "targets" in one PHP application as you want, routing requests between them using various sophisticated request matching rules. Check our website to know more about the new option: - https://unit.nginx.org/configuration/#targets To learn more about request matching rules: - https://unit.nginx.org/configuration/#condition-matching Finally, see here for more howtos: - https://unit.nginx.org/howto/ We have plenty of them, covering many popular web applications and frameworks, but if your favorite one is still missing, let us know by opening a ticket here: - https://github.com/nginx/unit-docs/issues To keep the finger on the pulse, refer to our further plans in the roadmap here: - https://github.com/orgs/nginx/projects/1 Stay tuned! wbr, Valentin V. 
Bartenev From francis at daoine.org Thu May 28 23:23:51 2020 From: francis at daoine.org (Francis Daly) Date: Fri, 29 May 2020 00:23:51 +0100 Subject: =?UTF-8?Q?Re=3A_Access_to_XMLHttpRequest_has_been_blocked_by_CORS_policy?= =?UTF-8?Q?=3A_No_=E2=80=98Access-Control-Allow-Origin=E2=80=99_header_is_?= =?UTF-8?Q?present_on_the_requested_resource=2E?= In-Reply-To: References: <20200528124139.GR20939@daoine.org> Message-ID: <20200528232351.GS20939@daoine.org> On Thu, May 28, 2020 at 08:16:43PM +0530, Kaushal Shriyan wrote: > On Thu, May 28, 2020 at 6:11 PM Francis Daly wrote: > > On Thu, May 28, 2020 at 12:50:34PM +0530, Kaushal Shriyan wrote: Hi there, > > In your "drupal" nginx config, if the request is handled in the "php" > > location, there is no Access-Control-Allow-Origin header added. > > > > You might want the "add_header" line there instead. > I have added *add_header 'Access-Control-Allow-Origin' 'origin-list';* in > the drupal Nginx config (/etc/nginx/conf.d/drupalbackend.conf) > [root at nginx]# nginx -t -c /etc/nginx/nginx.conf > > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > > nginx: configuration file /etc/nginx/nginx.conf test is successful Presumably that file does "include" the file you modified? > I am still encountering the same issue. > > Access to XMLHttpRequest at ' > > https://tmobilereactdrupal.mydomain.com:8080/oauth/token' from origin ' > > https://tmobilereactdrupal.mydomain.com' has been blocked by CORS policy: > > No 'Access-Control-Allow-Origin' header is present on the requested > > resource. > > POST https://tmobilereactdrupal.mydomain.com:8080/oauth/token > > net::ERR_FAILED What headers are present in the response to that POST request? Does the drupal-nginx log file show that the request was received by nginx? Cheers, f -- Francis Daly francis at daoine.org From emilio.fernandes70 at gmail.com Fri May 29 07:23:31 2020 From: emilio.fernandes70 at gmail.com (Emilio Fernandes) Date: Fri, 29 May 2020 10:23:31 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hi Konstantin, El mar., 21 abr. 2020 a las 20:23, Konstantin Pavlov () escribi?: > Hi Emilio, > > 15.04.2020 14:21, Emilio Fernandes wrote: > > Our policy is to provide packages for officially > upstream-supported > > distributions. > > > > > https://wiki.centos.org/FAQ/General#What_architectures_are_supported.3F > > states that they only support x86_64, and aarch64 is unofficial. > > > > > > Here is something you may find interesting. > > https://github.com/varnishcache/varnish-cache/pull/3263 - a PR I've > > created for Varnish Cache > > project. > > It is based on Docker + QEMU and builds packages for different > > versions of Debian/Ubuntu/Centos/Alpine for both x64 and aarch64. > > > > > > Nice work, Martin! > > > > @Konstantin: any idea when the new aarch64 packages will be available ? > > May we help you somehow ? > > I've just published RHEL8/CentOS8 aarch64 packages for nginx stable on > http://nginx.org/packages/rhel/8/aarch64/. The mainline will follow the > suit soon, as well as proper documentation on > http://nginx.org/en/linux_packages.html. > > > With Alpine, it is proving to be more difficult than we thought, as > there are problems runing those on AWS EC2 which we use on our build > farm: https://github.com/mcrute/alpine-ec2-ami/issues/28 . 
> I guess you follow the GitHub issue but just in case: Mike Crute just announced a beta AMI for Alpine: https://github.com/mcrute/alpine-ec2-ami/issues/28#issuecomment-635618625 If there are no major issues he will release an official one next week. Gracias, Emilio > > -- > Konstantin Pavlov > https://www.nginx.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thresh at nginx.com Fri May 29 09:24:13 2020 From: thresh at nginx.com (Konstantin Pavlov) Date: Fri, 29 May 2020 12:24:13 +0300 Subject: aarch64 packages for other Linux flavors In-Reply-To: References: <4e388ac4-8291-9e19-0774-351af78a4445@nginx.com> Message-ID: Hello Emilio, 29.05.2020 10:23, Emilio Fernandes wrote: > Hi Konstantin, > > I guess you follow the GitHub issue but just in case: Mike Crute just > announced a beta AMI for > Alpine:?https://github.com/mcrute/alpine-ec2-ami/issues/28#issuecomment-635618625 > If there are no major issues he will release an official one next week. Indeed, we do follow this issue - rest assured we're going to use the release when it happens. That being said, it seems the needed kernel changes for the AMI to boot will only be there for 3.12, which means we're going to be limited to that Alpine version for ARM builds if not backported to previous releases. Thanks! -- Konstantin Pavlov https://www.nginx.com/ From francis at daoine.org Fri May 29 23:42:15 2020 From: francis at daoine.org (Francis Daly) Date: Sat, 30 May 2020 00:42:15 +0100 Subject: [error] 18#18: *72 upstream timed out (110: Connection timed out) In-Reply-To: <615cef02a989967ca749afa65213f5d4.NginxMailingListEnglish@forum.nginx.org> References: <615cef02a989967ca749afa65213f5d4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20200529234215.GT20939@daoine.org> On Sun, May 24, 2020 at 12:20:27PM -0400, rodrigobuch wrote: Hi there, > I have received this error whenever I try to generate large reports with my > App in PHP. What request do you make? (As in: which part of your config is used for this request?) When do you receive the error? (Is it exactly 60 seconds after making the request, for example?) And: how long does it take for the App to generate the report? (As in: how long would you like nginx to wait before deciding that the upstream is probably broken?) > location.conf > server { > location / { > proxy_pass http://prd_exemple_teste; > proxy_read_timeout 60s; Maybe make that bigger than 60s. Good luck with it, f -- Francis Daly francis at daoine.org From pgnet.dev at gmail.com Sat May 30 02:09:45 2020 From: pgnet.dev at gmail.com (PGNet Dev) Date: Fri, 29 May 2020 19:09:45 -0700 Subject: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ? Message-ID: ?I'm running nginx -V nginx version: nginx/1.19.0 (pgnd Build) built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled ... It serves as front-end SSL termination, site host, and reverse-proxy to backend apps. I'm trying to get a backend app to proxy_ssl_verify the proxy connection to it. 
I have two self-signed certs: One for "TLS Web Client Authentication, E-mail Protection" openssl x509 -in test.example.com.client.crt -text | egrep "Subject.*CN|DNS|TLS" Subject: C = US, ST = NY, L = New_York, O = example2.com, OU = myCA, CN = test.example.com, emailAddress = ssl at example2.com TLS Web Client Authentication, E-mail Protection DNS:test.example.com, DNS:www.test.example.com, DNS:localhost and the other, for "TLS Web Server Authentication" openssl x509 -in test.example.com.server.crt -text | egrep "Subject.*CN|DNS|TLS" Subject: C = US, ST = NY, L = New_York, O = example2.com, OU = myCA, CN = test.example.com, emailAddress = ssl at example2.com TLS Web Server Authentication DNS:test.example.com, DNS:www.test.example.com, DNS:localhost The certs 'match' CN & SAN, differing in "X509v3 Extended Key Usage". Both are verified "OK" with my local CA cert openssl verify -CAfile myCA.crt.pem test.example.com.server.crt test.example.com.server.crt: OK openssl verify -CAfile /myCA.crt.pem test.example.com.client.crt test.example.com.client.crt: OK My main nginx config includes, upstream test.example.com { server test.example.com:11111; } server { listen 10.10.10.1:443 ssl http2; server_name example.com; ... ssl_verify_client on; ssl_client_certificate "/etc/ssl/nginx/myCA.crt"; ssl_verify_depth 2; ssl_certificate "/etc/ssl/nginx/example.com.server.crt"; ssl_certificate_key "/etc/ssl/nginx/example.com.server.key"; ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt"; location /app1 { proxy_pass https://test.example.com; proxy_ssl_certificate "/etc/ssl/nginx/test.example.com.client.crt"; proxy_ssl_certificate_key "/etc/ssl/nginx/test.example.com.client.key"; proxy_ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt"; proxy_ssl_verify on; proxy_ssl_verify_depth 2; include includes/reverse-proxy.inc; } } and the upstream config, server { listen 127.0.0.1:11111 ssl http2; server_name test.example.com; root /data/webapps/demo_app/; index index.php; expires -1; ssl_certificate "/etc/ssl/nginx/test.example.com.server.crt"; ssl_certificate_key "/etc/ssl/nginx/test.example.com.server.key"; ssl_client_certificate "/etc/ssl/nginx/myCA.crt"; ssl_verify_client optional; ssl_verify_depth 2; location ~ \.php { try_files $uri =404; fastcgi_pass phpfpm; fastcgi_index index.php; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } } access to https://example.com/app1 responds, 502 Bad Gateway logs, show an SSL handshake fail ... 2020/05/29 19:00:06 [debug] 29419#29419: *7 SSL: TLSv1.3, cipher: "TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD" 2020/05/29 19:00:06 [debug] 29419#29419: *7 http upstream ssl handshake: "/app1/?" 2020/05/29 19:00:06 [debug] 29419#29419: *7 X509_check_host(): no match 2020/05/29 19:00:06 [error] 29419#29419: *7 upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream, client: 10.10.10.73, server: example.com, request: "GET /app1/ HTTP/2.0", upstream: "https://127.0.0.1:11111/app1/", host: "example.com" 2020/05/29 19:00:06 [debug] 29419#29419: *7 http next upstream, 2 ... If I toggle - ssl_verify_client on; + ssl_verify_client off; then I'm able to connect to the backend site, as expected. What exactly is NOT matching in the handshake? CN & SAN do ... &/or, is there a config problem above?
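One quick check that usually narrows this kind of mismatch down: ask the 127.0.0.1:11111 listener directly which certificate it presents for that SNI name, with an egrep similar to the ones above. This is a debugging sketch, not from the original config; it assumes the command is run on the nginx host itself and that openssl is in the PATH.

    # run on the nginx host; prints the subject and SAN of whatever
    # certificate the upstream listener actually serves for this name
    echo | openssl s_client -connect 127.0.0.1:11111 -servername test.example.com 2>/dev/null \
        | openssl x509 -noout -text | egrep "Subject:|DNS"

If the subject and SAN printed there are not the ones from test.example.com.server.crt, the handshake is being answered by a different server block or certificate than expected (for example a default_server on the same address), which would account for X509_check_host() finding no match even though the file on disk looks correct. If they do match, the next place to look is the name nginx verifies against: it defaults to the host from proxy_pass (here the upstream name test.example.com) and can be pinned explicitly with proxy_ssl_name.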