From nginx-forum at forum.nginx.org Mon Oct 1 08:48:59 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Mon, 01 Oct 2018 04:48:59 -0400 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: <3f55ca9764b07d02fa15ec2ecc629cf1.NginxMailingListEnglish@forum.nginx.org> References: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> <3f55ca9764b07d02fa15ec2ecc629cf1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8a1bea5bdddcbebadf4fcbb0edf236a0.NginxMailingListEnglish@forum.nginx.org> In both cases, whether geoip2 or ip2location, we will have to compile Nginx to support it. Currently we are using the two RPMs below from the Nginx repository (http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/) nginx-1.10.2-1.el7.ngx.x86_64 nginx-module-geoip-1.10.2-1.el7.ngx.x86_64 Is an RPM of the module available, or is there any plan to make one available? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281341,281455#msg-281455
From nginx-forum at forum.nginx.org Mon Oct 1 09:41:53 2018 From: nginx-forum at forum.nginx.org (alisampras) Date: Mon, 01 Oct 2018 05:41:53 -0400 Subject: Web and Mail Proxy Server Configuration Message-ID: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> Hi All, My objective is to host a web server and to act as a mail proxy to my internal Exchange 2010 via RPC over HTTPS. I have compiled open-source NGINX with --with-mail and SSL. In my nginx.conf file I see only the "http" directive, without any mail parameters. Can anyone help me get started with a workable configuration that achieves my objective? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281456,281456#msg-281456
From bcdonadio at bcdonadio.com Mon Oct 1 12:34:12 2018 From: bcdonadio at bcdonadio.com (Bernardo Donadio) Date: Mon, 1 Oct 2018 09:34:12 -0300 Subject: OCSP stapling broken with 1.15.4 Message-ID: <2487211b-9088-7deb-60f6-18d81e005145@bcdonadio.com> Hi.
I've noticed that OCSP stapling was broken by 1.15.4, as you may see below: ---------- nginx 1.15.4 with OpenSSL 1.1.1 final -------- $ openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status CONNECTED(00000003) TLS server extension "renegotiation info" (id=65281), len=1 0000 - 00 . TLS server extension "EC point formats" (id=11), len=4 0000 - 03 00 01 02 .... TLS server extension "session ticket" (id=35), len=0 TLS server extension "extended master secret" (id=23), len=0 depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 verify return:1 depth=0 CN = bcdonadio.com verify return:1 OCSP response: no response sent --- Certificate chain 0 s:/CN=bcdonadio.com i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 i:/O=Digital Signature Trust Co./CN=DST Root CA X3 --- Server certificate -----BEGIN CERTIFICATE----- [long ASCII-armored certificate here] -----END CERTIFICATE----- subject=/CN=bcdonadio.com issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 --- No client certificate CA names sent Peer signing digest: SHA256 Server Temp Key: X25519, 253 bits --- SSL handshake has read 3520 bytes and written 326 bytes Verification: OK --- New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: [long session id here] Session-ID-ctx: Master-Key: [long master key here] PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 600 (seconds) TLS session ticket: [long session ticket here] Start Time: 1538394643 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: yes --- ---------- nginx 1.15.4 with OpenSSL 1.1.1 final -------- ---------- nginx 1.15.3 with OpenSSL 1.1.1 final 
-------- $ openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status CONNECTED(00000003) TLS server extension "renegotiation info" (id=65281), len=1 0000 - 00 . TLS server extension "EC point formats" (id=11), len=4 0000 - 03 00 01 02 .... TLS server extension "session ticket" (id=35), len=0 TLS server extension "status request" (id=5), len=0 TLS server extension "extended master secret" (id=23), len=0 depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 verify return:1 depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 verify return:1 depth=0 CN = bcdonadio.com verify return:1 OCSP response: ====================================== OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response Version: 1 (0x0) Responder Id: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 Produced At: Sep 30 06:00:00 2018 GMT Responses: Certificate ID: Hash Algorithm: sha1 Issuer Name Hash: 7EE66AE7729AB3FCF8A220646C16A12D6071085D Issuer Key Hash: A84A6A63047DDDBAE6D139B7A64565EFF3A8ECA1 Serial Number: 0338F3E6D2512FBF1BC91E766E237FE3E319 Cert Status: good This Update: Sep 30 06:00:00 2018 GMT Next Update: Oct 7 06:00:00 2018 GMT Signature Algorithm: sha256WithRSAEncryption 08:c1:47:f6:db:c1:21:da:14:6f:69:ee:8e:fd:b7:ad:82:4c: fa:d9:b8:03:93:a3:eb:ba:48:41:f7:d6:70:24:4a:79:e0:9a: a5:59:ea:d0:e6:ab:e1:ab:bf:60:b9:b4:0a:e1:18:de:a4:f6: 73:ee:74:82:16:f2:88:4f:df:62:18:fc:ec:64:4b:00:46:13: 25:ad:37:35:bc:e1:cc:96:d2:8b:af:26:62:5a:c3:f7:72:ad: d5:da:1b:70:96:c6:b6:e6:2b:06:5f:ab:61:49:ca:1a:a2:ac: b7:eb:91:1e:73:d3:e2:b1:dd:d9:f2:bc:58:e1:3f:07:78:f6: 4b:d5:46:a8:89:80:9b:dd:d1:99:8f:2a:06:06:13:f4:93:dd: 19:b3:ca:b6:77:3d:fa:eb:e4:11:58:ba:e4:41:f0:8a:df:9e: 9a:81:96:49:16:12:ec:5a:eb:49:67:4f:bc:44:0e:4d:a3:c4: f4:f1:a0:43:aa:d4:fb:5f:59:7e:b8:a9:52:81:63:05:f2:37: b6:23:5a:59:82:95:3a:cf:23:8a:ee:89:40:40:bb:93:81:68: 5a:38:b4:d0:e4:ff:eb:d7:c4:e6:de:27:50:73:d6:0e:53:97: 
33:4c:e9:44:21:d6:e6:eb:a4:73:c7:68:3a:af:a6:0a:6e:fa: df:92:ec:c2 ====================================== --- Certificate chain 0 s:/CN=bcdonadio.com i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 i:/O=Digital Signature Trust Co./CN=DST Root CA X3 --- Server certificate -----BEGIN CERTIFICATE----- [long ASCII-armored certificate here] -----END CERTIFICATE----- subject=/CN=bcdonadio.com issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3 --- No client certificate CA names sent Peer signing digest: SHA256 Server Temp Key: X25519, 253 bits --- SSL handshake has read 4064 bytes and written 326 bytes Verification: OK --- New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-GCM-SHA384 Session-ID: [long session id here] Session-ID-ctx: Master-Key: [long master key here] PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 600 (seconds) TLS session ticket: [long session ticket here] Start Time: 1538396356 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: yes --- ---------- nginx 1.15.3 with OpenSSL 1.1.1 final -------- This problem was also noticed here: https://community.centminmod.com/threads/nginx-announce-nginx-1-15-4.15672/page-2#post-67107 There are no messages on nginx error log about any failed attempt to contact the OCSP stapling server. Should I bisect or do you guys already have some idea about which commit broke this? -- Bernardo Donadio IT Automation Engineer at Stone Payments https://bcdonadio.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From sca at andreasschulze.de Mon Oct 1 13:04:57 2018 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 01 Oct 2018 15:04:57 +0200 Subject: OCSP stapling broken with 1.15.4 In-Reply-To: <2487211b-9088-7deb-60f6-18d81e005145@bcdonadio.com> Message-ID: <20181001150457.Horde.0Ftl2OC_IwMGaPSNjYo47W1@andreasschulze.de> Bernardo Donadio: > Hi. > > I've noticed that OCSP stapling was broken by 1.15.4, as you may see below: > > ---------- nginx 1.15.4 with OpenSSL 1.1.1 final -------- > $ openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status > CONNECTED(00000003) > TLS server extension "renegotiation info" (id=65281), len=1 > 0000 - 00 . > TLS server extension "EC point formats" (id=11), len=4 > 0000 - 03 00 01 02 .... > TLS server extension "session ticket" (id=35), len=0 > TLS server extension "extended master secret" (id=23), len=0 > depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3 > verify return:1 > depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3 > verify return:1 > depth=0 CN = bcdonadio.com > verify return:1 > OCSP response: no response sent works here: $ openssl11 version OpenSSL 1.1.1 11 Sep 2018 $ echo | openssl11 s_client -connect andreasschulze.de:443 -servername andreasschulze.de -tlsextdebug -status 2>&1 | grep -i ocsp OCSP response: OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response (webserver) # nginx -V nginx version: nginx/1.15.4 built with OpenSSL 1.1.1 11 Sep 2018 TLS SNI support enabled configure arguments: --prefix=/usr ... worth to mention: I'm using the configuration option "ssl_stapling_file" If you don't use ssl_stapling_file, after a nginx restart the first TLS session will not contain OCSP data. Did you try to measure twice? 
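A minimal sketch of the stapling setup described above, for reference (the server name, file paths, and resolver address are illustrative placeholders, not taken from either poster's configuration):

```nginx
# Sketch only: OCSP stapling with a pre-fetched response file.
# All names and paths below are examples.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    ssl_stapling on;

    # Option A: staple a DER-encoded OCSP response fetched out of band
    # (e.g. by a cron job). With this, even the first handshake after a
    # restart carries stapled data.
    ssl_stapling_file /etc/ssl/example.com.ocsp.der;

    # Option B: without ssl_stapling_file, nginx fetches the response
    # from the CA's OCSP responder at runtime and needs a resolver;
    # the first handshake per worker then has no stapled response.
    # resolver 127.0.0.1;
}
```

When ssl_stapling_file is used, the file has to be refreshed externally (for example with openssl ocsp from cron) before the response's nextUpdate time passes.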
Andreas
From bcdonadio at bcdonadio.com Mon Oct 1 13:43:12 2018 From: bcdonadio at bcdonadio.com (Bernardo Donadio) Date: Mon, 1 Oct 2018 10:43:12 -0300 Subject: OCSP stapling broken with 1.15.4 In-Reply-To: <20181001150457.Horde.0Ftl2OC_IwMGaPSNjYo47W1@andreasschulze.de> References: <20181001150457.Horde.0Ftl2OC_IwMGaPSNjYo47W1@andreasschulze.de> Message-ID: On 10/1/18 10:04 AM, A. Schulze wrote: > Did you try to measure twice? Indeed, with further tests I think that the stapling is working... sometimes. I've restored the 1.15.4 package and have been making some requests. Some of them are correctly stapled, others are not. There's no restart between tests. I'm not using the staple file, though. Is this behavior expected without such configuration? Also, I've enabled ssl_early_data. [bcdonadio at RJ_DVP0100 ~]$ date; openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status 2>/dev/null | grep -i ocsp Mon Oct 1 10:24:07 -03 2018 OCSP response: OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response ^C [bcdonadio at RJ_DVP0100 ~]$ date; openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status 2>/dev/null | grep -i ocsp Mon Oct 1 10:27:02 -03 2018 OCSP response: no response sent ^C [bcdonadio at RJ_DVP0100 ~]$ date; openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status 2>/dev/null | grep -i ocsp Mon Oct 1 10:39:18 -03 2018 OCSP response: no response sent ^C [bcdonadio at RJ_DVP0100 ~]$ date; openssl s_client -connect bcdonadio.com:443 -tlsextdebug -status 2>/dev/null | grep -i ocsp Mon Oct 1 10:39:27 -03 2018 OCSP response: OCSP Response Data: OCSP Response Status: successful (0x0) Response Type: Basic OCSP Response ^C -- Bernardo Donadio IT Automation Engineer at Stone Payments https://bcdonadio.com/
From r at roze.lv Mon Oct 1 14:45:40 2018 From: r at roze.lv (Reinis Rozitis) Date: Mon, 1 Oct 2018 17:45:40 +0300 Subject: OCSP stapling broken with 1.15.4 In-Reply-To: References: <20181001150457.Horde.0Ftl2OC_IwMGaPSNjYo47W1@andreasschulze.de> Message-ID: <000001d45995$6c690c90$453b25b0$@roze.lv> > Indeed, with further tests I think that the stapling is working... > sometimes. > > > I'm not using the staple file, though. Is this behavior expected without such > configuration? Also, I've enabled ssl_early_data. Each nginx worker has its own cache. Depending on your worker_processes you might get that many responses without OCSP data. rr
From sca at andreasschulze.de Mon Oct 1 14:47:50 2018 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 1 Oct 2018 16:47:50 +0200 Subject: OCSP stapling broken with 1.15.4 In-Reply-To: References: <20181001150457.Horde.0Ftl2OC_IwMGaPSNjYo47W1@andreasschulze.de> Message-ID: On 01.10.18 at 15:43, Bernardo Donadio wrote: > I've restored the 1.15.4 package and have been making some requests. > Some of them are correctly stapled, others are not. There's no restart > between tests. maybe you run multiple workers and for each worker there is one first request? > I'm not using the staple file, though. Is this behavior expected > without such configuration? it's documented somewhere, I guess on the nginx.org website > Also, I've enabled ssl_early_data. I don't use this option. Is it TLS1.3 / 0RTT related?
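To the last question above: yes, ssl_early_data is nginx's TLSv1.3 0-RTT ("early data") support. A sketch of how it is typically enabled follows; the backend address is a placeholder, and since 0-RTT requests are replayable, forwarding the early-data indication to the backend is commonly recommended:

```nginx
# Sketch only: TLSv1.3 0-RTT. The backend address is an example.
server {
    listen 443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Accept client application data in the first flight of a resumed
    # TLSv1.3 handshake (0-RTT). Such requests can be replayed by an
    # attacker, so backends must treat them accordingly.
    ssl_early_data on;

    location / {
        # In newer nginx versions, $ssl_early_data expands to "1" while
        # the handshake is still in progress, letting the backend reject
        # non-idempotent early requests (e.g. with 425 Too Early).
        proxy_set_header Early-Data $ssl_early_data;
        proxy_pass http://127.0.0.1:8080;
    }
}
```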
Andreas
From rob at cow-frenzy.co.uk Mon Oct 1 19:14:30 2018 From: rob at cow-frenzy.co.uk (Rob Fulton) Date: Mon, 1 Oct 2018 20:14:30 +0100 Subject: Nginx caching proxy dns name even when using variables In-Reply-To: <20180927145322.GO56558@mdounin.ru> References: <8ba83b8a-4604-53cd-29f2-a966b3a20037@cow-frenzy.co.uk> <20180927145322.GO56558@mdounin.ru> Message-ID: <21b69df7-74e1-733d-e67f-36abfbae15e7@cow-frenzy.co.uk> Hi, On 27/09/2018 15:53, Maxim Dounin wrote: > Hello! > > On Thu, Sep 27, 2018 at 03:27:03PM +0100, Rob Fulton wrote: > >> I've done some further testing on this today and discovered that >> the configuration works correctly when the proxy_pass url is >> accessed via http, I can see dns queries for the proxy_server >> url every minute as per the ttl. The moment I change the url to >> https, this stops. Is this a known limitation? > Most likely, the problem is that you have > > proxy_pass https://somehostname.com; > > somewhere in the configuration, without variables - so nginx > resolves the name during configuration parsing. As a result, your > construct > > set $proxy_server somehostname.com; > proxy_pass https://$proxy_server; > > does not try to resolve the name, but rather ends up using the > existing upstream for somehostname.com. Thank you very much for your help, you were correct: I had a proxy_pass directive for a 404 error page to the same hostname configured without a variable. Setting this correctly resulted in the correct behavior. Regards Rob
From brian at brianwhalen.net Mon Oct 1 19:22:59 2018 From: brian at brianwhalen.net (Brian W.) Date: Mon, 1 Oct 2018 12:22:59 -0700 Subject: Redirect to external site Message-ID: I have gotten the ldap setup working with their backend-sample-app.py file properly and it displays the hello world message. What I cannot figure out is how to redirect it to another url on another machine, as opposed to that local page, if auth works. Most of the attempts I have tried lead to a blank white page sadly.
I tried a simple requests or urllibrequest, as well as a selfsend_header with the new url. Has anyone else gotten this to work? Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From quintinpar at gmail.com Tue Oct 2 03:04:25 2018 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 2 Oct 2018 08:34:25 +0530 Subject: Will Nginx serve stale cache after expiry if was unable to refresh the cache? Message-ID: Relevant code: proxy_cache_valid 200 301 302 1d; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache_background_update on; proxy_cache_lock on; The URL in question is obviously cached for 1 day. 15 minutes after the day is over, the proxy gets a request and tries to refresh the content but is met with an error (500) from the backend. In this case, will it continue to serve stale until it gets a proper new 200 content? - Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Oct 2 07:52:42 2018 From: nginx-forum at forum.nginx.org (alisampras) Date: Tue, 02 Oct 2018 03:52:42 -0400 Subject: Web and Mail Proxy Server Configuration In-Reply-To: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> References: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3cace6ff4fde567bc1e32ca2140aabbc.NginxMailingListEnglish@forum.nginx.org> Hi all, Based on googling, i found some of the Mail proxy config as per below. Question is, is that config is valid for both as Web server and Mail proxy? 
[root at ns2 conf]# more nginx.conf worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name xxx.xxx.com; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } mail { listen 443; ssl on; ssl_certificate /etc/ssl/remote.domain.com-unified.crt; ssl_certificate_key /etc/ssl/remote.domain.com.key; ssl_session_timeout 5m; server_name remote.domain.com autodiscover.domain.com; # Set global proxy settings proxy_http_version 1.1; proxy_connect_timeout 360; proxy_read_timeout 360; proxy_pass_request_headers on; proxy_pass_header Date; proxy_pass_header Server; proxy_pass_header Authorization; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding ""; location / { proxy_pass https://10.202.1.14; } location ~* ^/owa { proxy_pass https://10.202.1.14; } location ~* ^/ecp { proxy_pass https://10.202.1.14; } location ~* ^/rpc { proxy_pass https://10.202.1.14; } location ~* ^/ews { proxy_pass https://10.202.1.14; } location ~* ^/exchweb { proxy_pass 
https://10.202.1.14; } location ~* ^/public { proxy_pass https://10.202.1.14; } location ~* ^/exchange { proxy_pass https://10.202.1.14; } location ~* ^/Microsoft-Server-ActiveSync { proxy_set_header X-Forwarded-Proto https; proxy_pass https://10.202.1.14; } location ~* ^/autodiscover { proxy_pass https://10.202.1.14; } error_log /usr/local/nginx/logs/owa-ssl-error.log; access_log /usr/local/nginx/logs/owa-ssl-access.log; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281456,281469#msg-281469 From stefan.mueller.83 at gmail.com Tue Oct 2 09:58:22 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Tue, 2 Oct 2018 11:58:22 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> Message-ID: An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Oct 2 12:22:30 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Oct 2018 15:22:30 +0300 Subject: Will Nginx serve stale cache after expiry if was unable to refresh the cache? In-Reply-To: References: Message-ID: <20181002122230.GA56558@mdounin.ru> Hello! On Tue, Oct 02, 2018 at 08:34:25AM +0530, Quintin Par wrote: > Relevant code: > > proxy_cache_valid 200 301 302 1d; > > proxy_cache_use_stale error timeout invalid_header updating http_500 > http_502 http_503 http_504; > > proxy_cache_background_update on; > > proxy_cache_lock on; > > The URL in question is obviously cached for 1 day. 15 minutes after the day > is over, the proxy gets a request and tries to refresh the content but is > met with an error (500) from the backend. 
In this case, will it continue to > serve stale until it gets a proper new 200 content? Yes, as per "proxy_cache_use_stale updating;", nginx will serve the stale cached response until it is able to update the cache. -- Maxim Dounin http://mdounin.ru/
From vl at nginx.com Tue Oct 2 12:40:48 2018 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 2 Oct 2018 15:40:48 +0300 Subject: Redirect to external site In-Reply-To: References: Message-ID: <20181002124047.GA23609@vlpc> On Mon, Oct 01, 2018 at 12:22:59PM -0700, Brian W. wrote: > I have gotten the ldap setup working with their backend-sample-app.py file > properly and it displays the hello world message. What I cannot figure out > is how to redirect it to another url on another machine, as opposed to that > local page, if auth works. Most of the attempts I have tried lead to a > blank white page sadly. I tried a simple requests or urllibrequest, as well > as a selfsend_header with the new url. Has anyone else gotten this to work? > > Brian What happens after you successfully authenticate a user is controlled by the location where your 'auth_request' directive is written. In the example config, it contains: location / { auth_request /auth-proxy; error_page 401 =200 /login; proxy_pass http://backend/; } i.e. the user's request is passed to an upstream (the backend-sample-app.py application). If, instead of proxying, you want to return some redirect (and probably set some headers with data obtained from authentication), you have to set up nginx to do it, using the appropriate directives.
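For instance, a sketch of the redirect variant described above (assuming the auth_request module is compiled in; the auth daemon port and the target URL are placeholders):

```nginx
# Sketch only: redirect to another machine after successful auth.
location / {
    auth_request /auth-proxy;       # subrequest to the LDAP auth daemon
    error_page 401 =200 /login;     # unauthenticated users go to /login

    # Instead of proxying to the local hello-world backend, send
    # authenticated users to the external application.
    return 302 https://app.internal.example.com/;
}

location = /auth-proxy {
    internal;
    proxy_pass http://127.0.0.1:8888;   # placeholder auth daemon address
}
```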
Please refer to: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#return http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header From mdounin at mdounin.ru Tue Oct 2 15:28:50 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Oct 2018 18:28:50 +0300 Subject: nginx-1.15.5 Message-ID: <20181002152849.GI56558@mdounin.ru> Changes with nginx 1.15.5 02 Oct 2018 *) Bugfix: a segmentation fault might occur in a worker process when using OpenSSL 1.1.0h or newer; the bug had appeared in 1.15.4. *) Bugfix: of minor potential bugs. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Oct 2 15:59:56 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 2 Oct 2018 11:59:56 -0400 Subject: [nginx-announce] nginx-1.15.5 In-Reply-To: <20181002152901.GJ56558@mdounin.ru> References: <20181002152901.GJ56558@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.5 for Windows https://kevinworthington.com/nginxwin1155 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Oct 2, 2018 at 11:29 AM, Maxim Dounin wrote: > Changes with nginx 1.15.5 02 Oct > 2018 > > *) Bugfix: a segmentation fault might occur in a worker process when > using OpenSSL 1.1.0h or newer; the bug had appeared in 1.15.4. > > *) Bugfix: of minor potential bugs. 
> > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From r at roze.lv Tue Oct 2 16:41:45 2018 From: r at roze.lv (Reinis Rozitis) Date: Tue, 2 Oct 2018 19:41:45 +0300 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> Message-ID: <000601d45a6e$cee8e960$6cbabc20$@roze.lv> > This allows permission management via user accounts but it can get bulky as soon as you set up user accounts for permission management of each backend application, as they pose a higher risk, as indicated in the previous email Well you asked how to proxy unix sockets... > that is all put in the same http{} block. If you put everything (both the user unix sockets and also the parent proxy server) under the same http{} block then it makes no sense since a single instance of nginx always runs under the same user (and beats the whole user/app isolation). It's simpler then to just make virtualhosts without the sockets and without the proxy. > Nginx just starts php-fpm No. Depending on the distribution there might be some init and/or systemd scripts which start both daemons, but on its own nginx doesn't do that. > 4.
(new) how to debug > In /etc/nginx/nginx.conf as there is: > access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx_access,nohostname main; > error_log syslog:server=unix:/dev/log,facility=local7,tag=nginx_error,nohostname error; > so I assume Debug Logging is available although $ nginx -V 2>&1 | grep -- '--with-debug' does not return anything. It means that nginx is logging to syslog (which then usually writes somewhere under /var/log). You can change/point both logs also directly to a file. --with-debug is only present when nginx is compiled in debug mode to log internal things and provide more detailed information in case of bugs. I doubt it will give any benefit in this case. In general you are mixing a lot of things together, like asking about a BSD firewall, NATs, Bind and then trying to implement it on a specific linux-based ARM blackbox. I would suggest to start experimenting/researching different technologies one by one rather than trying to achieve everything at once. rr From stefan.mueller.83 at gmail.com Tue Oct 2 18:32:10 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Tue, 2 Oct 2018 20:32:10 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <000601d45a6e$cee8e960$6cbabc20$@roze.lv> References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> Message-ID: An HTML attachment was scrubbed... 
URL: From r at roze.lv Wed Oct 3 00:09:07 2018 From: r at roze.lv (Reinis Rozitis) Date: Wed, 3 Oct 2018 03:09:07 +0300 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> Message-ID: <000001d45aad$4d488f70$e7d9ae50$@roze.lv> > so all goes in the same nginx.conf but in different http{} block or do I need one nginx.conf for each, the user unix sockets and also the parent proxy server? A typical nginx configuration has only one http {} block. You can look at some examples: https://nginx.org/en/docs/http/request_processing.html https://nginx.org/en/docs/http/server_names.html https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/ > You suggesting to setup virtualhosts what listen to a port whereto traffic is forwarded from the router. I don't to have multiple ports open at the router, so I would like to stick with UNIX Sockets and proxy. Unless by "router" you mean the same Synology box you can't proxy unix sockets over TCP, they work only inside a single server/machine. Also you don't need to forward multiple ports, just 80 and 443 (if ssl) and have name-based virtualhosts. rr From nginx-forum at forum.nginx.org Wed Oct 3 10:36:49 2018 From: nginx-forum at forum.nginx.org (alisampras) Date: Wed, 03 Oct 2018 06:36:49 -0400 Subject: Web and Mail Proxy Server Configuration In-Reply-To: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> References: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9ee7800b378e2a54b875cc5189db9cf3.NginxMailingListEnglish@forum.nginx.org> Hi All, BTW, i had compile nginx 1.15.4 from Mainline. 
nginx version: nginx/1.15.4 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) built with OpenSSL 1.0.2p 14 Aug 2018 TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx --sbin-path=/usr/local/nginx/sbin/nginx --modules-path=/usr/local/nginx/modules --conf-path=/usr/local/nginx/conf/nginx.conf --error-log-path=/usr/local/nginx/logs/error.log --pid-path=/usr/local/nginx/logs/nginx.pid --http-log-path=/usr/local/nginx/logs/access.log --user=nginx --group=nginx --with-pcre=/usr/local/src/pcre-8.42 --with-zlib=/usr/local/src/zlib-1.2.11 --with-openssl=/usr/local/src/openssl-1.0.2p --with-http_ssl_module --with-mail --with-mail_ssl_module This is my NGINX directory listing and I don't see the "Modules" directory. Is that normal? client_body_temp conf fastcgi_temp html html.orig logs proxy_temp sbin scgi_temp uwsgi_temp Can anyone share a configuration file with simple HTTP and mail proxy settings in nginx.conf? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281456,281498#msg-281498
From nginx-forum at forum.nginx.org Wed Oct 3 10:39:10 2018 From: nginx-forum at forum.nginx.org (alisampras) Date: Wed, 03 Oct 2018 06:39:10 -0400 Subject: Web and Mail Proxy Server Configuration In-Reply-To: <9ee7800b378e2a54b875cc5189db9cf3.NginxMailingListEnglish@forum.nginx.org> References: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> <9ee7800b378e2a54b875cc5189db9cf3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi All, We are using OWA, OA and ActiveSync for internet users. The Exchange connection protocol is RPC over HTTPS. This forum looks dead :-( Any help?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281456,281499#msg-281499
From r at roze.lv Wed Oct 3 11:55:48 2018 From: r at roze.lv (Reinis Rozitis) Date: Wed, 3 Oct 2018 14:55:48 +0300 Subject: Web and Mail Proxy Server Configuration In-Reply-To: <9ee7800b378e2a54b875cc5189db9cf3.NginxMailingListEnglish@forum.nginx.org> References: <4345674d26f3f81689506833cf313c9a.NginxMailingListEnglish@forum.nginx.org> <9ee7800b378e2a54b875cc5189db9cf3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <001c01d45b10$066a2150$133e63f0$@roze.lv> > This is my NGINX directory listing and I don't see the "Modules" directory. Is that > normal? Yes, that's normal. By default nginx compiles everything into the executable, so unless you build dynamic modules (--add-dynamic-module) there won't be any .so files. > Can anyone share a configuration file with simple HTTP and Mail proxy settings > in nginx.conf? I would start with this: https://docs.nginx.com/nginx/admin-guide/mail-proxy/mail-proxy/ rr
From tolmachev.vlad at gmail.com Wed Oct 3 17:40:04 2018 From: tolmachev.vlad at gmail.com (=?UTF-8?B?0JLQu9Cw0LTQuNGB0LvQsNCyINCi0L7Qu9C80LDRh9C10LI=?=) Date: Wed, 3 Oct 2018 20:40:04 +0300 Subject: Fwd: Encrypted SNI In-Reply-To: References: Message-ID: When will nginx implement Encrypted SNI support? Cloudflare already does this, https://www.cloudflare.com/ssl/encrypted-sni/ -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mdounin at mdounin.ru Wed Oct 3 18:05:41 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 3 Oct 2018 21:05:41 +0300 Subject: Fwd: Encrypted SNI In-Reply-To: References: Message-ID: <20181003180541.GR56558@mdounin.ru> Hello! On Wed, Oct 03, 2018 at 08:40:04PM +0300, Vladislav Tolmachev wrote: > When will nginx implement Encrypted SNI support? > Cloudflare already does this, https://www.cloudflare.com/ssl/encrypted-sni/ Likely once Encrypted SNI support is added to the OpenSSL library. Currently, it is not there.
-- 
Maxim Dounin
http://mdounin.ru/

From stefan.mueller.83 at gmail.com Wed Oct 3 21:02:30 2018
From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=)
Date: Wed, 3 Oct 2018 23:02:30 +0200
Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN
In-Reply-To: <000001d45aad$4d488f70$e7d9ae50$@roze.lv>
References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv>
Message-ID: <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com>

An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Thu Oct 4 01:52:24 2018
From: nginx-forum at forum.nginx.org (alisampras)
Date: Wed, 03 Oct 2018 21:52:24 -0400
Subject: Web and Mail Proxy Server Configuration
In-Reply-To: <001c01d45b10$066a2150$133e63f0$@roze.lv>
References: <001c01d45b10$066a2150$133e63f0$@roze.lv>
Message-ID: <76f7d8f1923324d414993b9368e9a0a6.NginxMailingListEnglish@forum.nginx.org>

Hi Reinis,

Thanks for your reply. Noted on the Modules directory.

Yes, I went through the NGINX Admin Guide, but it talks about the SMTP, POP3 and IMAP protocols. My internal Exchange is 2010 and all outside users access email (OWA, OA and ActiveSync) via RPC over HTTPS. Hope you can show me some useful nginx conf files to suit my setup. Also let me know if I need to post my queries on another forum.
Thanks man

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281456,281515#msg-281515

From nginx-forum at forum.nginx.org Thu Oct 4 02:17:55 2018
From: nginx-forum at forum.nginx.org (alisampras)
Date: Wed, 03 Oct 2018 22:17:55 -0400
Subject: Web and Mail Proxy Server Configuration
In-Reply-To: <76f7d8f1923324d414993b9368e9a0a6.NginxMailingListEnglish@forum.nginx.org>
References: <001c01d45b10$066a2150$133e63f0$@roze.lv> <76f7d8f1923324d414993b9368e9a0a6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi All,

All our outside users use OWA, OA and ActiveSync. These users will connect to my internal Exchange 2010 via RPC over HTTPS. One example: users will access OWA at mail.example.com/owa, and this should proxy to the internal Exchange 2010 server exch01.example.com.

In all my previous nginx.conf configs, I had created a "mail" context. I was probably confused, as that mail context is for the SMTP, POP3 and IMAP protocols. So for my case, I still have to use the http context and add directives for my web server pages, and also proxy any request for mail.example.com/owa to my internal Exchange server. The same should go for my other users accessing OA and ActiveSync. Am I correct?

Any help on nginx config for my scenario?
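[The setup described above can be sketched as an ordinary http-context reverse proxy. This is an editorial sketch only: hostnames, certificate paths, sizes and timeouts are placeholders, and a real Exchange deployment may need additional tuning (e.g. authentication pass-through) beyond what is shown.]

```nginx
# Sketch: proxy /owa in the http {} context to the internal Exchange host.
server {
    listen 443 ssl;
    server_name mail.example.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/ssl/certs/mail.example.com.crt;
    ssl_certificate_key /etc/ssl/private/mail.example.com.key;

    # Allow large attachments through the proxy (assumed limit).
    client_max_body_size 2g;

    location /owa {
        proxy_pass https://exch01.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # RPC over HTTPS holds connections open for a long time,
        # so a generous read timeout helps (value is an assumption).
        proxy_read_timeout 360s;
    }
}
```

[Analogous location blocks for /rpc (Outlook Anywhere) and /Microsoft-Server-ActiveSync would follow the same pattern.]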
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281456,281516#msg-281516

From nginx-forum at forum.nginx.org Thu Oct 4 02:32:54 2018
From: nginx-forum at forum.nginx.org (George)
Date: Wed, 03 Oct 2018 22:32:54 -0400
Subject: Fwd: Encrypted SNI
In-Reply-To: <20181003180541.GR56558@mdounin.ru>
References: <20181003180541.GR56558@mdounin.ru>
Message-ID: <3b64965c44d5074a41e16a6cf77d1765.NginxMailingListEnglish@forum.nginx.org>

Nginx supports BoringSSL too, and apparently it already has ESNI support: https://www.theregister.co.uk/2018/07/17/encrypted_server_names/

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281512,281517#msg-281517

From mdounin at mdounin.ru Thu Oct 4 13:44:59 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 4 Oct 2018 16:44:59 +0300
Subject: Fwd: Encrypted SNI
In-Reply-To: <3b64965c44d5074a41e16a6cf77d1765.NginxMailingListEnglish@forum.nginx.org>
References: <20181003180541.GR56558@mdounin.ru> <3b64965c44d5074a41e16a6cf77d1765.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20181004134459.GU56558@mdounin.ru>

Hello!

On Wed, Oct 03, 2018 at 10:32:54PM -0400, George wrote:

> Nginx supports BoringSSL too, and apparently it already has ESNI support:
> https://www.theregister.co.uk/2018/07/17/encrypted_server_names/

Yes, I've seen this article too. But I don't see any traces of ESNI support either in the BoringSSL sources or at boringssl-review.googlesource.com. As far as I understand, "Support for ESNI can be found in BoringSSL" in this article means that some ESNI patches for BoringSSL might exist somewhere, not yet committed, and probably not yet public.
-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Oct 4 19:16:58 2018
From: nginx-forum at forum.nginx.org (huguesjoyal)
Date: Thu, 04 Oct 2018 15:16:58 -0400
Subject: Using variable in vhost
In-Reply-To: 
References: 
Message-ID: <846da936e35b2c1dd6a21e9f5cc8f500.NginxMailingListEnglish@forum.nginx.org>

Hi,

I don't see any reason why it would be slower, since the recommended way to pass params to FastCGI is by using dynamic variables, e.g.:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

The way you're setting your variable is static per website; the value is not dynamically linked to the request. So it might have a slight overhead when starting the web server for parsing the config, instead of having the full config repeated for each virtual host, but overall, once the "virtual server" is loaded into memory, it shouldn't be a problem.

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,280128,281527#msg-281527

From brian at brianwhalen.net Fri Oct 5 15:59:21 2018
From: brian at brianwhalen.net (Brian W.)
Date: Fri, 5 Oct 2018 08:59:21 -0700
Subject: Proxy_pass url with a #
Message-ID: 

I have this largely working aside from the above. If I proxy_pass to a URL with a # in it, it's treated like a comment and the rest is ignored. If I encode it with %23, I get a 404 error. What have you all done to get past this?

Brian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From denis.papathanasiou at gmail.com Sat Oct 6 21:15:50 2018
From: denis.papathanasiou at gmail.com (Denis Papathanasiou)
Date: Sat, 6 Oct 2018 17:15:50 -0400
Subject: Simple proxy_pass settings are invalidating/deleting multipart file data sent by POST
Message-ID: 

As noted here -- https://github.com/gin-gonic/gin/issues/1582 -- I have a simple web app that handles multipart form file uploads correctly on its own, but when I put it behind a simple proxy_pass configuration like below, the underlying application sees a null value where the file form data was.

This is the configuration I am using:

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass https://127.0.0.1:9001;
}

I have also experimented with setting proxy_request_buffering to 'off', but it still gave me the same result.

If it is not a problem with the web framework (and based on the logs I suspect it is not), how can I update my nginx config so it works properly?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From janejojo at gmail.com Sat Oct 6 21:51:47 2018
From: janejojo at gmail.com (Jane Jojo)
Date: Sun, 7 Oct 2018 03:21:47 +0530
Subject: Nginx cache returning empty response for just the home page
Message-ID: 

Here's the issue in action [video]: https://d.pr/v/HmAiK0

I am using Nginx http caching extensively with the following cache key:

proxy_cache_key "$scheme://$host$uri";

I know this is a cache issue because I can invalidate it with:

proxy_cache_bypass $arg_nocache;

Here's a video of that in action: https://d.pr/v/Bj5ey6

Now, this happens only on the homepage URL, and the problem recurs after a while (I think after the cache expiry).

Can you help me understand why this is happening and what I can do to fix?
Here?s my code for reference: charset utf-8; proxy_cache_valid 200 301 302 1d; proxy_redirect off; proxy_cache_valid 404 1m; proxy_cache_revalidate on; proxy_cache_background_update on; proxy_cache_lock on; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_cache_bypass $arg_nocache; -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Sat Oct 6 22:20:21 2018 From: peter_booth at me.com (Peter Booth) Date: Sat, 06 Oct 2018 18:20:21 -0400 Subject: Nginx cache returning empty response for just the home page In-Reply-To: References: Message-ID: <9E1273B0-2010-4EF6-9564-E3F7E98B9721@me.com> You need to understand what requests are being received, what responses are being sent and the actual keys being used to write to your cache. This means intelligent request logging, possibly use of redbot.org, and examination of your cache. I used to use a script that someone had posted here years ago that would dump cache contents along with cache keys. Sent from my iPhone > On Oct 6, 2018, at 5:51 PM, Jane Jojo wrote: > > Here?s the issue in action [video]: https://d.pr/v/HmAiK0 > > I am using Nginx http caching extensively with the following cache key > > proxy_cache_key "$scheme://$host$uri"; > > I know this is a cache issue because I can invalidate it with > > proxy_cache_bypass $arg_nocache; > > Here?s a video of that in action: https://d.pr/v/Bj5ey6 > > Now, this happens only to the homepage URL and the problem recurs after a while (I think after the cache expiry). > > Can you help me understand why this is happening and what I can do to fix? 
> > Here?s my code for reference: > charset utf-8; > proxy_cache_valid 200 301 302 1d; > proxy_redirect off; > proxy_cache_valid 404 1m; > proxy_cache_revalidate on; > proxy_cache_background_update on; > proxy_cache_lock on; > proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; > proxy_cache_bypass $arg_nocache; > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Oct 6 22:35:07 2018 From: francis at daoine.org (Francis Daly) Date: Sat, 6 Oct 2018 23:35:07 +0100 Subject: Simple proxy_pass settings are invalidating/deleting multipart file data sent by POST In-Reply-To: References: Message-ID: <20181006223507.4nblurztpcugiogs@daoine.org> On Sat, Oct 06, 2018 at 05:15:50PM -0400, Denis Papathanasiou wrote: Hi there, > As noted here -- https://github.com/gin-gonic/gin/issues/1582 -- I have a > simple web app that handles multipart form file uploads correctly on its > own, but when I put it behind a simple proxy_pass configuration like below, > the underlying application sees a null value where the file form data was. If you "tcpdump" (or otherwise) and look at the traffic going to nginx, and going to the backend, when an upload happens, do you see any difference? I've tried a quick test using a command like curl -v -F file=@/etc/hosts http://localhost:8000/x/ and the only obvious-to-me difference is that nginx sees the client "Expect: 100-Continue" request header, but does not send that to upstream. I think that that should probably not make a difference to the upstream service. > If it is not a problem with the wen framework (and based on the logs I > suspect it is not), how can I update my nginx config so it works properly? 
I suspect that you'll have to do some more investigation, to see what is different on your upstream server when nginx is and is not involved. That might help point at the problem, and see whether the solution is in nginx or elsewhere. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Oct 6 22:41:24 2018 From: francis at daoine.org (Francis Daly) Date: Sat, 6 Oct 2018 23:41:24 +0100 Subject: Proxy_pass url with a # In-Reply-To: References: Message-ID: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> On Fri, Oct 05, 2018 at 08:59:21AM -0700, Brian W. wrote: Hi there, > In have this largely working aside from the above. If I proxy pass to the > url with a # in it it's treated like a comment and ignores the rest. If I > encode it with %23 I get a 404 error. What have you all done to get past > this? # is usually not included in a url that is sent in a request to a web server. Can you describe what you want to achieve, and what you are currently doing? Perhaps that will make clearer how to make it happen. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun Oct 7 11:00:48 2018 From: nginx-forum at forum.nginx.org (George) Date: Sun, 07 Oct 2018 07:00:48 -0400 Subject: Fwd: Encrypted SNI In-Reply-To: <20181004134459.GU56558@mdounin.ru> References: <20181004134459.GU56558@mdounin.ru> Message-ID: Thanks Maxim. Guess we just need to wait :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281512,281540#msg-281540 From brian at brianwhalen.net Sun Oct 7 13:54:54 2018 From: brian at brianwhalen.net (Brian Whalen) Date: Sun, 7 Oct 2018 06:54:54 -0700 Subject: Proxy_pass url with a # In-Reply-To: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> Message-ID: <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> In summary I am trying to reverse proxy a group of windows web servers, where there is a # in the url. 
I have it mostly working except this. I used the nginx ldap auth modules, including their sample backend app. In that app, when auth succeeds a page is presented with a "hello world, you were successful" message; what I want to do is actually load one of those Windows web servers. I've been trying to use the Python requests module via a typical import, but have not gotten this quite right. If I use # then it is treated as a comment, skipping the rest of the URL, and if I use %23 I get a 404 in a tcpdump. I don't think the hash is the whole problem though, since if I try to load some other page after the successful auth it also fails.

I know also that if I just do a straight proxy_pass to the URL without the ldap auth and then type in nginxserver/path#/with/pound, that works.

Brian

On 10/6/18 3:41 PM, Francis Daly wrote:
> On Fri, Oct 05, 2018 at 08:59:21AM -0700, Brian W. wrote:
>
> Hi there,
>
>> In have this largely working aside from the above. If I proxy pass to the
>> url with a # in it it's treated like a comment and ignores the rest. If I
>> encode it with %23 I get a 404 error. What have you all done to get past
>> this?
> # is usually not included in a url that is sent in a request to a web server.
>
> Can you describe what you want to achieve, and what you are currently doing?
>
> Perhaps that will make clearer how to make it happen.
>
> Cheers,
>
> f

From stefan.mueller.83 at gmail.com Sun Oct 7 19:42:51 2018
From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=)
Date: Sun, 7 Oct 2018 21:42:51 +0200
Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN
In-Reply-To: <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com>
References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com>
Message-ID: 

Good evening,

in the past we were mailing each other on a daily basis, but now it is silent. Is everything alright?

On 03.10.2018 23:02, Stefan Müller wrote:
>
> thank you again for your quick answer, but I'm getting lost
>
>> A typical nginx configuration has only one http {} block.
>>
>> You can look at some examples:
> I'm aware of those and other examples. What confuses me is that you say
> that, but you also said in the email before that one:
>
>> If you put everything (both the user unix sockets and also the parent proxy server) under the same http{} block then it makes no sense since a single instance of nginx always runs under the same user (and beats the whole user/app isolation).
>
> so how must the setup be to get the whole user/app isolation?
>
> nginx.pid  - master process
> \_nginx.conf
>   \_http{}  - master server
>   \_http{}  - proxied/app servers
>
> or
>
> nginx.pid  - master process
> \_nginx1.conf - master server
>   \_http{}   - reverse proxy server
> \_nginx2.conf - proxied servers
>   \_http{}   - proxied/app servers
>
> or?
>
> If it is only one nginx.pid, how do I need to configure it to run
> nginx1.conf and nginx2.conf?
> > > >> Unless by "router" you mean the same Synology box you can't proxy unix sockets over TCP, they work only inside a single server/machine. > I mean my fibre router and I'm aware that unix sockets? work only > inside a single server/machine. I'll use it only to redirect to the > DNS Server what will run on the Synology box > > > >> Also you don't need to forward multiple ports, just 80 and 443 (if ssl) and have name-based virtualhosts. > > you got me, I have mistaken that, it got to late last night > > > On 03.10.2018 02:09, Reinis Rozitis wrote: >>> so all goes in the same nginx.conf but in different http{} block or do I need one nginx.conf for each, the user unix sockets and also the parent proxy server? >> A typical nginx configuration has only one http {} block. >> >> You can look at some examples: >> https://nginx.org/en/docs/http/request_processing.html >> https://nginx.org/en/docs/http/server_names.html >> https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/ >> >> >>> You suggesting to setup virtualhosts what listen to a port whereto traffic is forwarded from the router. I don't to have multiple ports open at the router, so I would like to stick with UNIX Sockets and proxy. >> Unless by "router" you mean the same Synology box you can't proxy unix sockets over TCP, they work only inside a single server/machine. >> >> Also you don't need to forward multiple ports, just 80 and 443 (if ssl) and have name-based virtualhosts. >> >> rr >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From janejojo at gmail.com Sun Oct 7 20:57:36 2018 From: janejojo at gmail.com (Jane Jojo) Date: Sun, 7 Oct 2018 13:57:36 -0700 Subject: Nginx cache returning empty response for just the home page In-Reply-To: <9E1273B0-2010-4EF6-9564-E3F7E98B9721@me.com> References: <9E1273B0-2010-4EF6-9564-E3F7E98B9721@me.com> Message-ID: Thanks for this Peter. 
I?ll look at redbot. Do you by any chance have the script? My problem is intermittent and I don?t know if it?s a good idea to actively listen to production logging. On Sat, Oct 6, 2018 at 3:21 PM Peter Booth via nginx wrote: > You need to understand what requests are being received, what responses > are being sent and the actual keys being used to write to your cache. > > This means intelligent request logging, possibly use of redbot.org, and > examination of your cache. I used to use a script that someone had posted > here years ago that would dump cache contents along with cache keys. > > Sent from my iPhone > > On Oct 6, 2018, at 5:51 PM, Jane Jojo wrote: > > Here?s the issue in action [video]: https://d.pr/v/HmAiK0 > > > > I am using Nginx http caching extensively with the following cache key > > > > proxy_cache_key "$scheme://$host$uri"; > > > > I know this is a cache issue because I can invalidate it with > > > > proxy_cache_bypass $arg_nocache; > > > > Here?s a video of that in action: https://d.pr/v/Bj5ey6 > > > > Now, this happens only to the homepage URL and the problem recurs after a > while (I think after the cache expiry). > > > > Can you help me understand why this is happening and what I can do to fix? > > > > Here?s my code for reference: > > charset utf-8; > > proxy_cache_valid 200 301 302 1d; > > proxy_redirect off; > > proxy_cache_valid 404 1m; > > proxy_cache_revalidate on; > > proxy_cache_background_update on; > > proxy_cache_lock on; > > proxy_cache_use_stale error timeout invalid_header updating http_500 > http_502 http_503 http_504; > > proxy_cache_bypass $arg_nocache; > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From denis.papathanasiou at gmail.com Sun Oct 7 22:53:26 2018 From: denis.papathanasiou at gmail.com (Denis Papathanasiou) Date: Sun, 7 Oct 2018 18:53:26 -0400 Subject: Simple proxy_pass settings are invalidating/deleting multipart file data sent by POST In-Reply-To: <20181006223507.4nblurztpcugiogs@daoine.org> References: <20181006223507.4nblurztpcugiogs@daoine.org> Message-ID: > > I think that that should probably not make a difference to the upstream > service. > Correct, I did confirm that `proxy_pass` is sending the entire multipart form request. I suspect that you'll have to do some more investigation, to see what is > different on your upstream server when nginx is and is not involved. That > might help point at the problem, and see whether the solution is in > nginx or elsewhere. > Right, it turns out nginx was working correctly, and the real problem is somewhere in the web framework I am using. I've updated the issue on github accordingly: https://github.com/gin-gonic/gin/issues/1582 Thank you for replying so promptly. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Oct 8 14:07:41 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Oct 2018 15:07:41 +0100 Subject: Proxy_pass url with a # In-Reply-To: <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> Message-ID: <20181008140741.a54khid2hr5muujy@daoine.org> On Sun, Oct 07, 2018 at 06:54:54AM -0700, Brian Whalen wrote: Hi there, > In summary I am trying to reverse proxy a group of windows web servers, > where there is a # in the url. I have it mostly working except this. I'm afraid I still don't fully understand what specific problem you are reporting. However, from later in your mail, it appears that the # may not be fully relevant. 
> I dont think the hash is the whole problem though, since if I try > to load some other page after the successful auth it also fails. Can you show one specific request that you make, and show the response that you get, and indicate how it is different from the response that you want? That might make it possible for someone else to repeat the issue, which might make it easier to identify the solution. > I know also that If I just do a straight proxy_pass to the url without the > ldap auth and then type in nginxserver/path#/with/pound that works. Can you describe that in terms of http requests, perhaps made using the "curl" command line client? If you look in the access logs of nginx and the upstream for these requests, do you see the # character appearing at all? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 8 14:10:48 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Oct 2018 15:10:48 +0100 Subject: Simple proxy_pass settings are invalidating/deleting multipart file data sent by POST In-Reply-To: References: <20181006223507.4nblurztpcugiogs@daoine.org> Message-ID: <20181008141048.ammcctyttvuivj2s@daoine.org> On Sun, Oct 07, 2018 at 06:53:26PM -0400, Denis Papathanasiou wrote: Hi there, > Correct, I did confirm that `proxy_pass` is sending the entire multipart > form request. Thanks for following up with the nginx-related info; it'll help the next person with the same problem know where to look to prove that nginx is or is not at fault :-) > Right, it turns out nginx was working correctly, and the real problem is > somewhere in the web framework I am using. Good luck getting the problem fixed. Cheers, f -- Francis Daly francis at daoine.org From brian at brianwhalen.net Mon Oct 8 15:21:08 2018 From: brian at brianwhalen.net (Brian W.) 
Date: Mon, 8 Oct 2018 08:21:08 -0700 Subject: Proxy_pass url with a # In-Reply-To: <20181008140741.a54khid2hr5muujy@daoine.org> References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> <20181008140741.a54khid2hr5muujy@daoine.org> Message-ID: I want to do a successful auth, which I can, and then after the successful auth be reverse proxied to the specified web server, not a simple 302 redirect, but actual reverse proxy. When I replace the hello world line with a get, I just get blank white screen. I can curl -i and get a 200. I can also reverse proxy without the auth and have it work. What I need to figure out is how do I do the reverse proxy to a web server on a different machine and send the user there via reverse proxy after a successful auth. Brian On Mon, Oct 8, 2018, 7:07 AM Francis Daly wrote: > On Sun, Oct 07, 2018 at 06:54:54AM -0700, Brian Whalen wrote: > > Hi there, > > > In summary I am trying to reverse proxy a group of windows web servers, > > where there is a # in the url. I have it mostly working except this. > > I'm afraid I still don't fully understand what specific problem you > are reporting. > > However, from later in your mail, it appears that the # may not be > fully relevant. > > > I dont think the hash is the whole problem though, since if I try > > to load some other page after the successful auth it also fails. > > Can you show one specific request that you make, and show the response > that you get, and indicate how it is different from the response that you > want? That might make it possible for someone else to repeat the issue, > which might make it easier to identify the solution. > > > I know also that If I just do a straight proxy_pass to the url without > the > > ldap auth and then type in nginxserver/path#/with/pound that works. > > Can you describe that in terms of http requests, perhaps made using the > "curl" command line client? 
> > If you look in the access logs of nginx and the upstream for these > requests, do you see the # character appearing at all? > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Oct 8 18:22:51 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Oct 2018 19:22:51 +0100 Subject: Proxy_pass url with a # In-Reply-To: References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> <20181008140741.a54khid2hr5muujy@daoine.org> Message-ID: <20181008182251.4qf3l2jxdkfsyfr2@daoine.org> On Mon, Oct 08, 2018 at 08:21:08AM -0700, Brian W. wrote: Hi there, > I want to do a successful auth, which I can, and then after the successful > auth be reverse proxied to the specified web server, not a simple 302 > redirect, but actual reverse proxy. When I replace the hello world line > with a get, I just get blank white screen. I can curl -i and get a 200. I > can also reverse proxy without the auth and have it work. I'm still unclear about what you mean by the above. I *think* you are saying: when your nginx.conf has server { location / { proxy_pass http://windows-server; } } that everything works; you can "curl -v http://nginx/something" and get the expected response from http://windows-server/something. Am I correct in that much? I also think you are saying: when your nginx.conf has server { location / { auth_request /auth; proxy_pass http://windows-server; } location = /auth { # your ldap-related things that return http 200 when things are good, # and 401 or 403 when things are bad } } then some parts fail in some way -- you request http://nginx/something, and you expect one response but you get one other response -- possibly a http 302 to some other url? 
Am I correct in that? > What I need to figure out is how do I do the reverse proxy to a web server > on a different machine and send the user there via reverse proxy after a > successful auth. In nginx terms, that's auth_request -- http://nginx.org/r/auth_request If we can understand where in the sequence things fail first, maybe it will be clearer what needs to change in order to get things to succeed. Cheers, f -- Francis Daly francis at daoine.org From garbage at gmx.de Mon Oct 8 19:50:36 2018 From: garbage at gmx.de (Gbg) Date: Mon, 08 Oct 2018 21:50:36 +0200 Subject: Best practices for solving location issues Message-ID: During the last days I spent considerable amount of time solving issues with my location definitions. Sometimes I could see attempts to open files in the error.log, sometimes not. I know there is https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/ but wanted to ask if there is a list of best practice troubleshooting advice or even some tool or debugging switch that can help me find out which location gets used for a specific URL or URL pattern -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at brianwhalen.net Mon Oct 8 19:53:36 2018 From: brian at brianwhalen.net (Brian W.) Date: Mon, 8 Oct 2018 12:53:36 -0700 Subject: Proxy_pass url with a # In-Reply-To: <20181008182251.4qf3l2jxdkfsyfr2@daoine.org> References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> <20181008140741.a54khid2hr5muujy@daoine.org> <20181008182251.4qf3l2jxdkfsyfr2@daoine.org> Message-ID: Ok I figured it out. A page I saw mentioned copying the ldap conf as the nginx.conf file and using that and I did. I erroneously thought the port 9000 connection in there was a necessary ldap connect piece and so I didn't change it, until today with your questioning. 
Thanx Brian On Mon, Oct 8, 2018, 11:22 AM Francis Daly wrote: > On Mon, Oct 08, 2018 at 08:21:08AM -0700, Brian W. wrote: > > Hi there, > > > I want to do a successful auth, which I can, and then after the > successful > > auth be reverse proxied to the specified web server, not a simple 302 > > redirect, but actual reverse proxy. When I replace the hello world line > > with a get, I just get blank white screen. I can curl -i and get a 200. I > > can also reverse proxy without the auth and have it work. > > I'm still unclear about what you mean by the above. > > I *think* you are saying: > > when your nginx.conf has > > server { > location / { > proxy_pass http://windows-server; > } > } > > that everything works; you can "curl -v http://nginx/something" and get > the expected response from http://windows-server/something. > > Am I correct in that much? > > > I also think you are saying: > > when your nginx.conf has > > server { > location / { > auth_request /auth; > proxy_pass http://windows-server; > } > location = /auth { > # your ldap-related things that return http 200 when things are good, > # and 401 or 403 when things are bad > } > } > > then some parts fail in some way -- you request http://nginx/something, > and you expect one response but you get one other response -- possibly > a http 302 to some other url? > > Am I correct in that? > > > What I need to figure out is how do I do the reverse proxy to a web > server > > on a different machine and send the user there via reverse proxy after a > > successful auth. > > In nginx terms, that's auth_request -- http://nginx.org/r/auth_request > > If we can understand where in the sequence things fail first, maybe it > will be clearer what needs to change in order to get things to succeed. 
> > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Oct 8 22:12:07 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Oct 2018 23:12:07 +0100 Subject: Proxy_pass url with a # In-Reply-To: References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> <20181008140741.a54khid2hr5muujy@daoine.org> <20181008182251.4qf3l2jxdkfsyfr2@daoine.org> Message-ID: <20181008221207.myplapo6r3b4vxxf@daoine.org> On Mon, Oct 08, 2018 at 12:53:36PM -0700, Brian W. wrote: Hi there, > Ok I figured it out. Good stuff. > A page I saw mentioned copying the ldap conf as the > nginx.conf file and using that and I did. I erroneously thought the port > 9000 connection in there was a necessary ldap connect piece and so I didn't > change it, until today with your questioning. Ah, with hindsight, I think I understand what happened. https://github.com/nginxinc/nginx-ldap-auth/blob/master/nginx-ldap-auth.conf shows the backend as both handling /login and providing the common response, while the ldap-auth daemon is separate. And the login service issues a redirect itself. In your case, I guess that the /login-equivalent (if it exists) is *not* on the same server as the backend common content -- so "proxy_pass http://backend/;" there becomes "proxy_pass http://windows-server/;" in your case. And hopefully the part with the # in the client-side url Just Works for you too. 
Glad you got it working, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Oct 8 22:19:46 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 8 Oct 2018 23:19:46 +0100 Subject: Best practices for solving location issues In-Reply-To: References: Message-ID: <20181008221946.nfqzgjjrnc5xiwov@daoine.org> On Mon, Oct 08, 2018 at 09:50:36PM +0200, Gbg wrote: Hi there, > wanted to ask if there is a list of best practice troubleshooting > advice or even some tool or debugging switch that can help me find out > which location gets used for a specific URL or URL pattern The debug log (http://nginx.org/en/docs/debugging_log.html) holds lots of information. For testing, a limited version like events { debug_connection 127.0.0.12; } means that it will probably only log for the one specific request you make using curl. Make one request, see the log, pick out the bits that say which location{} was chosen. Or adding something like return 200 "Location No. 9\n"; to each location{} block may be a short-term thing to confirm that you configured nginx the way you wanted to configure nginx. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Oct 9 03:58:38 2018 From: nginx-forum at forum.nginx.org (alisampras) Date: Mon, 08 Oct 2018 23:58:38 -0400 Subject: Can't access OWA, OA and ActiveSync Message-ID: <764d1b34aea910f948e7ee9e2b4552a6.NginxMailingListEnglish@forum.nginx.org> Hi All, Business Objective Outside users (users travelling) should be able to access their email through NGINX and it should redirect the connection to my Internal Exchange server for authentication and access: 1. OWA 2. Outlook Anywhere 3. ActiveSync My environment info: Client email access through External Proxy server is mail.example.com, IP 223.153.119.18.
External DNS A record for mail.example.com points to IP 223.153.119.18 Internal Exchange server is EX-01.example.com with internal IP 10.10.10.11 Internal DNS A record for mail.example.com points to 10.10.10.11 So, if you noticed, all the outside users' email clients will look for mail.example.com with external IP 223.153.119.18. Problem: From outside my office, I used my laptop to test. 1. Open browser, https://mail.example.com/owa The authentication pop-up appears and I entered my credentials, but it keeps failing. 2. Outlook Anywhere gets the authentication pop-up but it still keeps failing too. Remark: Remember, the mail.example.com I entered in my browser will point to my external IP 223.153.119.18 Hopefully, by looking at my nginx config file below, an NGINX or Exchange expert can spot my mistake. Below is my NGINX config: worker_processes 1; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name www.example.com; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } access_log logs/www.access.log main; error_log logs/www.error.log; } # HTTPS server # #server { # listen 443 ssl; # server_name localhost; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 5m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} server { listen 443 ssl; server_name mail.example.com autodiscover.example.com; ssl_certificate /etc/ssl/certs/mail.example.com.crt; ssl_certificate_key
/etc/ssl/private/mail.example.com.rsa; ssl_session_timeout 5m; client_max_body_size 3G; tcp_nodelay on; proxy_request_buffering off; proxy_http_version 1.1; proxy_read_timeout 360; proxy_pass_header Date; proxy_pass_header Server; proxy_pass_header Authorization; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass_request_headers on; proxy_set_header Accept-Encoding ""; proxy_buffering off; proxy_set_header Connection "Keep-Alive"; location / { #return 301 https://$host$request_uri; #return 301 https://ex-01.esuria.local/owa; #return 301 https://10.10.11.11/owa; return 301 https://mail.example.com/owa; } location ~* ^/owa { proxy_pass https://EX-01.example.com; } location ~* ^/Microsoft-Server-ActiveSync { proxy_pass https://EX-01.example.com; } location ~* ^/rpc { proxy_pass https://EX-01.example.com; } location ~* ^/ews { proxy_pass https://EX-01.example.com; } location ~* ^/autodiscover { proxy_pass https://EX-01.example.com; } access_log /usr/local/nginx/logs/mail.access.log main; error_log /usr/local/nginx/logs/mail.error.log; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281555,281555#msg-281555 From brian at brianwhalen.net Tue Oct 9 13:22:22 2018 From: brian at brianwhalen.net (Brian W.) Date: Tue, 9 Oct 2018 06:22:22 -0700 Subject: Proxy_pass url with a # In-Reply-To: <20181008221207.myplapo6r3b4vxxf@daoine.org> References: <20181006224124.ed5vli7w5yuj6c3a@daoine.org> <02d5880e-578e-a994-d9d7-78ee1660aa2a@brianwhalen.net> <20181008140741.a54khid2hr5muujy@daoine.org> <20181008182251.4qf3l2jxdkfsyfr2@daoine.org> <20181008221207.myplapo6r3b4vxxf@daoine.org> Message-ID: Yes on both counts, I didn't see that the /logon could also be the backend proxy_pass. I was trying to stick all sorts of python gets in the backend-sample-app.py script which was not right, and the # issue was a non-issue, because there is an auto redirect from / to the desired url.
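[Editor's note: the working pattern from this thread can be sketched as a single nginx config. This is a rough sketch modelled on the nginx-ldap-auth example config linked earlier in the thread, not Brian's actual setup; the upstream name "windows-server" and the daemon address 127.0.0.1:8888 are placeholders.]

```nginx
# Sketch of the auth_request + ldap-auth-daemon pattern (placeholder names).
server {
    listen 80;

    location / {
        auth_request /auth-proxy;          # subrequest: 2xx allows, 401/403 denies
        error_page 401 =200 /login;        # unauthenticated users go to the login form
        proxy_pass http://windows-server/; # the protected backend
    }

    location /login {
        # the backend itself serves the login form and redirects after success
        proxy_pass http://windows-server/login;
        proxy_set_header X-Target $request_uri;
    }

    location = /auth-proxy {
        internal;
        # ldap-auth daemon: returns 200 on success, 401 on failure
        proxy_pass http://127.0.0.1:8888;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
}
```

The key point from the thread: the port-9000 service in the reference config is only the demo backend, not part of the LDAP machinery, so that is the piece to replace with your own proxy_pass target.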
Brian On Mon, Oct 8, 2018, 3:12 PM Francis Daly wrote: > On Mon, Oct 08, 2018 at 12:53:36PM -0700, Brian W. wrote: > > Hi there, > > > Ok I figured it out. > > Good stuff. > > > A page I saw mentioned copying the ldap conf as the > > nginx.conf file and using that and I did. I erroneously thought the port > > 9000 connection in there was a necessary ldap connect piece and so I > didn't > > change it, until today with your questioning. > > Ah, with hindsight, I think I understand what happened. > > > https://github.com/nginxinc/nginx-ldap-auth/blob/master/nginx-ldap-auth.conf > > shows the backend as both handling /login and providing the common > response, while the ldap-auth daemon is separate. > > And the login service issues a redirect itself. > > In your case, I guess that the /login-equivalent (if it exists) is *not* > on the same server as the backend common content -- so "proxy_pass > http://backend/;" there becomes "proxy_pass http://windows-server/;" > in your case. > > And hopefully the part with the # in the client-side url Just Works for > you too. > > Glad you got it working, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Wed Oct 10 00:55:28 2018 From: peter_booth at me.com (Peter Booth) Date: Tue, 09 Oct 2018 20:55:28 -0400 Subject: Nginx cache returning empty response for just the home page In-Reply-To: References: <9E1273B0-2010-4EF6-9564-E3F7E98B9721@me.com> Message-ID: <9BCF9D83-F664-4DA9-ABF2-F81B1DE76CB4@me.com> I think that the script might have been this: https://github.com/perusio/nginx-cache-inspector Also, the nginx debug log shows you *everything* at 100x the level of detail I expected. 
It can be overwhelming, though. When you say that the issue is intermittent, do you mean that you make the same request and get different results? As for listening to production logging, it needn't be an issue > On 7 Oct 2018, at 4:57 PM, Jane Jojo wrote: > > Thanks for this Peter. I'll look at redbot. > > Do you by any chance have the script? My problem is intermittent and I don't know if it's a good idea to actively listen to production logging. > > > > > On Sat, Oct 6, 2018 at 3:21 PM Peter Booth via nginx > wrote: > You need to understand what requests are being received, what responses are being sent and the actual keys being used to write to your cache. > > This means intelligent request logging, possibly use of redbot.org , and examination of your cache. I used to use a script that someone had posted here years ago that would dump cache contents along with cache keys. > > Sent from my iPhone > > On Oct 6, 2018, at 5:51 PM, Jane Jojo > wrote: > >> Here's the issue in action [video]: https://d.pr/v/HmAiK0 >> >> I am using Nginx http caching extensively with the following cache key >> >> proxy_cache_key "$scheme://$host$uri"; >> >> I know this is a cache issue because I can invalidate it with >> >> proxy_cache_bypass $arg_nocache; >> >> Here's a video of that in action: https://d.pr/v/Bj5ey6 >> >> Now, this happens only to the homepage URL and the problem recurs after a while (I think after the cache expiry). >> >> Can you help me understand why this is happening and what I can do to fix?
>> >> Here's my code for reference: >> charset utf-8; >> proxy_cache_valid 200 301 302 1d; >> proxy_redirect off; >> proxy_cache_valid 404 1m; >> proxy_cache_revalidate on; >> proxy_cache_background_update on; >> proxy_cache_lock on; >> proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; >> proxy_cache_bypass $arg_nocache; >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Oct 10 03:00:42 2018 From: nginx-forum at forum.nginx.org (NginxRam) Date: Tue, 09 Oct 2018 23:00:42 -0400 Subject: NGINX Dynamic Reverse Proxy Message-ID: <97b56365ae4b0c266e7059ec968afbb2.NginxMailingListEnglish@forum.nginx.org> Hi, Does NGINX cover the following dynamic reverse proxy features? That is, the existing NGINX reverse proxy should not undergo any changes, and no new NGINX reverse proxy should need to be created, for the following scenarios 1 & 2: 1. Cases where a new back-end service is deployed with a new location and new proxy_pass which did not exist in the current configuration of the NGINX reverse proxy. 2. If a new domain is added to the back-end services, will the reverse proxy also be dynamic, without requiring any changes? Overall, how should the NGINX reverse proxy be designed so that any of the changes listed above (1 and 2) can be absorbed dynamically, and future changes are absorbed by the existing dynamic reverse proxy? If such samples are available, please let me know the URL or details of the syntax. Based on my understanding of the product, a few dynamic reverse proxy behaviors are: A. If a domain name resolves to several known addresses, all of them will be used in a round-robin fashion. B. Forcing SSL/TLS. C.
Example use cases are when your website's domain name has changed, when you want clients to use a canonical URL format (either with or without the www prefix), and when you want to catch and correct common misspellings of your domain name. D. To inform clients that the resource they're requesting now resides at a different location. Is the dynamic reverse proxy limited to A, B and C, and does C not cover 1 and 2? If so, do we have examples for scenarios 1 and 2 to design changes in the reverse proxy for the future? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281560,281560#msg-281560 From rami.s at taboola.com Wed Oct 10 09:16:15 2018 From: rami.s at taboola.com (Rami Stern) Date: Wed, 10 Oct 2018 12:16:15 +0300 Subject: Allowing my web server to be aware of retries In-Reply-To: References: Message-ID: > We use nginx's retry mechanism on timeout and errors, but we'd like to > have some way (custom header \ url param) to make our web server know that > the request its handling is actually a retry. > > We've tried adding $proxy_add_x_forwarded_for and $upstrea_addr but the > first didn't return the correct IP and the second was always empty. > We've also tried using $request_time as a way to heuristically guess if > this is a retry but it seems to be calculated once before the first request > and reused on the following retries. > > Is it possible? > > We've nginx plus license if there's a feature we can use there for that. > > Thanks! > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Wed Oct 10 11:29:12 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Oct 2018 12:29:12 +0100 Subject: Can't access OWA, OA and ActiveSync In-Reply-To: <764d1b34aea910f948e7ee9e2b4552a6.NginxMailingListEnglish@forum.nginx.org> References: <764d1b34aea910f948e7ee9e2b4552a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181010112912.36q453kbcjmsyz2f@daoine.org> On Mon, Oct 08, 2018 at 11:58:38PM -0400, alisampras wrote: Hi there, I have not tried putting anything to do with Exchange behind nginx, so I do not have any tested-config for you. However... > Business Objective > Outside users (users travelling) should be able to access their email > through NGINX and it should redirect the connection to my Internal Exchange > server for authentication and access: > 1. OWA > 2. Outlook Anyway > 3. ActiveSync When I do a Google search for "nginx owa", among the first few results I get are https://docs.nginx.com/nginx/deployment-guides/microsoft-exchange-load-balancing-nginx-plus/ https://www.reddit.com/r/sysadmin/comments/6wq3rj/nginx_reverse_proxy_to_exchange/ https://gist.github.com/taddev/7275873 The general impression I get from reading those is that, for this to work, it depends significantly on the versions of Exchange and friends, and on the configuration of Exchange and friends. Generally: RPC is bad; NTLM authentication is bad; many other things are good. It does appear that there are some versions and configurations of Exchange and friends that stock-nginx will not successfully reverse-proxy; if you must use some of those, then you may be much happier using a different product to do the reverse-proxying. Both "haproxy" and "nginx plus" appear to have some reports of being made to work. 
Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed Oct 10 15:13:25 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Oct 2018 18:13:25 +0300 Subject: Allowing my web server to be aware of retries In-Reply-To: References: Message-ID: <20181010151325.GY56558@mdounin.ru> Hello! On Wed, Oct 10, 2018 at 12:16:15PM +0300, Rami Stern via nginx wrote: > > We use nginx's retry mechanism on timeout and errors, but we'd like to > > have some way (custom header \ url param) to make our web server know that > > the request its handling is actually a retry. > > > > We've tried adding $proxy_add_x_forwarded_for and $upstrea_addr but the > > first didn't return the correct IP and the second was always empty. > > We've also tried using $request_time as a way to heuristically guess if > > this is a retry but it seems to be calculated once before the first request > > and reused on the following retries. > > > > Is it possible? As long as you are using proxy_next_upstream mechanism for retries, what you are trying to do is not possible. The request which is sent to next servers is completely identical to the one sent to the first server nginx tries - or, more precisely, this is the same request, created once and then sent to different upstream servers as needed. If you want to know on the backend if it is handling the first request or it processes a retry request after an error, a working option would be to switch proxy_next_upstream off, and instead retry requests on 502/504 errors using the error_page directive. See http://nginx.org/r/error_page for examples on how to use error_page. 
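[Editor's note: the error_page-based retry that Maxim describes can be sketched like this. It is an illustrative sketch, not a drop-in config: the upstream names first_try/second_try and the X-Retry header are made up for the example.]

```nginx
# Sketch: retry 502/504 via error_page instead of proxy_next_upstream,
# so the second attempt is a new request that can carry a marker header.
upstream first_try  { server backend1.example.com; }
upstream second_try { server backend2.example.com; }

server {
    listen 80;

    location / {
        proxy_next_upstream off;         # don't re-send the identical request
        proxy_pass http://first_try;
        error_page 502 504 = @retry;     # on error, re-dispatch internally
    }

    location @retry {
        proxy_next_upstream off;
        proxy_set_header X-Retry "1";    # backend can now detect the retry
        proxy_pass http://second_try;
    }
}
```

Because @retry generates a fresh proxied request, headers set there really can differ between the first attempt and the retry, which is exactly what proxy_next_upstream cannot do.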
-- Maxim Dounin http://mdounin.ru/ From gfrankliu at gmail.com Wed Oct 10 15:59:15 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 10 Oct 2018 08:59:15 -0700 Subject: sni hostname and request Host header mismatch Message-ID: Is there a way to configure nginx to fail the request if the client sends an SNI header that doesn't match the Host header? curl -k -H "Host: virtual_host2" https://virtual_host1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Oct 10 17:32:58 2018 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 10 Oct 2018 13:32:58 -0400 Subject: sni hostname and request Host header mismatch In-Reply-To: References: Message-ID: <685da6a60ef4d88e114f32dbbcf99489.NginxMailingListEnglish@forum.nginx.org> Via map and the default ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281564,281565#msg-281565 From nginx-forum at forum.nginx.org Wed Oct 10 17:56:47 2018 From: nginx-forum at forum.nginx.org (c4rl) Date: Wed, 10 Oct 2018 13:56:47 -0400 Subject: https auto redirect to specific port Message-ID: <20aed38bfdf50dc9d550e6285c7bf344.NginxMailingListEnglish@forum.nginx.org> Hi experts, I'm not sure if the subject summarizes my question correctly, but I'll try to explain it. I have the configuration below in my server, this server has 2 vhosts: example.com and mydomain.com The first vhost needs to listen on 8080 (https) and as you can see I'm using a redirect from http > https 8080. The second one is listening on 80. My problem is that if a user types https in the address bar instead of http it calls the second vhost. How can I redirect https://example.com to https://example.com:8080 instead of http://mydomain.com when a user types https in the address bar?
server { listen 80; server_name example.com; location '/.well-known/acme-challenge/' { autoindex on; root /var/www/certbot; } location / { if ($scheme = http) { return 301 https://example.com:8080; } } } server { listen 8080 default ssl; server_name example.com; ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot # logs error_log /var/log/nginx/example.com_error.log error; access_log /var/log/nginx/example.com_access.log; location / { index index.html index.htm; autoindex on; proxy_pass http://internalserver:8080; auth_basic "Restricted area"; auth_basic_user_file /srv/example.com/.htpasswd; client_body_temp_path /tmp 1 2; client_body_buffer_size 256k; client_body_in_file_only off; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281566,281566#msg-281566 From francis at daoine.org Wed Oct 10 18:09:13 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 10 Oct 2018 19:09:13 +0100 Subject: https auto redirect to specific port In-Reply-To: <20aed38bfdf50dc9d550e6285c7bf344.NginxMailingListEnglish@forum.nginx.org> References: <20aed38bfdf50dc9d550e6285c7bf344.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181010180913.horyxsc6e6n6iyae@daoine.org> On Wed, Oct 10, 2018 at 01:56:47PM -0400, c4rl wrote: Hi there, if I am reading this right, then what you describe and what the config file you provide say, are different. > The first vhost needs to listen on 8080 (https) and as you can see I'm using > a redirect from http > https 8080. The second one is listening on 80. > > My problem is that if a user type https in the address bar instead of http > it calls the second vhost. The config file says that if someone goes to http://example.com, you will (almost always) redirect them to https://example.com:8080. If someone goes to https://example.com, they will go to port 443.
And you have no listener for port 443 in the config file that you show. > How can redirect the https://example.com to https://example.com:8080 instead > of http://mydomain.com when a user type https in the address bar? Add a 443 listener-server block that does ssl and redirects to https://example.com:8080. Or, probably, change the current 443 listener-server block to redirect to https://example.com:8080 instead of to http://example.com. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Oct 10 18:38:25 2018 From: nginx-forum at forum.nginx.org (c4rl) Date: Wed, 10 Oct 2018 14:38:25 -0400 Subject: https auto redirect to specific port In-Reply-To: <20181010180913.horyxsc6e6n6iyae@daoine.org> References: <20181010180913.horyxsc6e6n6iyae@daoine.org> Message-ID: <86a8dd44f887f71f41f42c3ea5822970.NginxMailingListEnglish@forum.nginx.org> Wow, that's what I needed! It's so simple. Many thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281566,281568#msg-281568 From gfrankliu at gmail.com Thu Oct 11 00:11:40 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 10 Oct 2018 17:11:40 -0700 Subject: sni hostname and request Host header mismatch In-Reply-To: <685da6a60ef4d88e114f32dbbcf99489.NginxMailingListEnglish@forum.nginx.org> References: <685da6a60ef4d88e114f32dbbcf99489.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://hg.nginx.org/nginx/rev/4fbef397c753 indicates the check is only done for the 2-way SSL virtual host. Has everything been added (maybe through a directive) for 1-way SSL since then? On Wed, Oct 10, 2018 at 10:33 AM itpp2012 wrote: > Via map and the default ? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,281564,281565#msg-281565 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Thu Oct 11 07:07:47 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Oct 2018 08:07:47 +0100 Subject: sni hostname and request Host header mismatch In-Reply-To: References: <685da6a60ef4d88e114f32dbbcf99489.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181011070747.rzobakjvru5s5k2m@daoine.org> On Wed, Oct 10, 2018 at 05:11:40PM -0700, Frank Liu wrote: Hi there, > http://hg.nginx.org/nginx/rev/4fbef397c753 indicates the check is only done > for the 2-way SSL virtual host. > Has everything been added (maybe through a directive) for 1-way SSL since > then? $ssl_server_name is the name from SNI. $http_host is the Host: header. $host is the host from the request (which usually should be absent), or the host from the Host: header (which usually should be present), or the (first) server_name of the matched server. I think that there is not an extra directive; but you can manipulate and compare those variables as is appropriate for your situation. Specifically: in an SNI-only server, if $host is not the same as $ssl_server_name, something funny is going on. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Oct 11 14:47:56 2018 From: nginx-forum at forum.nginx.org (wld75) Date: Thu, 11 Oct 2018 10:47:56 -0400 Subject: download logging not captured in access.log Message-ID: <5d40e6e4ec3b8f8ae5a64153811f4b64.NginxMailingListEnglish@forum.nginx.org> Hi, I have a server running for clients to download and upload files; however, I am only able to see the upload request logs, and download requests are not captured in the access.log file. Thanks in advance.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281572,281572#msg-281572 From amdmi3 at amdmi3.ru Fri Oct 12 19:51:01 2018 From: amdmi3 at amdmi3.ru (Dmitry Marakasov) Date: Fri, 12 Oct 2018 22:51:01 +0300 Subject: nginx introduces extra delay when talking to slow backend (probably FreeBSD kevent specific) Message-ID: <20181012195101.GG79805@hades.panopticon> Hi! I've noticed strange behavior of nginx with a slow uwsgi backend (it turned out not to be uwsgi specific, and actually reproduces with any backend) on FreeBSD 11.2. If the backend replies in less than a second, nginx doesn't introduce any additional latency and replies in the same time. However, if the backend replies in no more than around 1.2 seconds, nginx introduces an extra delay and replies in around 2.2 seconds. I've gathered some details here, including the graph of nginx reply time vs. backend reply time: https://github.com/AMDmi3/nginx-bug-demo It reproduces on FreeBSD 11.2 and an around year old -CURRENT, but not the recent -CURRENT, so I suspect this may be FreeBSD specific (probably kevent-related) and already fixed. Still, I'm posting to both the related nginx and FreeBSD lists for this problem to be known. -- Dmitry Marakasov . 55B5 0596 FF1E 8D84 5F56 9510 D35A 80DD F9D2 F77D amdmi3 at amdmi3.ru ..: https://github.com/AMDmi3 From amdmi3 at amdmi3.ru Fri Oct 12 20:31:26 2018 From: amdmi3 at amdmi3.ru (Dmitry Marakasov) Date: Fri, 12 Oct 2018 23:31:26 +0300 Subject: nginx introduces extra delay when talking to slow backend (probably FreeBSD kevent specific) In-Reply-To: <20181012195101.GG79805@hades.panopticon> References: <20181012195101.GG79805@hades.panopticon> Message-ID: <20181012203126.GH79805@hades.panopticon> * Dmitry Marakasov (amdmi3 at amdmi3.ru) wrote: I've gathered ktrace dumps for both cases, and it really looks like the problem is related to kevent. After nginx sends the response back to the client, it calls kevent(2) on the client fd (which is 5).
When there is a bug (FreeBSD 11.2), the following happens: 49365 nginx 3.099362 CALL kevent(0x7,0x8022a2000,0,0x8023005c0,0x200,0x7fffffffe598) 49365 nginx 3.099419 STRU struct kevent[] = { } 49365 nginx 3.194695 STRU struct kevent[] = { { ident=5, filter=EVFILT_WRITE, flags=0x20, fflags=0, data=0xbf88, udata=0x8023633d1 } } 49365 nginx 3.194733 RET kevent 1 ... 49365 nginx 3.194858 CALL kevent(0x7,0x8022a2000,0,0x8023005c0,0x200,0x7fffffffe598) 49365 nginx 3.194875 STRU struct kevent[] = { } 49365 nginx 3.835259 STRU struct kevent[] = { { ident=5, filter=EVFILT_READ, flags=0x8020, fflags=0, data=0, udata=0x802346111 } } 49365 nginx 3.835299 RET kevent 1 E.g. read and write events come separately, both with huge delays. On FreeBSD-CURRENT which doesn't experience the problem, kdump looks this way: 8049 nginx 3.081367 CALL kevent(0x7,0x8012d1b40,0,0x8012da040,0x200,0x7fffffffe598) 8049 nginx 3.081371 STRU struct kevent[] = { } 8049 nginx 3.081492 STRU struct kevent[] = { { ident=5, filter=EVFILT_WRITE, flags=0x20, fflags=0, data=0xbf88, udata=0x801341f11 } { ident=5, filter=EVFILT_READ, flags=0x8020, fflags=0, data=0, udata=0x801324e51 } } 8049 nginx 3.081498 RET kevent 2 E.g. both events come immediately and at the same time. Not sure if that's problem of kevent or something it relies on or the way nginx uses it. -- Dmitry Marakasov . 
55B5 0596 FF1E 8D84 5F56 9510 D35A 80DD F9D2 F77D amdmi3 at amdmi3.ru ..: https://github.com/AMDmi3 From stefan.mueller.83 at gmail.com Fri Oct 12 21:59:48 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Fri, 12 Oct 2018 23:59:48 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> Message-ID: Hello, almost all questions are answered: 1. local DNS Server: use the DHCP server of the router and run a DNS server on the NAS; all unresolved queries are resolved by means of the router's WAN0 DNS settings 2. debug logging 3. php isolation: create a pool per webpage and run them as separate users by creating a php.conf per pool 4. *nginx* this is the only one remaining. How can I isolate the servers? thanks a lot Stefan On 07.10.2018 21:42, Stefan Müller wrote: > good evening, > > in the past we were mailing each other on a daily basis but now it is > silent. Anything alright? > > On 03.10.2018 23:02, Stefan Müller wrote: >> >> thank you again for your quick answer, but I'm getting lost >> >> >>> A typical nginx configuration has only one http {} block. >>> >>> You can look at some examples: >> I'm aware of those and other examples. What confuses me is that you say >> that but also said in the email before that one: >> >>> If you put everything (both the user unix sockets and also the >>> parent proxy server) under the same http{} block then it makes no >>> sense since a single instance of nginx always runs under the same >>> user (and beats the whole user/app isolation).
>> >> so how must the setup be to get the whole user/app isolation >> >> nginx.pid - master process >> \_nginx.conf >> \_http{} - master server >> \_http{} - proxied/app servers >> >> or >> >> nginx.pid - master process >> \_nginx1.conf - master server >> \_http{} - reverse proxy server >> \_nginx2.conf - proxied servers >> \_http{} - proxied/app servers >> >> or? >> >> If it is only one nginx.pid, how do I need to configure it to run >> nginx1.conf and nginx2.conf? >> >> >> >>> Unless by "router" you mean the same Synology box you can't proxy >>> unix sockets over TCP, they work only inside a single server/machine. >> I mean my fibre router and I'm aware that unix sockets work only >> inside a single server/machine. I'll use it only to redirect to the >> DNS server that will run on the Synology box >> >> >>> Also you don't need to forward multiple ports, just 80 and 443 (if >>> ssl) and have name-based virtualhosts. >> >> you got me, I had mistaken that, it got too late last night >> >> >> On 03.10.2018 02:09, Reinis Rozitis wrote: >>>> so all goes in the same nginx.conf but in different http{} block or >>>> do I need one nginx.conf for each, the user unix sockets and also >>>> the parent proxy server? >>> A typical nginx configuration has only one http {} block. >>> >>> You can look at some examples: >>> https://nginx.org/en/docs/http/request_processing.html >>> https://nginx.org/en/docs/http/server_names.html >>> https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/ >>> >>> >>>> You are suggesting to set up virtualhosts that listen on a port to which >>>> traffic is forwarded from the router. I don't want to have multiple >>>> ports open at the router, so I would like to stick with UNIX >>>> Sockets and proxy. >>> Unless by "router" you mean the same Synology box you can't proxy >>> unix sockets over TCP, they work only inside a single server/machine.
>>> >>> Also you don't need to forward multiple ports, just 80 and 443 (if >>> ssl) and have name-based virtualhosts. >>> >>> rr >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.mueller.83 at gmail.com Sat Oct 13 08:16:22 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Sat, 13 Oct 2018 10:16:22 +0200 Subject: [nginx-user] Please add [nginx] to subject Message-ID: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> Hello, Is it possible to configure this list to automatically prepend [nginx] to the subject line? I feel like I miss messages sometimes because I don't realize it's from the list! I subscribe to several different mailing lists and the only mailing list that wasn't already putting the mailing list title in the Subject line was nginx. Sure /nginx at nginx.org/ is fine for actively filtering, or for putting everything into another folder, but every other mailing list I'm on tags the list name in the subject. I'm missing messages because they're too easy to ignore without the word 'nginx' in the subject. If you subscribe to several mailing lists you may want to know which email comes from which list at a glance, so you can more easily prioritize which to read now and which later. Sorting emails to separate folders depending on the source does have some advantages, but personally I find it considerably more convenient to categorize by folder after reading new emails, not before. thank you Stefan -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From m16+nginx at monksofcool.net  Sat Oct 13 09:57:08 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Sat, 13 Oct 2018 11:57:08 +0200
Subject: [nginx-user] Please DO NOT add [nginx] to subject
In-Reply-To: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
Message-ID:

On 13.10.18 10:16, Stefan Müller wrote:

> Sure /nginx at nginx.org/ is fine for actively filtering, or for putting
> everything into another folder, but every other mailing list I'm on
> tags the list name in the subject.

Then you are not subscribed to the right mailing lists. ;-)

Seriously, this discussion is almost as old as mailing lists. Rewriting
the subject adds clutter, messes with some MUAs, wastes screen real
estate on devices with small screens (e.g. smartphones), and breaks DKIM.

Filtering by the existing "List-Id" header works just fine. Also, you
can use address extensions with your Google Mail (and many others),
meaning that you can have your mailing list messages automatically
filtered/labelled properly. Look at my address in this message: it
uses "+nginx" as an extension.

-Ralph

From mdounin at mdounin.ru  Sun Oct 14 05:11:02 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 14 Oct 2018 08:11:02 +0300
Subject: download logging not captured in access.log
In-Reply-To: <5d40e6e4ec3b8f8ae5a64153811f4b64.NginxMailingListEnglish@forum.nginx.org>
References: <5d40e6e4ec3b8f8ae5a64153811f4b64.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20181014051102.GA56558@mdounin.ru>

Hello!

On Thu, Oct 11, 2018 at 10:47:56AM -0400, wld75 wrote:

> I have a server running for clients to download and upload files,
> however I am only able to see the upload request logs; download
> requests are not captured in the access.log file.

In nginx, two things should be kept in mind when looking into access
logs:

1. Requests are logged into the access log once the request is
complete. This may take a while in some cases.

2.
Logging details can be configured in nginx.conf, and may vary widely
from configuration to configuration. In particular, logging can be
disabled for particular servers or locations, or logging can be
configured to use different log files for different requests.

If you don't see download requests being logged, most likely the
problem is that nginx was configured to avoid logging such requests,
or you are looking into the wrong log file.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru  Sun Oct 14 06:40:06 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 14 Oct 2018 09:40:06 +0300
Subject: nginx introduces extra delay when talking to slow backend
 (probably FreeBSD kevent specific)
In-Reply-To: <20181012203126.GH79805@hades.panopticon>
References: <20181012195101.GG79805@hades.panopticon>
 <20181012203126.GH79805@hades.panopticon>
Message-ID: <20181014064006.GB56558@mdounin.ru>

Hello!

On Fri, Oct 12, 2018 at 11:31:26PM +0300, Dmitry Marakasov wrote:

> * Dmitry Marakasov (amdmi3 at amdmi3.ru) wrote:
>
> I've gathered ktrace dumps for both cases, and it really looks like
> the problem is related to kevent. After nginx sends the response back
> to the client, it calls kevent(2) on the client fd (which is 5).
>
> When there is a bug (FreeBSD 11.2), the following happens:
>
> 49365 nginx 3.099362 CALL kevent(0x7,0x8022a2000,0,0x8023005c0,0x200,0x7fffffffe598)
> 49365 nginx 3.099419 STRU struct kevent[] = { }
> 49365 nginx 3.194695 STRU struct kevent[] = { { ident=5, filter=EVFILT_WRITE, flags=0x20, fflags=0, data=0xbf88, udata=0x8023633d1 } }
> 49365 nginx 3.194733 RET kevent 1
> ...
> 49365 nginx 3.194858 CALL kevent(0x7,0x8022a2000,0,0x8023005c0,0x200,0x7fffffffe598)
> 49365 nginx 3.194875 STRU struct kevent[] = { }
> 49365 nginx 3.835259 STRU struct kevent[] = { { ident=5, filter=EVFILT_READ, flags=0x8020, fflags=0, data=0, udata=0x802346111 } }
> 49365 nginx 3.835299 RET kevent 1
>
> I.e. read and write events come separately, both with huge delays.
>
> On FreeBSD-CURRENT, which doesn't experience the problem, kdump looks
> this way:
>
> 8049 nginx 3.081367 CALL kevent(0x7,0x8012d1b40,0,0x8012da040,0x200,0x7fffffffe598)
> 8049 nginx 3.081371 STRU struct kevent[] = { }
> 8049 nginx 3.081492 STRU struct kevent[] = { { ident=5, filter=EVFILT_WRITE, flags=0x20, fflags=0, data=0xbf88, udata=0x801341f11 }
> { ident=5, filter=EVFILT_READ, flags=0x8020, fflags=0, data=0, udata=0x801324e51 } }
> 8049 nginx 3.081498 RET kevent 2
>
> I.e. both events come immediately and at the same time.
>
> Not sure if that's a problem of kevent, or of something it relies on,
> or of the way nginx uses it.

Have you tried looking into what happens in the client? These events
are client-related, and seem to match what the client does as per
tcpdump of traffic between nginx and the client.

Also, at least on my box (FreeBSD 10.4) this issue can be reproduced
with curl, but not with fetch or wget. Seems to be something
curl-specific.

I'm not familiar with curl source code, but it seems to be sitting in
a poll() call without any file descriptors for some reason:

8862 curl 0.013972 GIO fd 3 wrote 78 bytes
"GET / HTTP/1.1\r
Host: localhost:8080\r
User-Agent: curl/7.61.0\r
Accept: */*\r
\r
"
8862 curl 0.013977 RET sendto 78/0x4e
8862 curl 0.013984 CALL poll(0xbfbfe610,0x1,0)
8862 curl 0.013987 RET poll 0
8862 curl 0.013992 CALL poll(0xbfbfe768,0x1,0x1)
8862 curl 0.016042 RET poll 0
8862 curl 0.016118 CALL poll(0xbfbfe610,0x1,0)
8862 curl 0.016137 RET poll 0
8862 curl 0.016197 CALL poll(0xbfbfe768,0x1,0xc5)
8862 curl 0.228557 RET poll 0
8862 curl 0.228605 CALL poll(0xbfbfe610,0x1,0)
8862 curl 0.228617 RET poll 0
8862 curl 0.228631 CALL poll(0xbfbfe768,0x1,0x3e8)
8862 curl 1.246374 RET poll 0
8862 curl 1.246420 CALL poll(0,0,0x3e8)
8862 curl 2.298297 RET poll 0
8862 curl 2.298410 CALL poll(0xbfbfe610,0x1,0)
8862 curl 2.298452 RET poll 1
8862 curl 2.298517 CALL recvfrom(0x3,0x28ca0000,0x19000,0,0,0)
8862 curl 2.298584 GIO fd 3 read 171 bytes
"HTTP/1.1 200 OK\r
Server: nginx/1.15.5\r

Note these lines:

8862 curl 1.246420 CALL poll(0,0,0x3e8)
8862 curl 2.298297 RET poll 0

This is a call without any file descriptors and with a 1000 millisecond
timeout, so it will result in an unconditional 1 second delay. Not sure
why you are seeing the problem with some FreeBSD versions but not
others, but different curl versions or curl compilation flags may
explain things. In my case the curl version is as follows:

$ curl --version
curl 7.61.0 (i386-portbld-freebsd10.4) libcurl/7.61.0 OpenSSL/1.0.1u zlib/1.2.11 nghttp2/1.32.0
Release-Date: 2018-07-11
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy

Upgrading curl to 7.61.1 doesn't fix things.

--
Maxim Dounin
http://mdounin.ru/

From stefan.mueller.83 at gmail.com  Sun Oct 14 19:14:31 2018
From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=)
Date: Sun, 14 Oct 2018 21:14:31 +0200
Subject: [nginx] Please add [nginx] to subject
In-Reply-To:
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
Message-ID:

An HTML attachment was scrubbed...
URL:

From hitman at itglowz.com  Sun Oct 14 19:59:42 2018
From: hitman at itglowz.com (Matthew VK3EVL)
Date: Mon, 15 Oct 2018 06:59:42 +1100
Subject: [nginx] Please add [nginx] to subject
In-Reply-To:
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
Message-ID:

+1. Prepend would make it so much easier.

> On 15 Oct 2018, at 06:14, Stefan Müller wrote:
>
> Hallo,
>
> getting a separate email per mailing list gets messy, as there are
> too many.
>
> Currently I'm subscribed to some mailing lists covering topics
> similar to nginx. If all of them worked without a prepended [list
> title] it would be a mess to distinguish them and focus on the ones
> which are currently important.
> When I open an email app or get email notification, the notification focuses on the subject, so I have to open the mail to know to which list it belongs to. > > A year ago there was the same discussion on digikam-users at kde.org, resulting in a prepend [digiKam-users]. It could be a bit shorter but it enables me to see what email is digikam related at a glance and the remaining space for the subject title gives me enough context to decided if I shall read or delete it. I'm doing that on a mid class phone. > > [nginx] would be enough instead of [nginx-user], as nearly any mailing list do. > > thank you > > Stefan > >> On 13.10.2018 11:57, Ralph Seichter wrote: >>> On 13.10.18 10:16, Stefan M?ller wrote: >>> >>> Sure /nginx at nginx.org/ is fine for actively filtering, or for putting >>> everything into another folder, but every other mailing list I'm on >>> tags the list name in the subject. >> Then you are not subscribed the right mailing lists. ;-) >> >> Seriously, this discussion is almost as old as mailing lists. Rewriting >> the subject adds clutter, messes with some MUAs, wastes screen estate on >> devices with small screens (e.g. smartphones), and breaks DKIM. >> >> Filtering by the existing "List-Id" header works just fine. Also, you >> can use address extensions with your Google Mail (and many others), >> meaning that you can have your mailing list messages automatically >> filtered/labelled properly. Look at my address in this message, it >> uses "+nginx" as an extension. >> >> -Ralph >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan.mueller.83 at gmail.com Sun Oct 14 20:00:18 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Sun, 14 Oct 2018 22:00:18 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> Message-ID: <2ee655bc-c755-9ee5-d595-43c9b6a27691@gmail.com> An HTML attachment was scrubbed... URL: From m16+nginx at monksofcool.net Sun Oct 14 21:57:25 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Sun, 14 Oct 2018 23:57:25 +0200 Subject: Please DO NOT add [nginx] to subject In-Reply-To: References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> Message-ID: On 14.10.18 21:14, Stefan M?ller wrote: > getting a separate email per mailing list gets messy, as there are to > many. It does not get messy at all. I have been using mailing lists since the 1980s, and if you're having trouble you are simply not doing it right. I already explained that you can use mail address extensions with your existing GMail address, but I can repeat: Subscribe to this mailing list as stefan.mueller.83+nginx at gmail.com and you're done. Add a GMail rule "Set label nginx, skip Inbox" if you like. Dovecot can use extensions to deliver into folders automatically. I think that trying to make the list owners solve *your* issues is a sad thing to do. Use labels or folders, that's what they are for. Ideally, use List-xyz headers, because that's the reason they exist. 
-Ralph

From nginx-forum at forum.nginx.org  Mon Oct 15 07:08:42 2018
From: nginx-forum at forum.nginx.org (anish10dec)
Date: Mon, 15 Oct 2018 03:08:42 -0400
Subject: Nginx as LB to redirect/return to upstream server instead of Proxy
Message-ID: <5a4a746c8b7f7f28770df7a33e1981f7.NginxMailingListEnglish@forum.nginx.org>

We want to use nginx as an LB in a way that lets nginx return a 301 or
302 redirect to the client instead of proxying the request to the
backend/upstream servers.

This is required because the server configured as the LB has a limited
throughput of 1 Gbps, while the upstream servers have a throughput of
10 Gbps. We want users to connect directly to an upstream server for
data delivery. The nginx LB server should make sure that all upstreams
are up and functional before returning a 301 or 302 redirect to any of
them.

Example:

http://nginxlb.com/data/download

The nginx LB returns a 301 or 302 redirect to the client pointing at
one of the following URLs (that upstream should be up):

http://upstreamserver1.com/data/download
http://upstreamserver2.com/data/download

Is this possible with:

return 301 http://$upstream_addr/data/download

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281590,281590#msg-281590

From stefan.mueller.83 at gmail.com  Mon Oct 15 10:35:00 2018
From: stefan.mueller.83 at gmail.com (Stefan Mueller)
Date: Mon, 15 Oct 2018 12:35:00 +0200
Subject: [nginx] Please add [nginx] to subject
In-Reply-To:
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
Message-ID:

In answer to Ralph's reply:

That is a very Gmail-specific solution but, thank goodness, not
everyone is using Gmail. For them other solutions are needed. Anyway,
we should not focus on labeling / filtering, which should be possible
in any email application, but I cannot tell how much effort is needed
to set it up; it could be a real hassle.

My main goal is to improve the readability of my inbox. Besides being
subscribed to mailing lists, I am also a member of a dozen Discourse
forums, which can be used as mailing lists as well.
If each of them didn't use some kind of prepended label, my inbox and
the one of many others would be cluttered. Thanks to the label I see
at a glance which list emailed me. The label can be easily
incorporated in the subject; making it part of the sentence is
feasible.

Stefan

Sent from a fair mobile

On Sun, 14 Oct 2018 at 21:59, Matthew VK3EVL wrote:

> +1. Prepend would make it so much easier.
>
> On 15 Oct 2018, at 06:14, Stefan Müller wrote:
>
> Hallo,
>
> getting a separate email per mailing list gets messy, as there are
> too many.
>
> Currently I'm subscribed to some mailing lists covering topics
> similar to nginx. If all of them worked without a prepended [*list
> title*] it would be a mess to distinguish them and focus on the ones
> which are currently important.
> When I open an email app or get an email notification, the
> notification focuses on the subject, so I have to open the mail to
> know which list it belongs to.
>
> A year ago there was the same discussion on digikam-users at kde.org,
> resulting in a prepended [digiKam-users]. It could be a bit shorter,
> but it enables me to see which email is digiKam-related at a glance,
> and the remaining space for the subject title gives me enough context
> to decide if I shall read or delete it. I'm doing that on a mid-class
> phone.
>
> [nginx] would be enough instead of [nginx-user], as nearly every
> mailing list does.
>
> thank you
>
> Stefan
>
> On 13.10.2018 11:57, Ralph Seichter wrote:
>
> On 13.10.18 10:16, Stefan Müller wrote:
>
> Sure /nginx at nginx.org/ is fine for actively filtering, or for putting
> everything into another folder, but every other mailing list I'm on
> tags the list name in the subject.
>
> Then you are not subscribed to the right mailing lists. ;-)
>
> Seriously, this discussion is almost as old as mailing lists. Rewriting
> the subject adds clutter, messes with some MUAs, wastes screen real
> estate on devices with small screens (e.g. smartphones), and breaks DKIM.
> > Filtering by the existing "List-Id" header works just fine. Also, you > can use address extensions with your Google Mail (and many others), > meaning that you can have your mailing list messages automatically > filtered/labelled properly. Look at my address in this message, it > uses "+nginx" as an extension. > > -Ralph > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From gryzli.the.bugbear at gmail.com Mon Oct 15 10:39:26 2018 From: gryzli.the.bugbear at gmail.com (Gryzli Bugbear) Date: Mon, 15 Oct 2018 13:39:26 +0300 Subject: [nginx] Please add [nginx] to subject In-Reply-To: References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> Message-ID: <7d589362-1c2b-1c40-dfcd-e78942008df7@gmail.com> +1 from me as well. I also think this will be of help to many users. On 10/15/18 1:35 PM, Stefan Mueller wrote: > In answer to Ralph's reply. > That is a very Gmail specific solution but, thanks god, not anyone is > using Gmail. > For them it needs other solutions. > Anyway we should not focus on labeling / filtering what should > possible in any email application but I cannot tell how much effort is > needed to make it, it could be a real hassle. > > My main goal is to improve readability of my inbox. Besides being > subscribed to mailing lists also member of a dozen discourse forums, > what can be abused as mailing lists as well. > If each of them wouldn't use some kind of prepend label my inbox and > the the one of many others would be cluttered. > Thanks to the label I see at a glance which list emailed me. 
> The label can be easily incorporated in the subject. Making it part of > the sentence is feasible. > > Stefan > > Sent from a fair mobile > > > Le dim. 14 oct. 2018 ? 21:59, Matthew VK3EVL > a ?crit?: > > +1. Prepend would make it so much easier. > > On 15 Oct 2018, at 06:14, Stefan M?ller > > > wrote: > >> Hallo, >> >> getting a separate email per mailing list gets messy, as there >> are to many. >> >> Currently I'm subscribed to some mailing lists covering similar >> things compared with nginx. If all of them would work without a >> prepend [/list title/] it would be a mess to distinguish them and >> focus on them which are currently important. >> When I open an email app or get email notification, the >> notification focuses on the subject, so I have to open the mail >> to know to which list it belongs to. >> >> A year ago there was the same discussion on digikam-users at kde.org >> , resulting in a prepend >> [digiKam-users]. It could be a bit shorter but it enables me to >> see what email is digikam related at a glance and the remaining >> space for the subject title gives me enough context to decided if >> I shall read or delete it. I'm doing that on a mid class phone. >> >> [nginx] would be enough instead of [nginx-user], as nearly any >> mailing list do. >> >> thank you >> >> Stefan >> >> On 13.10.2018 11:57, Ralph Seichter wrote: >>> On 13.10.18 10:16, Stefan M?ller wrote: >>> >>>> Sure /nginx at nginx.org / is fine for actively filtering, or for putting >>>> everything into another folder, but every other mailing list I'm on >>>> tags the list name in the subject. >>> Then you are not subscribed the right mailing lists. ;-) >>> >>> Seriously, this discussion is almost as old as mailing lists. Rewriting >>> the subject adds clutter, messes with some MUAs, wastes screen estate on >>> devices with small screens (e.g. smartphones), and breaks DKIM. >>> >>> Filtering by the existing "List-Id" header works just fine. 
>>> Also, you
>>> can use address extensions with your Google Mail (and many others),
>>> meaning that you can have your mailing list messages automatically
>>> filtered/labelled properly. Look at my address in this message, it
>>> uses "+nginx" as an extension.
>>>
>>> -Ralph
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx

From r at roze.lv  Mon Oct 15 11:44:22 2018
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 15 Oct 2018 14:44:22 +0300
Subject: [nginx] Please add [nginx] to subject
In-Reply-To:
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
Message-ID: <000c01d4647c$6ac45100$404cf300$@roze.lv>

> That is a very Gmail-specific solution but, thank goodness, not
> everyone is using Gmail.

It's not a Gmail-specific option. Most MTAs (if not all), like for
example Postfix (recipient_delimiter) or exim (local_part_suffix),
support the 'user+tag at ..' feature.

> My main goal is to improve readability of my inbox.

Is the subject the only way?
rr From m16+nginx at monksofcool.net Mon Oct 15 12:29:15 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Mon, 15 Oct 2018 14:29:15 +0200 Subject: Please DO NOT add [nginx] to subject In-Reply-To: References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> Message-ID: <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net> On 15.10.18 12:35, Stefan Mueller wrote: > In answer to Ralph's reply. Why not reply to my message then? > That is a very Gmail specific solution but, thanks god, not anyone is > using Gmail. For them it needs other solutions. No, it does not. Besides, are you trying to spin this to you looking out for others? :-) Address extensions are not GMail specific. Postfix, Sendmail, Qmail, maildrop, Dovecot, Courier - the list goes on - all support extensions. E-Mail existed long before Google Mail (which I don't use). > Anyway we should not focus on labeling / filtering what should > possible in any email application but I cannot tell how much effort > is needed to make it, it could be a real hassle. On the contrary, we should focus on filtering, and you indeed cannot tell, and it is no hassle. You're making more baseless assumptions here. > My main goal is to improve readability of my inbox. There we go; it is about your preferences, not some greater good. Use folders/labels to keep your Inbox clean, use List-Id and similar headers, and it all works. Frankly, existing mailing lists conventions have proven useful for decades, and I don't care about your Inbox in the greater scheme of things. 
;-)

-Ralph

From stefan.mueller.83 at gmail.com  Mon Oct 15 12:59:18 2018
From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=)
Date: Mon, 15 Oct 2018 14:59:18 +0200
Subject: Please DO NOT add [nginx] to subject
In-Reply-To: <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net>
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
 <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net>
Message-ID: <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com>

> I don't care about your Inbox in the greater
> scheme of things

but it seems others do, or at least agree with me

On 15.10.2018 14:29, Ralph Seichter wrote:
> On 15.10.18 12:35, Stefan Mueller wrote:
>
>> In answer to Ralph's reply.
> Why not reply to my message then?
>
>> That is a very Gmail specific solution but, thanks god, not anyone is
>> using Gmail. For them it needs other solutions.
> No, it does not. Besides, are you trying to spin this to you looking out
> for others? :-)
>
> Address extensions are not GMail specific. Postfix, Sendmail, Qmail,
> maildrop, Dovecot, Courier - the list goes on - all support extensions.
> E-Mail existed long before Google Mail (which I don't use).
>
>> Anyway we should not focus on labeling / filtering what should
>> possible in any email application but I cannot tell how much effort
>> is needed to make it, it could be a real hassle.
> On the contrary, we should focus on filtering, and you indeed cannot
> tell, and it is no hassle. You're making more baseless assumptions here.
>
>> My main goal is to improve readability of my inbox.
> There we go; it is about your preferences, not some greater good. Use
> folders/labels to keep your Inbox clean, use List-Id and similar headers,
> and it all works. Frankly, existing mailing list conventions have proven
> useful for decades, and I don't care about your Inbox in the greater
> scheme of things.
;-) > > -Ralph > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From al-nginx at none.at Mon Oct 15 13:15:39 2018 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 15 Oct 2018 15:15:39 +0200 Subject: Nginx as LB to redirect/return to upstream server instead of Proxy In-Reply-To: <5a4a746c8b7f7f28770df7a33e1981f7.NginxMailingListEnglish@forum.nginx.org> References: <5a4a746c8b7f7f28770df7a33e1981f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi. Am 15.10.2018 um 09:08 schrieb anish10dec: > We want to use Nginx as LB in a way so that Nginx can return 301 or 302 > redirect to client instead of Proxying request to backend/upstream servers. > > It is required as Server which is configured as LB is having limited > throughput of 1 Gbps while upstream servers are having throughput of 10Gbps > . > > We want users to directly connect to Upstream Server for Data delivery. > Nginx LB Server to make sure that all upstream are up and functional before > giving 301 or 302 redirect to any of upstream server > > Example: > > http://nginxlb.com/data/download > > Nginx LB Returns Redirected URL to Client 301 or 302 ( That upstream should > be up) > > http://upstreamserver1.com/data/download > http://upstreamserver2.com/data/download > > Is this possible by : > > return 301 http://$upstream_addr/data/download I would do this with maps, rewrite and upstream variables. 
https://nginx.org/en/docs/http/ngx_http_map_module.html
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables
https://nginx.org/en/docs/http/ngx_http_rewrite_module.html

Untested:

###
map $upstream_addr $direct_domain {
    default nginxlb.com;
    IP1     upstreamserver1.com;
    IP2     upstreamserver2.com;
}

return 301 http://$direct_domain/data/download;
###

Regards
Aleks

> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281590,281590#msg-281590
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

From m16+nginx at monksofcool.net  Mon Oct 15 13:16:56 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Mon, 15 Oct 2018 15:16:56 +0200
Subject: Please DO NOT add [nginx] to subject
In-Reply-To: <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com>
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
 <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net>
 <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com>
Message-ID:

On 15.10.18 14:59, Stefan Müller wrote:

> but it seems others do or at least agree with me

So what if "others" agree with you? People agree with me as well; check
the existing discussions about this issue.

If you challenge conventions that have been around for good reason, for
longer than some mailing list subscribers have lived on this fair
planet, you had better make a damn good case of it, based on evidence
and not on your limited personal experience in this particular matter
(which is not something to be ashamed of, just a learning opportunity).
You have no case, so why not accept the advice you have been offered?
-Ralph

From stefan.mueller.83 at gmail.com  Mon Oct 15 13:32:37 2018
From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=)
Date: Mon, 15 Oct 2018 15:32:37 +0200
Subject: Please DO NOT add [nginx] to subject
In-Reply-To:
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
 <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net>
 <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com>
Message-ID: <6ea82327-f564-a169-6176-bac48545e86e@gmail.com>

> why not accept the advice you have been offered?

I read up on email extensions on Gizmodo and Wikipedia, and I'm very
familiar with filtering and labeling (all list-based mails are labeled
automatically), but I still believe that adding [nginx] would make the
situation more comfortable.

> You have no case

My case is that when I open my email application on a phone or desktop
occasionally throughout the day, I want to see what I've got today at
a glance without the need to click / tab into sub-folders. I open the
app, see what came in, and decide if it is important or can be done
later. In order to make this decision quicker, a label in the subject
would improve things enormously, as you focus only on the subject
during such actions.

Anyone else want to share her/his thoughts besides me and Ralph?

https://en.wikipedia.org/wiki/Email_address#Subaddressing

On 15.10.2018 15:16, Ralph Seichter wrote:
> On 15.10.18 14:59, Stefan Müller wrote:
>
>> but it seems others do or at least agree with me
> So what if "others" agree with you? People agree with me as well, check
> existing discussions about this issue.
>
> If you challenge conventions that have been around for good reason, for
> longer than some mailing list subscribers lived on this fair planet, you
> better make a damn good case of it, based on evidence and not on your
> limited personal experience in this particular matter (which is not
> something to be ashamed of, just a learning opportunity).
> You have no case, so why not accept the advice you have been offered?
>
> -Ralph
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From m16+nginx at monksofcool.net  Mon Oct 15 13:48:34 2018
From: m16+nginx at monksofcool.net (Ralph Seichter)
Date: Mon, 15 Oct 2018 15:48:34 +0200
Subject: Please DO NOT add [nginx] to subject
In-Reply-To: <6ea82327-f564-a169-6176-bac48545e86e@gmail.com>
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
 <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net>
 <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com>
 <6ea82327-f564-a169-6176-bac48545e86e@gmail.com>
Message-ID: <7cde39e5-614c-5ed3-7157-7a3d4fae69f3@monksofcool.net>

On 15.10.18 15:32, Stefan Müller wrote:

> I read up on email extension [...]

Thanks, I appreciate that.

> I still believe that adding [nginx] would make the situation more
> comfortable.

And I firmly believe the disadvantages of subject rewriting outweigh
the perceived "comfort", based on providing email-related services for
more than 30 years (and counting).

> when I open my email application on a phone or desktop occasionally
> throughout the day I want to see what I've got today at a glance
> without the need clicking / tabbing into sub folders.

Yeah, I got that, but I personally don't consider you not wanting to
perform an extra click a valid reason at all. I am not trying to be
dismissive in any way, I simply don't care. All you need to organise
your mailing list subscriptions is already available. It is *your*
problem, not anybody else's, to make use of that data/process.
-Ralph

From lucas at lucasrolff.com  Mon Oct 15 13:55:20 2018
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Mon, 15 Oct 2018 13:55:20 +0000
Subject: Please DO NOT add [nginx] to subject
In-Reply-To: <6ea82327-f564-a169-6176-bac48545e86e@gmail.com>
References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com>
 <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net>
 <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com>
 <6ea82327-f564-a169-6176-bac48545e86e@gmail.com>
Message-ID:

Might be important to mention that services such as Exchange don't
support subaddressing, so it's a bit harder there :)

With that said, I'd love [nginx] in the header, regardless of whether
it breaks DKIM or similar; I have mailing lists whitelisted anyway for
that exact reason, because there are already plenty of lists that
break DKIM or SPF for that matter.

In my case, I don't filter on mailing lists and put them in specific
directories; they all end up in my inbox, and I click through them if
the subject interests me - and I see the email of the list, and know
which list it's from. Additionally, I know most common names that post
on the mailing list, so it's easy to see which list it comes from ^_^

Get Outlook for iOS

________________________________
From: 20306775700n behalf of
Sent: Monday, October 15, 2018 3:32 PM
To: nginx at nginx.org; Ralph Seichter
Subject: Re: Please DO NOT add [nginx] to subject

why not accept the advice you have been offered?

I read up on email extensions on Gizmodo and Wikipedia, and I'm very
familiar with filtering and labeling (all list-based mails are labeled
automatically), but I still believe that adding [nginx] would make the
situation more comfortable.

You have no case

My case is that when I open my email application on a phone or desktop
occasionally throughout the day, I want to see what I've got today at
a glance without the need to click / tab into sub-folders. I open the
app, see what came in, and decide if it is important or can be done
later.
In order to make this decision quicker, a label in the subject would improve it enormously, as you focus only on the subject during such actions. Anyone else want to share her/his thoughts besides me and Ralph? https://en.wikipedia.org/wiki/Email_address#Subaddressing On 15.10.2018 15:16, Ralph Seichter wrote: On 15.10.18 14:59, Stefan Müller wrote: but it seems others do or at least agree with me So what if "others" agree with you? People agree with me as well, check existing discussions about this issue. If you challenge conventions that have been around for good reason, for longer than some mailing list subscribers have lived on this fair planet, you had better make a damn good case of it, based on evidence and not on your limited personal experience in this particular matter (which is not something to be ashamed of, just a learning opportunity). You have no case, so why not accept the advice you have been offered? -Ralph _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From m16+nginx at monksofcool.net Mon Oct 15 14:29:04 2018 From: m16+nginx at monksofcool.net (Ralph Seichter) Date: Mon, 15 Oct 2018 16:29:04 +0200 Subject: Please DO NOT add [nginx] to subject In-Reply-To: References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net> <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com> <6ea82327-f564-a169-6176-bac48545e86e@gmail.com> Message-ID: <0f3d62cd-8228-c86e-a2a0-67dbe91b88e2@monksofcool.net> On 15.10.18 15:55, Lucas Rolff wrote: > Might be important to mention that services such as Exchange doesn't support > subaddressing, so it's a bit harder there :) Well, Microsoft... Server-side filtering is simple enough, e.g. an "Inbox rule" based on the List-Id header containing the string .
> Additionally, I know most common names that post on the mailing list, so it's > easy to see which list it comes from ^_^ Yeah, some people tend to stand out (cough). ;-) > Get Outlook for iOS [...] Thanks, but no thanks. :-) -Ralph From mdounin at mdounin.ru Mon Oct 15 14:45:39 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Oct 2018 17:45:39 +0300 Subject: Nginx as LB to redirect/return to upstream server instead of Proxy In-Reply-To: References: <5a4a746c8b7f7f28770df7a33e1981f7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181015144539.GD56558@mdounin.ru> Hello! On Mon, Oct 15, 2018 at 03:15:39PM +0200, Aleksandar Lazic wrote: > Hi. > > On 15.10.2018 at 09:08, anish10dec wrote: > > We want to use Nginx as LB in a way so that Nginx can return a 301 or 302 > > redirect to the client instead of proxying the request to backend/upstream servers. > > > > It is required because the server configured as LB has a limited > > throughput of 1 Gbps, while the upstream servers have a throughput of 10 Gbps. > > > > We want users to connect directly to the upstream server for data delivery. > > The Nginx LB server should make sure that all upstreams are up and functional before > > giving a 301 or 302 redirect to any of the upstream servers. > > > > Example: > > > > http://nginxlb.com/data/download > > > > Nginx LB returns a redirected URL to the client, 301 or 302 (that upstream should > > be up): > > > > http://upstreamserver1.com/data/download > > http://upstreamserver2.com/data/download > > > > Is this possible by: > > > > return 301 http://$upstream_addr/data/download > > I would do this with maps, rewrite and upstream variables.
> > https://nginx.org/en/docs/http/ngx_http_map_module.html > https://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables > https://nginx.org/en/docs/http/ngx_http_rewrite_module.html > > Untested: > ### > map $upstream_addr $direct_domain { > default nginxlb.com; > IP1 upstreamserver1.com; > IP2 upstreamserver2.com; > } > > return 301 http://$direct_domain/data/download > ### This won't work, because the $upstream_addr variable won't be available without an actual upstream connection. To balance traffic to multiple servers without proxying, the split_clients module can be used. For example, something like this should work, balancing based on the client's address (untested though): split_clients $remote_addr $domain { 50% upstreamserver1.com; * upstreamserver2.com; } return 302 http://$domain/data/download; Docs are available here: http://nginx.org/en/docs/http/ngx_http_split_clients_module.html Note though that this won't do any checks on whether the upstream server is up or not. If checks are needed, a better approach might be to use some more sophisticated logic to return such redirects. The simplest solution would be to actually proxy requests to the upstream servers, and let these servers return actual redirects to themselves. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Oct 15 18:23:09 2018 From: nginx-forum at forum.nginx.org (benzimmer) Date: Mon, 15 Oct 2018 14:23:09 -0400 Subject: Proxying setup delivering wrong cache entry in some edge cases Message-ID: We've been using Nginx as a caching proxy for quite a while in different scenarios now. For the past few weeks, and especially in the last couple of days, we keep encountering strange behaviour in one of our scenarios, leading to wrong content being delivered. In that case we use Nginx as a caching proxy for a bunch of subdomains on a kind of multitenancy application. We established the setup 4 months ago and never had any problems until recently.
For example, a request to https://test.example.org/bla/fasel would deliver the content for https://foo.example.org/bla/fasel. So basically it delivers content for the wrong subdomain. Those occasions are very, very rare and totally random with regard to the subdomain from which the content gets delivered. We currently use openresty 1.13.6.1. Our config is quite large, so I will put it into a gist if that's OK: https://gist.github.com/benzimmer/a4ee7b43ae4ade24a570301dfd0c12c2 This seems to be working fine for the most part, but every now and then we see the described behaviour without being able to consistently reproduce it. If anyone has any clue why this might be happening, we'd be very grateful. If you need any additional information, please feel free to ask away! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281606,281606#msg-281606 From stefan.mueller.83 at gmail.com Mon Oct 15 18:29:37 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Mon, 15 Oct 2018 20:29:37 +0200 Subject: Please DO NOT add [nginx] to subject In-Reply-To: <0f3d62cd-8228-c86e-a2a0-67dbe91b88e2@monksofcool.net> References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net> <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com> <6ea82327-f564-a169-6176-bac48545e86e@gmail.com> <0f3d62cd-8228-c86e-a2a0-67dbe91b88e2@monksofcool.net> Message-ID: We tried our best. Anyone else willing to risk burning their fingers? On 15.10.2018 16:29, Ralph Seichter wrote: > On 15.10.18 15:55, Lucas Rolff wrote: > >> Might be important to mention that services such as Exchange doesn't support >> subaddressing, so it's a bit harder there :) > Well, Microsoft... Server-side filtering is simple enough, e.g. an "Inbox rule" > based on the List-Id header containing the string .
> >> Additionally, I know most common names that post on the mailing list, so it's > easy to see which list it comes from ^_^ Yeah, some people tend to stand out (cough). ;-) > >> Get Outlook for iOS [...] > Thanks, but no thanks. :-) > > -Ralph > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From rpaprocki at fearnothingproductions.net Mon Oct 15 18:31:03 2018 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 15 Oct 2018 11:31:03 -0700 Subject: Please DO NOT add [nginx] to subject In-Reply-To: References: <66ff7f13-cb4f-a581-c576-f566b3b595df@gmail.com> <9b714623-811a-e96b-34dd-8e6e905bc1ac@monksofcool.net> <6ee56178-ee93-72dd-19a1-5b6530c324b9@gmail.com> <6ea82327-f564-a169-6176-bac48545e86e@gmail.com> <0f3d62cd-8228-c86e-a2a0-67dbe91b88e2@monksofcool.net> Message-ID: I think this thread has run its course. Let's please move this discussion off this mailing list. On Mon, Oct 15, 2018 at 11:29 AM Stefan Müller wrote: > We tried our best. Anyone else willing to risk burning their fingers? > > On 15.10.2018 16:29, Ralph Seichter wrote: > > On 15.10.18 15:55, Lucas Rolff wrote: > > > >> Might be important to mention that services such as Exchange doesn't > support > >> subaddressing, so it's a bit harder there :) > > Well, Microsoft... Server-side filtering is simple enough, e.g. an > "Inbox rule" > > based on the List-Id header containing the string . > > > >> Additionally, I know most common names that post on the mailing list, > so it's > >> easy to see which list it comes from ^_^ > > Yeah, some people tend to stand out (cough). ;-) > > > >> Get Outlook for iOS [...] > > Thanks, but no thanks.
:-) > > -Ralph > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon Oct 15 20:23:59 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Oct 2018 21:23:59 +0100 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> Message-ID: <20181015202359.afvumr777zbbeokb@daoine.org> On Fri, Oct 12, 2018 at 11:59:48PM +0200, Stefan Müller wrote: Hi there, I've read over this mail thread, and I confess that I'm quite confused as to what your remaining specific nginx question is. If it's not too awkward, could you repeat just exactly what you now wish to know? It may make it easier for others to give a useful direct response. > 4. *nginx* > this is the only one remaining. How can I isolate the servers? I'm not sure what you mean by "isolate the servers" that was not already answered. ("already answered" was approximately: for each server, run one nginx as user this-server-user, listening on a unix domain socket. Then run one nginx initially as user root, which does proxy_pass to the appropriate unix-domain-socket-server.) Have I missed something; or are you asking how to do it; or are you asking why to do it?
Thanks, f -- Francis Daly francis at daoine.org From stefan.mueller.83 at gmail.com Tue Oct 16 07:20:33 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Tue, 16 Oct 2018 09:20:33 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <20181015202359.afvumr777zbbeokb@daoine.org> References: <000001d4564a$e0981960$a1c84c20$@roze.lv> <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> Message-ID: <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> Good morning Francis, thank you for coming back on this. In the very beginning Reinis wrote: > Well you configure each individual nginx to listen (https://nginx.org/en/docs/http/ngx_http_core_module.html#listen ) on a unix socket: > > Config on nginx1: > .. > events { } > http { > server { > listen unix:/some/path/user1.sock; > .. > } > } > > Config on nginx2: > .. > server { > listen unix:/some/path/user2.sock; > ... > } > > > And then on the main server you configure the per-user virtualhosts to be proxied to the particular socket: > > server { > listen 80; > server_name user1.domain; > location / { > proxy_pass http://unix:/some/path/user1.sock; > } > } > server { > listen 80; > server_name user2.domain; > location / { > proxy_pass http://unix:/some/path/user2.sock; > } > } so I asked > that is all put in the same http{} block. and he answered > If you put everything (both the user unix sockets and also the > parent proxy server) under the same http{} block then it makes no > sense since a single instance of nginx always runs under the same > user (and beats the whole user/app isolation). so I wonder if I need to work with multiple .conf files, or shall I put multiple http{} blocks in the general configuration of nginx, /etc/nginx/nginx.conf?
I assume that Reinis told me indirectly to run multiple instances of nginx, but I haven't yet understood how. There is the master process, presumably taking care of the proxy server, but how do I start the instance (if I need to work with instances) per /virtual host/? Stefan On 15.10.2018 22:23, Francis Daly wrote: > On Fri, Oct 12, 2018 at 11:59:48PM +0200, Stefan Müller wrote: > > Hi there, > > I've read over this mail thread, and I confess that I'm quite confused > as to what your remaining specific nginx question is. > > If it's not too awkward, could you repeat just exactly what you now wish > to know? > > It may make it easier for others to give a useful direct response. > >> 4. *nginx* >> this is the only one remaining. How can I isolate the servers? > I'm not sure what you mean by "isolate the servers" that was not > already answered. > > ("already answered" was approximately: for each server, run one nginx as > user this-server-user, listening on a unix domain socket. Then run one > nginx initially as user root, which does proxy_pass to the appropriate > unix-domain-socket-server.) > > Have I missed something; or are you asking how to do it; or are you > asking why to do it? > > Thanks, > > f -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Tue Oct 16 07:56:22 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 16 Oct 2018 08:56:22 +0100 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> References: <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> Message-ID: <20181016075622.tr245ocpe3cuypgd@daoine.org> On Tue, Oct 16, 2018 at 09:20:33AM +0200, Stefan Müller wrote: Hi there, > so I wonder if I need to work with multiple .conf files, or shall I put > multiple http{} blocks in the general configuration of nginx > /etc/nginx/nginx.conf? I assume that Reinis told me indirectly to run > multiple instances of nginx, but I haven't yet understood how. There is the > master process, presumably taking care of the proxy server, but how do I > start the instance (if I need to work with instances) per /virtual host/? In this design, you run multiple instances of nginx. That is: multiple individual system processes that are totally independent of each other. So: nginx-user1.conf includes something like http { server { listen unix:/some/path/user1.sock; } } and refers to log files and tmp files and a pid file that user1 can write, and to a document root that user1 can read (if necessary), and you run the command "/usr/sbin/nginx -c nginx-user1.conf" as system user user1. And then you do the same for user2, user3, etc. And then you have one other "nginx-main.conf" which includes "listen 443 ssl" and includes proxy_pass to the individual unix:/some/path/userN.sock "backend" servers; and you run the command "/usr/sbin/nginx -c nginx-main.conf" as user root. Note: the actual file names involved are irrelevant.
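A slightly fuller sketch of one such per-user file, following the layout Francis describes; every path, user name and socket location below is illustrative, not taken from the thread:

```nginx
# nginx-user1.conf -- start with: /usr/sbin/nginx -c /home/user1/nginx-user1.conf
# Started as user1, so every path here must be writable (or readable) by user1.
pid        /home/user1/run/nginx.pid;
error_log  /home/user1/log/error.log;

events { }

http {
    access_log /home/user1/log/access.log;

    server {
        # No TCP port: only the root-started front nginx
        # proxy_passes requests to this socket.
        listen unix:/some/path/user1.sock;
        root   /home/user1/www;
    }
}
```

The front instance's nginx-main.conf would then contain one server{} per user, each with a proxy_pass to http://unix:/some/path/userN.sock, as in Reinis's earlier sketch.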
All that matters is that when the nginx binary is run with a "-c" option, it can read the named file which contains the config that this instance will use. If the nginx process starts as user root, it will change itself to run as the other configured user-id as soon as it can; if it starts as non-root, it will not. In the above design, all of the user-specific backend nginx servers run as non-root. And - the term "virtual host" usually refers to different server{} blocks within the configuration of a single nginx instance. You (generally) don't care about those -- the nginx binary will start the appropriate child system-level processes to deal with the configuration that it was given. If you are running multiple nginx system-level processes, each one has its own idea of the virtual hosts from its configuration. With the above design, all of the "user" nginx instances have just one server{} block, while the "root" nginx instance probably has multiple server{} blocks. Good luck with it, f -- Francis Daly francis at daoine.org From r at roze.lv Tue Oct 16 10:02:34 2018 From: r at roze.lv (Reinis Rozitis) Date: Tue, 16 Oct 2018 13:02:34 +0300 Subject: Proxying setup delivering wrong cache entry in some edge cases In-Reply-To: References: Message-ID: <000201d46537$5c2e1e80$148a5b80$@roze.lv> > For example, a request to https://test.example.org/bla/fasel would deliver > the content for https://foo.example.org/bla/fasel. So basically it delivers > content for the wrong subdomain. Those occasions are very, very rare and > totally random in regards to the subdomain from which the content gets > delivered. Your configuration has: proxy_cache_key $ae$scheme://$http_host$request_uri; and proxy_cache_use_stale error timeout invalid_header http_502; I would start with either disabling the proxy_cache_use_stale and/or inspecting the $http_host. 
If I'm not wrong, $http_host doesn't get the same treatment as $host, which also comes from the user's request headers but, if not present or empty, gets set to the $server_name matching the request. So if a client doesn't send the 'Host:' header, there might be cache entries keyed basically just on the $request_uri, which are served in some specific cases. Nginx always stores and returns whatever the backend sent as a response. Since you change the Host header: proxy_set_header Host $upstream_endpoint; proxy_set_header X-Forwarded-Host $http_host; If possible you could add some debug headers on the backends - to see if a request actually landing on the nginx proxy is correctly passed on to the backend (like you could again be missing the X-Forwarded-Host header). Also you can do a simple MD5 of the problematic request (like md5 https://foo.example.org/bla/fasel -> 4DFDF87BB2FD82629ACB91BB1B1B2A1C (obviously for the gzipped content you have to add 'gzip' at the beginning) and then check if the cache file in /opt/example-org-cache/c/a1/4dfdf87bb2fd82629acb91bb1b1b2a1c actually exists and what its content is. rr From nginx-forum at forum.nginx.org Tue Oct 16 13:01:16 2018 From: nginx-forum at forum.nginx.org (shivramg94) Date: Tue, 16 Oct 2018 09:01:16 -0400 Subject: Idle Timeout during the HTTP request response phase Message-ID: <9dc82a62ddf1ef123ef6328b2cbf7d3f.NginxMailingListEnglish@forum.nginx.org> Hi, Is there any directive available in Nginx to set a timeout between two successive receive or two successive send network input/output operations during the HTTP request response phase?
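As an aside on the MD5 check Reinis describes above: nginx names a cache file after the hex MD5 of the proxy_cache_key, nested into subdirectories according to the levels parameter of proxy_cache_path. A small Python sketch of that mapping (the levels=1:2 layout is an assumption inferred from the /c/a1/... path shown above):

```python
import hashlib

def cache_file_path(key: str, levels=(1, 2)) -> str:
    """Map a proxy_cache_key string to its relative cache file path.

    nginx hashes the key with MD5 and builds the directory levels from
    the trailing characters of the hash, per the 'levels' setting.
    """
    h = hashlib.md5(key.encode()).hexdigest()
    dirs, pos = [], len(h)
    for width in levels:            # levels=1:2 -> 1 char, then 2 chars
        dirs.append(h[pos - width:pos])
        pos -= width
    return "/".join(dirs + [h])

# The example key from the thread ($ae empty, i.e. the non-gzip variant):
print(cache_file_path("https://foo.example.org/bla/fasel"))
```

For a gzip-compressed variant the key would start with "gzip", per the proxy_cache_key shown earlier in the thread.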
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281614,281614#msg-281614 From r at roze.lv Tue Oct 16 13:16:35 2018 From: r at roze.lv (Reinis Rozitis) Date: Tue, 16 Oct 2018 16:16:35 +0300 Subject: Idle Timeout during the HTTP request response phase In-Reply-To: <9dc82a62ddf1ef123ef6328b2cbf7d3f.NginxMailingListEnglish@forum.nginx.org> References: <9dc82a62ddf1ef123ef6328b2cbf7d3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000d01d46552$770dbe20$65293a60$@roze.lv> > Is there any directive available in Nginx to set a timeout between two > successive receive or two successive send network input/output operations > during the HTTP request response phase? For send: http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout for read: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout (just for http headers) http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout p.s. for backend/upstream connections there are different directives rr From dennis.obrycki at gmail.com Tue Oct 16 17:43:23 2018 From: dennis.obrycki at gmail.com (dennis obrycki) Date: Tue, 16 Oct 2018 13:43:23 -0400 Subject: Ingress Controller Issue Message-ID: We are using the community edition of NginX Ingress. When autoscaling is in process, we are receiving 502 errors for a period of time. After a while (2-3 minutes) the 502 errors subside and processing continues. Has anyone encountered a similar issue? Any thoughts on what could be causing this? Thanks in Advance Dennis -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefan.mueller.83 at gmail.com Tue Oct 16 19:23:33 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Tue, 16 Oct 2018 21:23:33 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <20181016075622.tr245ocpe3cuypgd@daoine.org> References: <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> <20181016075622.tr245ocpe3cuypgd@daoine.org> Message-ID: Hallo Francis, thank you for the liberating response :). Unfortunately that raises some questions: 1. documentation Is there any additional documentation for the -c switch? I find only: 1. http://nginx.org/en/docs/switches.html 2. https://stackoverflow.com/questions/19910042/locate-the-nginx-conf-file-my-nginx-is-actually-using but none of them says that it will start an independent instance of nginx. 2. command line I assume that the command line parameters refer to a single instance environment. How do I use the command line parameters for a specific instance? Is it like this nginx -V "pid /var/run/nginx-user1.pid"? 3. root and non-root only the master / proxy server instance needs root access in order to bind to ports <1024 and change its user-id to the one defined in the "user" directive in the main context of its .conf file. The other / backend instances don't have to be started as root as they don't need to bind to ports; they communicate via UNIX sockets, so all permissions are managed by the user account management. That is the same as what you said, isn't it? 4. all in all there are two layers of isolation 1. dynamic content providers such as PHP each "virtual host" / server{} block has its own PHP pool. So the user for pool server{}/user1/ cannot see the pool server{}/user2/.
If /user1/ gets hacked, the hacker won't get immediate access to /user2/ or the nginx master process, correct? 2. independent instances of nginx. In case the master process is breached for whatever reason, the hacker cannot see the other servers unless he gains root privileges on the machine or the same exploit exists in the other servers, correct? Stefan On 16.10.2018 09:56, Francis Daly wrote: > On Tue, Oct 16, 2018 at 09:20:33AM +0200, Stefan Müller wrote: > > Hi there, > >> so I wonder if I need to work with multiple .conf files, or shall I put >> multiple http{} blocks in the general configuration of nginx >> /etc/nginx/nginx.conf? I assume that Reinis told me indirectly to run >> multiple instances of nginx, but I haven't yet understood how. There is the >> master process, presumably taking care of the proxy server, but how do I >> start the instance (if I need to work with instances) per /virtual host/? > In this design, you run multiple instances of nginx. That is: multiple > individual system processes that are totally independent of each other. > > So: nginx-user1.conf includes something like > > http { > server { > listen unix:/some/path/user1.sock; > } > } > > and refers to log files and tmp files and a pid file that user1 can write, > and to a document root that user1 can read (if necessary), and you run > the command "/usr/sbin/nginx -c nginx-user1.conf" as system user user1. > > And then you do the same for user2, user3, etc. > > And then you have one other "nginx-main.conf" which includes "listen 443 > ssl" and includes proxy_pass to the individual unix:/some/path/userN.sock > "backend" servers; and you run the command "/usr/sbin/nginx -c > nginx-main.conf" as user root. > > > Note: the actual file names involved are irrelevant. All that matters > is that when the nginx binary is run with a "-c" option, it can read > the named file which contains the config that this instance will use.
> > If the nginx process starts as user root, it will change itself to run as > the other configured user-id as soon as it can; if it starts as non-root, > it will not. In the above design, all of the user-specific backend nginx > servers run as non-root. > > > And - the term "virtual host" usually refers to different server{} blocks > within the configuration of a single nginx instance. You (generally) don't > care about those -- the nginx binary will start the appropriate child > system-level processes to deal with the configuration that it was given. > > If you are running multiple nginx system-level processes, each one has > its own idea of the virtual hosts from its configuration. With the above > design, all of the "user" nginx instances have just one server{} block, > while the "root" nginx instance probably has multiple server{} blocks. > > > Good luck with it, > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Oct 16 20:40:43 2018 From: nginx-forum at forum.nginx.org (shivramg94) Date: Tue, 16 Oct 2018 16:40:43 -0400 Subject: Idle Timeout during the HTTP request response phase In-Reply-To: <000d01d46552$770dbe20$65293a60$@roze.lv> References: <000d01d46552$770dbe20$65293a60$@roze.lv> Message-ID: <3902c6f457cfd115122c99ad8c3b4aa2.NginxMailingListEnglish@forum.nginx.org> Thanks for the pointers. 
For backend/upstream servers, do they translate to the two directives below? For read: proxy_read_timeout For send: proxy_send_timeout Please correct me if I am wrong Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281614,281618#msg-281618 From kgorlo at gmail.com Wed Oct 17 08:55:30 2018 From: kgorlo at gmail.com (Kamil Gorlo) Date: Wed, 17 Oct 2018 10:55:30 +0200 Subject: Draining keepalive connections during reload Message-ID: Hi, according to https://trac.nginx.org/nginx/ticket/1022#comment:1 and to this mailing list's archives, when the configuration is reloaded, keep-alive connections are closed. Is there a way to "drain" them? What I would like to have is to provide some hard limit in the configuration (in seconds) to let the client close the connection on their side; if that doesn't happen within the given timeout, ONLY THEN are connections closed. This of course will make the reload procedure longer, but in our case it would be really useful and prevent a reconnect storm during upgrades on our edge infrastructure. OR maybe somebody already has some patch to do that (I saw some attempts in the archives)? Cheers, Kamil -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Oct 17 09:13:15 2018 From: nginx-forum at forum.nginx.org (drookie) Date: Wed, 17 Oct 2018 05:13:15 -0400 Subject: nginx caching proxy Message-ID: <38b13213547d28d23eb53e11c8b683d5.NginxMailingListEnglish@forum.nginx.org> Hello, I didn't find the answer in the documentation, but am I right, assuming from my observation, that when the proxy_cache is enabled for a location, and the client requests a file that isn't in the cache yet, nginx starts transmitting this file only after it's fully received from the upstream? Because I'm seeing lags equal to the request_time from the upstream. If I'm right, is there a way to enable transmitting without waiting for the end of the file?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281621,281621#msg-281621 From arut at nginx.com Wed Oct 17 11:24:47 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 17 Oct 2018 14:24:47 +0300 Subject: nginx caching proxy In-Reply-To: <38b13213547d28d23eb53e11c8b683d5.NginxMailingListEnglish@forum.nginx.org> References: <38b13213547d28d23eb53e11c8b683d5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20181017112447.GD24675@Romans-MacBook-Air.local> Hello, On Wed, Oct 17, 2018 at 05:13:15AM -0400, drookie wrote: > Hello, > > I didn't find the answer in the documentation, but am I right, assuming from my > observation, that when the proxy_cache is enabled for a location, and the > client requests a file that isn't in the cache yet, nginx starts > transmitting this file only after it's fully received from the upstream? > Because I'm seeing lags equal to the request_time from the upstream. The short answer is no, the nginx cache does not introduce any delays here. Maybe your client waits until it receives the full file before letting you know. When a client requests a file missing in the cache, the file is requested from the upstream and it is sent to the client SIMULTANEOUSLY with saving it in the cache. However, if another client requests a file currently being received and cached in the first client's context, and proxy_cache_lock is enabled, then this second client will wait for the file to be fully cached by nginx and only receives it from the cache after that. [..]
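The waiting behaviour Roman describes above is controlled by proxy_cache_lock; a minimal illustrative fragment (the upstream address, zone name and cache path are made-up examples, the directive names are from the nginx documentation):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo:10m;

server {
    listen 80;

    location / {
        proxy_pass   http://backend.example;   # hypothetical upstream
        proxy_cache  demo;
        # Only one request per cache key goes upstream at a time;
        # concurrent requests for the same key wait for the cached copy.
        proxy_cache_lock on;
        # Upper bound on that wait; once it expires, the request is
        # passed to the upstream, but the response is not cached.
        proxy_cache_lock_timeout 5s;
    }
}
```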
-- Roman Arutyunyan From francis at daoine.org Wed Oct 17 20:59:43 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Oct 2018 21:59:43 +0100 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> <20181016075622.tr245ocpe3cuypgd@daoine.org> Message-ID: <20181017205943.i3rl2aewksyibwyo@daoine.org> On Tue, Oct 16, 2018 at 09:23:33PM +0200, Stefan Müller wrote: Hi there, > 1. documentation > Is there any additional documentation for the -c switch? I find only: > 1. http://nginx.org/en/docs/switches.html That page indicates that "-c" means that this nginx instance uses this named config file instead of the compiled-in default one. I'm not sure what other documentation is possible. > but none of them says that it will start an independent instance of > nginx. Every time you run the nginx binary, you run a new instance of nginx, independent of any other. Not all command-line argument combinations to nginx mean that it should run a web server that never exits; many combinations mean "do this one thing and exit". The one command-line argument that is used to interact with an already-running nginx -- -s -- does that interaction by sending a signal to the appropriate process-id, in exactly the same way that the "kill" or "pkill" binaries would. The new short-term nginx instance is still independent of the already-running one, in the same way that "kill" would be independent of it. > 2. command line > I assume that the command line parameters refer to a single > instance environment. How do I use the command line parameters for a > specific instance? Is it like this nginx -V "pid > /var/run/nginx-user1.pid"?
Command-line parameters are used when you start a new instance of nginx. They do not refer to any other instance (with the one "-s" exception mentioned above). > 3. root and non-root > only the master / proxy server instance needs root access in order to > bind to ports <1024 and change its user-id to the one defined in > the "user" > directive in the main context of its .conf file. > The other / backend instances don't have to be started as root as > they don't need to bind to ports; they communicate via UNIX sockets, > so all permissions are managed by the user account management. > That is the same as what you said, isn't it? Yes. An nginx that wants to "listen" on a place that only root can "listen" on, or write to places that only root can write to, needs to start as root. Any other nginx does not need to. (Again, excepting "-s".) > 4. all in all there are two layers of isolation > 1. dynamic content providers such as PHP nginx does not "do" PHP. Or much in the way of "dynamic" content (in the sense that it seems to be meant here). > each "virtual host" / server{} block has its own PHP pool. So That's not an nginx thing, but it is a thing that you can configure if you want to. > the user for pool server{}/user1/ cannot see the pool > server{}/user2/. If /user1/ gets hacked, the hacker won't get > immediate access to /user2/ or the nginx master process, correct? "PHP" is (probably) run under a fastcgi server, as whatever system user you want that service to run as. You can run multiple fastcgi servers if you want to, each under their own user account. nginx can be a client of that server if you use "fastcgi_pass" in your nginx config that points to the server. You have nginx-main, starts as root then switches to userM. "The browser" is a client of this server. You have (multiple) nginx-userN, runs as userN. nginx-main is a client of this server. You have (multiple) php-userP, runs as userP. nginx-userN is a client of this server.
What specific threat model are you concerned with here? When a thing gets hacked, is that thing running under the control of root, or userM, or one of the multiple userNs, or one of the multiple userPs? The access the outsider gains will (presumably) not exceed the access that that user has. "root" will be able to access lots of things. "userM" must be able to access the "userN" listening socket. "userN" must be able to access the matching "userP" listening socket. "userP" must be able to access the php files. > 2. independent instances of nginx. > In case the master process is breach for what ever reason, the > hacker cannot see the other serves as long as he won't get root > privileges of the machine and there is the same exploit in the > other servers, correct? Again, I'm unsure what threat model you are concerned about here. If someone breaks the main nginx to get "root" access, they have root access. If they break the main nginx to get "userM" access, they will be able to access nginx-userN (because userM can do that). Is that enough access to also break nginx-userN so that they can get "userN" access? (nginx-main and nginx-userN run the same binary. Is the breakage due to configuration, which might be different between the two? Or due to something inherent, which will be the same between the two?) Without trying to be dismissive: if someone can break nginx to gain user access, it is unlikely to be you or me that they will be attacking first. I think that based on history, "badly written CGI scripts" (which in this case corresponds to "badly written PHP") is the most likely way that web things will be broken. In this design, that PHP runs under the control of the fastcgi server, as a user userP. If that happens, the outsider will have access as userP to do whatever the PHP script and fastcgi server allow them to do. nginx is not involved except as (probably) an initial pass-through tunnel. 
If userP has access to turn off your fridge or reconfigure your nginx-main or send a million emails or read secret files on your filesystem, then the outsider will probably have access to do those things too. Only you can decide what level of risk you're happy with. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Oct 18 12:12:00 2018 From: nginx-forum at forum.nginx.org (wld75) Date: Thu, 18 Oct 2018 08:12:00 -0400 Subject: download logging not captured in access.log In-Reply-To: <20181014051102.GA56558@mdounin.ru> References: <20181014051102.GA56558@mdounin.ru> Message-ID: Hello, Thanks for your support. I have verified the nginx.conf and found no abnormality in logging section: please see below from the nginx.conf. i can see all requests for upload requests only in the access.log, but download request not appeared. ## # Logging Settings ## log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' 'bytes_send="$bytes_sent" content_length="$content_length" content_type="$content_type" request_body="$request_body" ' 'rt=$request_time uct="$upstream_connect_time" server_addr="$server_addr" request_url="$request_uri" uht="$upstream_header_time" urt=" $upstream_response_time" rqlength="$request_length"' access_log /var/log/nginx/access.log main; error_log /var/log/nginx/error.log; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281572,281631#msg-281631 From mdounin at mdounin.ru Thu Oct 18 13:19:10 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2018 16:19:10 +0300 Subject: download logging not captured in access.log In-Reply-To: References: <20181014051102.GA56558@mdounin.ru> Message-ID: <20181018131909.GM56558@mdounin.ru> Hello! On Thu, Oct 18, 2018 at 08:12:00AM -0400, wld75 wrote: > I have verified the nginx.conf and found no abnormality in logging section: > please see below from the nginx.conf. 
> I can see requests for uploads in the access.log, but
> download requests do not appear.

As previously suggested, you should check all logging settings in nginx
configuration, as logging can be disabled on a per-location basis.
Try "nginx -T | grep access_log".

Also, it might be a good idea to make sure that the requests you are
looking for are actually handled by the nginx server you are looking at.
In particular, using tcpdump on the server to check that the requests are
actually arriving might be a good idea.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Oct 18 13:29:31 2018
From: nginx-forum at forum.nginx.org (kimown)
Date: Thu, 18 Oct 2018 09:29:31 -0400
Subject: How can I remove backslash when log format use escape=json
Message-ID: <51c0caa893cb26844bf5fcc417985f15.NginxMailingListEnglish@forum.nginx.org>

Hi, I want to log my entire request_body, but access.log contains some
strange backslashes. How can I remove these backslashes before the double
quotes?

Here is my nginx.conf:
```
log_format main escape=json '$request_body';

access_log logs/access.log main;
```

This is my request code:
```
fetch('http://localhost:8080/njs', {
  method: 'POST',
  body: JSON.stringify({
    text: 'message with backslash'
  })
}).then(res => res.json()).then((res) => {
  console.info(res)
})
```

And access.log:
```
{\"text\":\"message with backslash\"}
```

But I think it should be:
```
{"text":"message with backslash"}
```

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281634,281634#msg-281634

From mdounin at mdounin.ru Thu Oct 18 13:41:02 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 18 Oct 2018 16:41:02 +0300
Subject: How can I remove backslash when log format use escape=json
In-Reply-To: <51c0caa893cb26844bf5fcc417985f15.NginxMailingListEnglish@forum.nginx.org>
References: <51c0caa893cb26844bf5fcc417985f15.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20181018134102.GO56558@mdounin.ru>

Hello!
On Thu, Oct 18, 2018 at 09:29:31AM -0400, kimown wrote: > Hi, I want to log my entire request_body, but access.log contains some > strange backslash, how can I remove these backslash before doube quote? > > Here is my nginx.conf, > ``` > log_format main escape=json '$request_body'; > > access_log logs/access.log main; > ``` > > This is my request code: > ``` > fetch('http://localhost:8080/njs',{ > method:'POST', > body:JSON.stringify({ > text:'message with backslash' > }) > }).then(res=>res.json()).then((res)=>{ > console.info(res) > }) > ``` > > And access.log > ``` > {\"text\":\"message with backslash\"} > ``` > > But I think it should be > ``` > {"text":"message with backslash"} > ``` With "escape=json", nginx will escape variables to be usable in JSON structures, so you will be able to use them in log format with JSON formatting, e.g.: log_format main escape=json '{ "body": "$request_body" }'; If you want nginx to do not escape anything in your logs, you can use "escape=none". Note though that without escaping you completely depend on data the variable contain - in particular, in your case a malicious client will be able to supply arbitrary data, including multiple log entries or broken records. -- Maxim Dounin http://mdounin.ru/ From Garbage at gmx.de Thu Oct 18 14:18:50 2018 From: Garbage at gmx.de (Garbage at gmx.de) Date: Thu, 18 Oct 2018 16:18:50 +0200 Subject: Encoding URL before passing it on to reverse proxied application Message-ID: We are in the process of replacing a rather old web application. The old application used links that look like this: https://nameofapp/do?param1=A&id={ABC-DEF-GHI}¶m2=B I know that as far as RFC is concerned { and } are not valid characters but unfortunately the application was built that way and we have 100s of 1000s of links that were stored by users and we can't replace them (the links ;-)) The new application is based on Spring Boot and has to be prefixed by a nginx webserver. 
Nginx takes the URL and proxies it to the Spring Boot application like this: location / { proxy_pass http://localhost:8080; } While doing this nginx keeps the { and } in place, then Spring Boots built in security firewall complains about those characters and sends a http status 400. I do not want to switch off this feature (although it would be possible). Instead I search for a way to achieve one of these: - have nginx "process" the URL and encode the { and } before passing them on. For example this could look like "/do?param1=A&id=%7BABC-DEF-GHI%7D¶m2=B" - have nginx "hide" the complete URL, for example this could like like "/processhiddenvalue?value=ZG8/cGFyYW0xPUEmaWQ9e0FCQy1ERUYtR0hJfSZwYXJhbTI9Qg==" (which is just the base64 encoded URL) Is one of these approaches possible ? From mdounin at mdounin.ru Thu Oct 18 14:36:05 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Oct 2018 17:36:05 +0300 Subject: Encoding URL before passing it on to reverse proxied application In-Reply-To: References: Message-ID: <20181018143605.GP56558@mdounin.ru> Hello! On Thu, Oct 18, 2018 at 04:18:50PM +0200, Garbage at gmx.de wrote: > We are in the process of replacing a rather old web application. The old application used links that look like this: > > https://nameofapp/do?param1=A&id={ABC-DEF-GHI}¶m2=B > > I know that as far as RFC is concerned { and } are not valid characters but unfortunately the application was built that way and we have 100s of 1000s of links that were stored by users and we can't replace them (the links ;-)) > > The new application is based on Spring Boot and has to be prefixed by a nginx webserver. Nginx takes the URL and proxies it to the Spring Boot application like this: > > location / { > proxy_pass http://localhost:8080; > } > > While doing this nginx keeps the { and } in place, then Spring Boots built in security firewall complains about those characters and sends a http status 400. 
I do not want to switch off this feature (although it would be possible). Instead I search for a way to achieve one of these: > > - have nginx "process" the URL and encode the { and } before passing them on. For example this could look like "/do?param1=A&id=%7BABC-DEF-GHI%7D¶m2=B" > - have nginx "hide" the complete URL, for example this could like like "/processhiddenvalue?value=ZG8/cGFyYW0xPUEmaWQ9e0FCQy1ERUYtR0hJfSZwYXJhbTI9Qg==" (which is just the base64 encoded URL) > > Is one of these approaches possible ? You can easily do something like this: rewrite ^ /foo? break; proxy_pass http://localhost:8080; proxy_set_header X-Original-URI $request_uri; This will replace the URI as seen by the upstream server to "/foo" without any request arguments, and the original request URI will be sent in the X-Original-URI header. Replacing "{" and "}" characters should be also possible, though will require some more sophisticated scripting. For example, the following code: if ($args ~ "(.*){(.*)") { set $args $1%7B$2; } if ($args ~ "(.*){(.*)") { set $args $1%7B$2; } if ($args ~ "(.*)}(.*)") { set $args $1%7D$2; } if ($args ~ "(.*)}(.*)") { set $args $1%7D$2; } will replace up to two occurences of "{" and "}" in the request arguments. -- Maxim Dounin http://mdounin.ru/ From linux at cmadams.net Thu Oct 18 15:39:36 2018 From: linux at cmadams.net (Chris Adams) Date: Thu, 18 Oct 2018 10:39:36 -0500 Subject: POP3/IMAP proxy support for XCLIENT/ID Message-ID: <20181018153936.GB21774@cmadams.net> I am setting up an nginx SMTP proxy and using XCLIENT to get the real client info to the backend Postfix servers. I'm interested in also using it for POP3 and IMAP to backend Dovecot servers - it looks like Dovecot supports XCLIENT in POP3 and ID in IMAP to pass the same real info. Is there any support from nginx for that? 
-- Chris Adams From nginx-forum at forum.nginx.org Thu Oct 18 16:50:29 2018 From: nginx-forum at forum.nginx.org (dhallam) Date: Thu, 18 Oct 2018 12:50:29 -0400 Subject: Trigger alternate response based on file presence or other URL response Message-ID: <8cb7ad2c757cff466b3da814fa8eda9d.NginxMailingListEnglish@forum.nginx.org> Hi. I'll try to make this make sense. I have a few web applications that I can't change. They are behind an nginx+ loadbalancer. I have separate database state that indicates whether or not requests should be forwarded to the upstream, or a status page should be returned. There is a web service that I can invoke that gives me access to the database state. I tried using that service as a health_check uri to change the upstream to "failed", but that doesn't work as the uri is just a path appended to the host+port. I could have something monitoring the database state web service running on the nginx server instance that could, for example, create a flag file on the nginx server instance that could indicate to nginx how it should deal with the requests, but I'm not sure if/how I could configure that in nginx. Would anyone have any suggestions on how I can control how nginx responds based on state that isn't the upstream? Many thanks for your help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281639,281639#msg-281639 From nginx-forum at forum.nginx.org Thu Oct 18 17:59:59 2018 From: nginx-forum at forum.nginx.org (dhallam) Date: Thu, 18 Oct 2018 13:59:59 -0400 Subject: Trigger alternate response based on file presence or other URL response In-Reply-To: <8cb7ad2c757cff466b3da814fa8eda9d.NginxMailingListEnglish@forum.nginx.org> References: <8cb7ad2c757cff466b3da814fa8eda9d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <742decf0d69a8d0c18cdd77264e0f089.NginxMailingListEnglish@forum.nginx.org> So i can see that I can do something like the maintenance mode example, e.g. 
if (-f $document_root/under_maintenance.html) {
    return 503;
}

Wondering if there is a way to use the URL endpoint to check.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281639,281640#msg-281640

From r at roze.lv Thu Oct 18 22:43:40 2018
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 19 Oct 2018 01:43:40 +0300
Subject: Trigger alternate response based on file presence or other URL response
In-Reply-To: <742decf0d69a8d0c18cdd77264e0f089.NginxMailingListEnglish@forum.nginx.org>
References: <8cb7ad2c757cff466b3da814fa8eda9d.NginxMailingListEnglish@forum.nginx.org> <742decf0d69a8d0c18cdd77264e0f089.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000001d46734$043b1c10$0cb15430$@roze.lv>

> Wondering if there is a way to use the URL endpoint to check.

One way to do it would be with:
http://nginx.org/en/docs/http/ngx_http_auth_request_module.html

You said you have nginx+, which has the ability to reconfigure the
backends on the fly:
http://nginx.org/en/docs/http/ngx_http_upstream_conf_module.html

So your application, which can read the state from the database, could
just mark the upstream(s) as down so nginx won't forward any requests.

Another (more advanced) way would be to do it inside nginx with lua
( https://github.com/openresty/lua-nginx-module ), which can connect
directly to the database (mysql/pgsql etc.) and decide what response to
return. Then again, it's more complicated.

rr

From nginx-forum at forum.nginx.org Fri Oct 19 02:23:37 2018
From: nginx-forum at forum.nginx.org (kimown)
Date: Thu, 18 Oct 2018 22:23:37 -0400
Subject: How can I remove backslash when log format use escape=json
In-Reply-To: <20181018134102.GO56558@mdounin.ru>
References: <20181018134102.GO56558@mdounin.ru>
Message-ID: 

I see, thanks a lot for your advice.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281635,281642#msg-281642

From Garbage at gmx.de Fri Oct 19 08:31:33 2018
From: Garbage at gmx.de (Garbage at gmx.de)
Date: Fri, 19 Oct 2018 10:31:33 +0200
Subject: Aw: Re: Encoding URL before passing it on to reverse proxied application
In-Reply-To: <20181018143605.GP56558@mdounin.ru>
References: <20181018143605.GP56558@mdounin.ru>
Message-ID: 

> > Is one of these approaches possible ?
> 
> You can easily do something like this:
> 
> rewrite ^ /foo? break;
> proxy_pass http://localhost:8080;
> proxy_set_header X-Original-URI $request_uri;
> 
> This will replace the URI as seen by the upstream server to
> "/foo" without any request arguments, and the original request URI
> will be sent in the X-Original-URI header.

Thanks a lot, this looks promising. I will give it a try.

From xeioex at nginx.com Fri Oct 19 11:27:27 2018
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Fri, 19 Oct 2018 14:27:27 +0300
Subject: Nginx NJS fs.writeFileSync is atomic writing and race condition prevention ?
In-Reply-To: 
References: 
Message-ID: 

On 19.10.2018 06:33, HENG wrote:
> Hello:
> 
> I am new to Nginx NJS, and I want to write a website with NJS.
> 
> I want to write a simple JSON database with NJS fs.writeFileSync, just
> like Node.js LowDB, but I have no idea: is NJS fs.writeFileSync atomic,
> and does it prevent race conditions?

http://hg.nginx.org/njs/file/tip/njs/njs_fs.c#l649

It simply opens the file, makes several write syscalls into it, and
closes the file. So, it is not atomic. We probably need something like
https://www.npmjs.com/package/write-file-atomic here.

> 
> If NJS fs.writeFileSync is NOT atomic, it can NOT be treated as a
> normal database file write.
> 
> Thank you !
> > -- > -------------------------------------------------------------------- > Heng > --------------------------------------------------------------------- > -- > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > From stefan.mueller.83 at gmail.com Fri Oct 19 21:27:09 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Fri, 19 Oct 2018 23:27:09 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <20181017205943.i3rl2aewksyibwyo@daoine.org> References: <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> <20181016075622.tr245ocpe3cuypgd@daoine.org> <20181017205943.i3rl2aewksyibwyo@daoine.org> Message-ID: <608741cc-223d-56fb-059e-131ca4a726b5@gmail.com> Hi Francis, thank you for the update 1. documentation I've read up and down the documentation and other sources. I could not find anything saying that if you run the nginx binary, you run a new instance of nginx, independent of any other. Where is it mentioned that nginx is multi instance capable? application only indirectly in Upgrading To a New Binary On The Fly in Starting, Stopping, and Restarting NGINX and How To Upgrade Nginx In-Place Without Dropping Client Connections . I found rather sources telling me to install it twice as here: How to install Multiple Nginx instances in same Server 2. command line Slowly I being understanding reading Starting, Stopping, and Restarting NGINX and Controlling NGINX Processes at Runtime . If you run multiple instances you use the signal -s with a reference to the desired PID otherwise it will pick the default on what is /var/run/nginx.pid , isn't? e.g. 
I have a master and two user nginx instances; reloading each of them would be:

master: nginx -s reload
user1: nginx -s reload "pid /var/run/nginx_user1.pid"
user2: nginx -s reload "pid /var/run/nginx_user2.pid"

or by their PIDs as listed by ps aux -P | grep nginx:

master: nginx -s reload
user1: nginx -s reload PID_NoOfUser1Instance
user2: nginx -s reload PID_NoOfUser2Instance

Making sure that I understood correctly: nginx -g "pid /var/run/nginx.pid; worker_processes `sysctl -n hw.ncpu`;" means to replace the number of workers with the count of CPUs, in the nginx master process whose process id is given in /var/run/nginx.pid, doesn't it?

3. root and non-root
done :-)

4. all in all there are two layers of isolation
I didn't mean that nginx does the e.g. PHP interpretation itself; you just tell nginx where to find the PHP interpreter, which will do the job and feed the content back through a CGI. What you described is more or less what I meant, but I'm a bit confused, as you say that if the main nginx is compromised the hacker could gain root privileges? Is this also possible after the main process has changed its user, once the ports <1024 are bound?

After all, I just want to have a clear picture of what is at risk when some part of the entire server environment is compromised, and how I can minimize the risk by isolating all involved parts. I agree that not many of us will be a target for hacking nginx itself, and if someone tries to hack our servers there are probably weaker parts, such as the so often mentioned "badly written PHP" scripts. All that I want is a good idea of the risks, to put me in a position to do a proper effort vs. risk assessment. I reckon the information gathered so far puts me in a quite good position, doesn't it?

thx
Stefan

On 17.10.2018 22:59, Francis Daly wrote:
> On Tue, Oct 16, 2018 at 09:23:33PM +0200, Stefan Müller wrote:
> 
> Hi there,
> 
>> 1. documentation
>> Is there any additional document for the -c command.
I find only: >> 1.http://nginx.org/en/docs/switches.html > That page indicates that "-c" means that this nginx instance uses this > named config file instead of the compiled-in default one. I'm not sure > what other documentation is possible. > >> but none of them says that it will start an independent instances of >> nginx. > Every time you run the nginx binary, you run a new instance of nginx, > independent of any other. > > Not all command-line argument combinations to nginx mean that it should > run a web server that never exits; many combinations mean "do this one > thing and exit". > > The one command-line argument that is used to interact with an > already-running nginx -- -s -- does that interaction by sending a signal > to the appropriate process-id, in exactly the same way that the "kill" > or "pkill" binaries would. The new short-term nginx instance is still > independent of the already-running one, in the same way that "kill" > would be independent of it. > >> 2. command line >> I assume, that the command line parameters refer to a single >> instance environment. How do I use the command line parameters for a >> specific instance? Is it like this nginx -V "pid >> /var/run/nginx-user1.pid"? > Command-line parameters are used when you start a new instance of > nginx. They do not refer to any other instance (with the one "-s" > exception mentioned above). > >> 3. root and non-root >> only the master / proxy server instance need root access in order to >> bind to ports <1024 and change its user-id to the one defined in >> the|user| >> directive in the main context of its .conf file. >> The other / backend instances don't have to be started as root as >> they don't need to bind to ports, they communicate via UNIX sockets >> so all permission are managed by the user account management. >> That is the same, what you said, isn't it? > Yes. 
> > An nginx that wants to "listen" on a place that only root can "listen" > on, or write to places that only root can write to, needs to start as > root. Any other nginx does not need to. (Again, excepting "-s".) > >> 4. all in all there two layers of isolation >> 1. dynamic content provide such as PHP > nginx does not "do" PHP. Or much in the way of "dynamic" content (in > the sense that it seems to be meant here). > >> each "virtual host" / server{} blocks has its own PHP pool. So > That's not an nginx thing, but it is a thing that you can configure if > you want to. > >> the user for pool server{}/user1/ cannot see? the pool >> server{}/user2/. If /user1/ gets hacked, the hacker won't get >> immidate acceass to /user2/ or the nginx? master process, correct? > "PHP" is (probably) run under a fastcgi server, as whatever system user > you want that service to run as. You can run multiple fastcgi servers > if you want to, each under their own user account. > > nginx can be a client of that server if you use "fastcgi_pass" in your > nginx config that points to the server. > > > You have nginx-main, starts as root then switches to userM. "The browser" > is a client of this server. > > You have (multiple) nginx-userN, runs as userN. nginx-main is a client > of this server. > > You have (multiple) php-userP, runs as userP. nginx-userN is a client > of this server. > > What specific threat model are you concerned with here? > > When a thing gets hacked, is that thing running under the control of root, > or userM, or one of the multiple userNs, or one of the multiple userPs? > > The access the outsider gains will (presumably) not exceed the access > that that user has. "root" will be able to access lots of things. "userM" > must be able to access the "userN" listening socket. "userN" must be > able to access the matching "userP" listening socket. "userP" must be > able to access the php files. > >> 2. independent instances of nginx. 
>> In case the master process is breach for what ever reason, the >> hacker cannot see the other serves as long as he won't get root >> privileges of the machine and there is the same exploit in the >> other servers, correct? > Again, I'm unsure what threat model you are concerned about here. > > If someone breaks the main nginx to get "root" access, they have root > access. If they break the main nginx to get "userM" access, they will > be able to access nginx-userN (because userM can do that). Is that > enough access to also break nginx-userN so that they can get "userN" > access? (nginx-main and nginx-userN run the same binary. Is the breakage > due to configuration, which might be different between the two? Or due > to something inherent, which will be the same between the two?) > > Without trying to be dismissive: if someone can break nginx to gain user > access, it is unlikely to be you or me that they will be attacking first. > > I think that based on history, "badly written CGI scripts" (which in > this case corresponds to "badly written PHP") is the most likely way > that web things will be broken. In this design, that PHP runs under > the control of the fastcgi server, as a user userP. If that happens, > the outsider will have access as userP to do whatever the PHP script > and fastcgi server allow them to do. nginx is not involved except as > (probably) an initial pass-through tunnel. > > If userP has access to turn off your fridge or reconfigure your nginx-main > or send a million emails or read secret files on your filesystem, then > the outsider will probably have access to do those things too. > > Only you can decide what level of risk you're happy with. > > Good luck with it, > > f -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r at roze.lv Mon Oct 22 08:50:11 2018 From: r at roze.lv (Reinis Rozitis) Date: Mon, 22 Oct 2018 11:50:11 +0300 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <608741cc-223d-56fb-059e-131ca4a726b5@gmail.com> References: <000601d45a6e$cee8e960$6cbabc20$@roze.lv> <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> <20181016075622.tr245ocpe3cuypgd@daoine.org> <20181017205943.i3rl2aewksyibwyo@daoine.org> <608741cc-223d-56fb-059e-131ca4a726b5@gmail.com> Message-ID: <000001d469e4$3e74a9c0$bb5dfd40$@roze.lv> > 2. command line > Slowly I being understanding reading Starting, Stopping, and Restarting NGINX and Controlling NGINX Processes at Runtime. If you run > multiple instances you use the signal -s with a reference to the desired PID otherwise it will pick the default on what is /var/run/nginx.pid , > isn't? > e.g. I have a master and two user nginx instances relaoding each of them would be: > master: nginx -s reload > user1: nginx -s reload "pid /var/run/nginx_user1.pid" > user2: nginx -s reload "pid /var/run/nginx_user2.pid" > > Making sure that I understood correctly nginx -g "pid /var/run/nginx.pid; worker_processes `sysctl -n hw.ncpu`;" means to replace the > number of workers by the count of CPUs in nginx master process with the process id given in /var/run/nginx.pid, doesn't it? While you can specify the pid (process id) from the command line imo it's more simple to put the pid file directive in each nginx configuration ( http://nginx.org/en/docs/ngx_core_module.html#pid ). For example nginx_user1.conf could have pid logs/nginx_user1.pid; , nginx_user2.conf - pid logs/nginx_user2.pid; etc.. Then for the cli commands you can just specify particular configuration file and nginx will pick up the pid automatically. 
For example, to reload the user2 nginx instance the cli command would be:

nginx -s reload -c /path/to/user2_nginx.conf

rr

From al-nginx at none.at Mon Oct 22 12:08:56 2018
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 22 Oct 2018 14:08:56 +0200
Subject: NGINX with CGI on Alpine Linux
In-Reply-To: <7786AB78-F0B9-465D-89A9-3B59249582B1@hubject.net>
References: <43EB1F9C-98EE-4924-A597-70792283F47A@hubject.net> <53EAC8D3-5365-4D3C-BC13-5A9C5D4AAB10@hubject.net> <7786AB78-F0B9-465D-89A9-3B59249582B1@hubject.net>
Message-ID: <72806a94-abd7-67ca-2c14-26262db5b905@none.at>

Hi Postmaster (a.k.a. Bernard)

I think this is a normal user question, not a development question, so
let's switch to nginx at nginx.org instead of nginx-devel at nginx.org.

On 22.10.2018 at 13:57, Postmaster wrote:
> Hi Team,
> 
> I'm looking for the easiest way to allow CGI scripts on my web site.
>> Any straightforward procedure available?

A short search shows at least two good options.

https://stackoverflow.com/questions/11667489/how-to-run-cgi-scripts-on-nginx
https://stackoverflow.com/questions/10252306/nginx-uwsgi-and-cgi-python-script

Both recommend uwsgi; it's a robust solution.

> Best Regards
> 
> Bernard

Best regards.
Aleks

> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
> 

From shahzaib.cb at gmail.com Tue Oct 23 09:03:14 2018
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Tue, 23 Oct 2018 14:03:14 +0500
Subject: Split single file!
Message-ID: 

Hi,

We have a js file which is 1.1 MB large. This file needs to be loaded 100% in order to load the website, which sometimes takes a bit of time. So we were wondering if there's a way we can split this file request into multiple requests and make it load in parallel with the help of nginx?

Regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From francis at daoine.org Tue Oct 23 18:01:03 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Oct 2018 19:01:03 +0100 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <608741cc-223d-56fb-059e-131ca4a726b5@gmail.com> References: <000001d45aad$4d488f70$e7d9ae50$@roze.lv> <7f86eb5e-0916-589f-e5b8-c08999fb59e9@gmail.com> <20181015202359.afvumr777zbbeokb@daoine.org> <9bd403d9-b8ae-0ad4-85aa-ae6671ec999d@gmail.com> <20181016075622.tr245ocpe3cuypgd@daoine.org> <20181017205943.i3rl2aewksyibwyo@daoine.org> <608741cc-223d-56fb-059e-131ca4a726b5@gmail.com> Message-ID: <20181023180103.f2hefbd4kvne5slt@daoine.org> On Fri, Oct 19, 2018 at 11:27:09PM +0200, Stefan M?ller wrote: Hi there, > thank you for the update You're welcome. > 1. documentation > I've read up and down the documentation and other sources. I could > not find anything saying that if you run the nginx binary, you run a > new instance of nginx, independent of any other. In my experience, pretty much every time I run a command, it starts a new instance of the command that I just ran. That's what I expect to happen. The few exceptions are the ones that should be documented as exceptional. So the documentation as-is reads fine to me, and I am not in a position to suggest alternate words that might be clearer for others. > 2. command line > Slowly I being understanding reading Starting, Stopping, and > Restarting NGINX > e.g. I have a master and two user nginx instances relaoding each of > them would be: > master: nginx -s reload > user1:? nginx -s reload "pid /var/run/nginx_user1.pid" > user2:? nginx -s reload "pid /var/run/nginx_user2.pid" > or by its PIDs listed by ps aux -P | grep nginx > master: nginx -s reload > user1:? nginx -s reload PID_NoOfUser1Instance > user2:? nginx -s reload PID_NoOfUser1Instance My reading of that documentation does not lead me to the same conclusion that you apparently came to. 
That's ok; it should be straightforward enough for you to test. Run "nginx -c conf1", "nginx -c conf2", and "nginx". Check "ps" or read log files, so that you can see which instances are running. Then run "nginx -s stop" with whatever other arguments look right, and see what the response is, and what nginx instances are running afterwards. I suspect that the system is much simpler than you might think it is. If you want to stop the nginx that you started with "nginx -c conf1", you can run "nginx -c conf1 -s stop". > Making sure that I understood correctly nginx -g "pid > /var/run/nginx.pid; worker_processes `sysctl -n hw.ncpu`;" means to > replace the number of workers by the count of CPUs in nginx master > process with the process id given in /var/run/nginx.pid, doesn't it? I'm afraid I really don't know how to answer that, without just saying "no" and repeating my previous mail. What happened when you tried to run that command? > 4. ?all in all there two layers of isolation > After all, I just want to have a clear picture of what is at risk > when some part of the entire server environment is compromised and > how can I minimize the risk by isolation of all involved parts. That is a good thing to aim for. Only you can do your risk assessment. You have access to exactly what nginx does -- it's in the directory "src". If that's not something you want to read, then you will have to accept someone else's summary of it, and put your trust in them. If I were to write "the entire system is now, and will forever remain, totally unbreakable, no matter what configuration you use", would that change your opinion of it? Why should you accept random words in a stranger's email? (The quoted statement is almost certainly incorrect, by the way.) Some people and some companies seem willing to run the software. 
Maybe they did their own risk assessments and decided it was ok, or it was ok so long as they avoid some specific parts; or maybe they just assumed that it was good enough because other people were using it. I'm happy to use nginx for my purposes. One nginx which can read all relevant static files, plus one fastcgi server for each php application that I don't want to fully assess, is enough for me. > I reckon the information gathered so far put me in a quite good > position, doesn't it? I hope so. f -- Francis Daly francis at daoine.org From brian at brianwhalen.net Tue Oct 23 22:09:21 2018 From: brian at brianwhalen.net (Brian W.) Date: Tue, 23 Oct 2018 15:09:21 -0700 Subject: Nginx and token in localstorage Message-ID: There is one issue I am still seeing. If I use the provided ldap tools and auth with this, and set the cookie to expire, when it expires and a user goes there they get a 200 error from a consul server. It looks like a small localstorage-based token is being left behind, and if it is deleted, or if I simply refresh the page, this all works. Has this been seen before? Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Oct 24 16:47:16 2018 From: nginx-forum at forum.nginx.org (bbalban) Date: Wed, 24 Oct 2018 12:47:16 -0400 Subject: Nginx as Load Balancer AND Reverse Proxy Message-ID: <2fbd3e41b3b2f8bd02a086e7e790848a.NginxMailingListEnglish@forum.nginx.org> Hello, I am using Nginx successfully as a Load Balancer in AWS where http is redirected to https, and the correct SSL certificate is used based on the Host parameter. Behind the Load balancer are 4 stateless NodeJS servers. What I want to do now is that when a customer points their domain to my Load balancer through CNAME, I _also_ want to serve a particular Path as their root. For example: Their directory is at myservice.com/c/company path. 
When someone visits www.company.com this host will get recognized and I will serve myservice.com/c/company as the root of www.company.com Is it possible to configure the load balancer to also do this? Thanks, Bahadir Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281681,281681#msg-281681 From michael.friscia at yale.edu Wed Oct 24 17:25:40 2018 From: michael.friscia at yale.edu (Friscia, Michael) Date: Wed, 24 Oct 2018 17:25:40 +0000 Subject: Nginx Plus Dashboard - memory pages Message-ID: <9C88DAF0-4B57-4AB7-9619-B9DD668CC6AC@yale.edu> I'm not really sure I understand what memory pages are in the Nginx Plus dashboard. I've been poking around the documentation and must be missing it. But it seems like every zone is using 5090 and then there is just one zone that is using 2544. My questions: 1. What are "Memory Pages"? 2. Are they configurable? 3. Is it odd that one out of 8 zones is basically half what all the other zones are? Any help to push me in the right direction for documentation would be appreciated. ___________________________________________ Michael Friscia Office of Communications Yale School of Medicine (203) 737-7932 - office (203) 931-5381 - mobile http://web.yale.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Thu Oct 25 10:41:10 2018 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Thu, 25 Oct 2018 13:41:10 +0300 Subject: Nginx Plus Dashboard - memory pages In-Reply-To: <9C88DAF0-4B57-4AB7-9619-B9DD668CC6AC@yale.edu> References: <9C88DAF0-4B57-4AB7-9619-B9DD668CC6AC@yale.edu> Message-ID: <982f27a3-3468-6237-432e-0861548d7585@nginx.com> Michael, Please send your Nginx Plus related questions to plus-support at nginx.com. Thank you! Regards, Igor On 24.10.2018 20:25, Friscia, Michael wrote: > > I'm not really sure I understand what memory pages are in the Nginx > Plus dashboard. I've been poking around the documentation and must be > missing it. 
But it seems like every zone is using 5090 and then there > is just one zone that is using 2544. > > My questions: > > 1. What are "Memory Pages"? > 2. Are they configurable? > 3. Is it odd that one out of 8 zones is basically half what all the > other zones are? > > Any help to push me in the right direction for documentation would > be appreciated. > > ___________________________________________ > > Michael Friscia > > Office of Communications > > Yale School of Medicine > > (203) 737-7932 - office > > (203) 931-5381 - mobile > > http://web.yale.edu > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Oct 25 11:21:35 2018 From: nginx-forum at forum.nginx.org (vizl) Date: Thu, 25 Oct 2018 07:21:35 -0400 Subject: nginx caching proxy In-Reply-To: <20181017112447.GD24675@Romans-MacBook-Air.local> References: <20181017112447.GD24675@Romans-MacBook-Air.local> Message-ID: And how about the resulting cache file? Will it be only 1 object in the cache when 2 clients GET 1 file simultaneously, or 2 different objects in the nginx proxy_cache_path? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281621,281686#msg-281686 From arut at nginx.com Thu Oct 25 13:20:57 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 25 Oct 2018 16:20:57 +0300 Subject: nginx caching proxy In-Reply-To: References: <20181017112447.GD24675@Romans-MacBook-Air.local> Message-ID: <20181025132057.GK24675@Romans-MacBook-Air.local> On Thu, Oct 25, 2018 at 07:21:35AM -0400, vizl wrote: > And how about the resulting cache file? > Will it be only 1 object in the cache when 2 clients GET 1 file simultaneously, or > 2 different objects in the nginx proxy_cache_path? Only one response can be cached for a single key. Once a new response is cached, the previous one is discarded. 
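The single-key behaviour described above follows from the cache key: by default, every request for the same URI hashes to the same key, so two simultaneous GETs end up filling one cache entry. A minimal sketch of the relevant directives (the zone name, paths, and upstream address are placeholders, not from this thread):

```nginx
# Hedged sketch: both simultaneous GETs for one URI share a cache key,
# so only one object ends up under proxy_cache_path.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo:10m;

upstream backend {
    server 127.0.0.1:8080;  # placeholder origin
}

server {
    listen 8081;

    location / {
        proxy_cache demo;
        # This is nginx's default key; requests hashing to the same
        # value share one cache entry. Add variables (for example
        # $http_accept_encoding) to store distinct objects instead.
        proxy_cache_key $scheme$proxy_host$request_uri;
        # Optional: serialize concurrent misses so a single upstream
        # fetch populates the entry while other clients wait for it.
        proxy_cache_lock on;
        proxy_pass http://backend;
    }
}
```

Without proxy_cache_lock, both clients may go upstream on a cold cache, but the later response simply replaces the earlier one under the same key.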
-- Roman Arutyunyan From nginx-forum at forum.nginx.org Thu Oct 25 14:18:58 2018 From: nginx-forum at forum.nginx.org (aliakseilisai@gmail.com) Date: Thu, 25 Oct 2018 10:18:58 -0400 Subject: HTTP Status Code 463 In-Reply-To: <20171004142443.GO16067@mdounin.ru> References: <20171004142443.GO16067@mdounin.ru> Message-ID: <256c6cdda53a07fcc94bee11f469eb79.NginxMailingListEnglish@forum.nginx.org> HTTP 463 It is a Amazon balancer error code. See https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html Looks like your requests in a loop. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,276667,281688#msg-281688 From anoopalias01 at gmail.com Thu Oct 25 14:34:36 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 25 Oct 2018 20:04:36 +0530 Subject: high memory usage Message-ID: Hi, On a shared server with a large number of accounts ################ sites-enabled]# grep "server {" *|wc -l 11877 ################ The memory usage of nginx is very high -------------------------------------------------------------------------- Private + Shared = RAM used Program 1.6 GiB + 4.9 GiB = 6.5 GiB nginx (3) ----------------------------------------------------------------------------- # cat /proc/2068600/maps 00400000-006d6000 r-xp 00000000 09:7d 105657122 /usr/sbin/nginx 008d5000-008d6000 r--p 002d5000 09:7d 105657122 /usr/sbin/nginx 008d6000-008fe000 rw-p 002d6000 09:7d 105657122 /usr/sbin/nginx 008fe000-00921000 rw-p 00000000 00:00 0 0218b000-a2217000 rw-p 00000000 00:00 0 [heap] a2217000-13fbc7000 rw-p 00000000 00:00 0 [heap] ------------------------------------------------------------------------------------ pmap 2068600 2068600: nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf 0000000000400000 2904K r-x-- nginx 00000000008d5000 4K r---- nginx 00000000008d6000 160K rw--- nginx 00000000008fe000 140K rw--- [ anon ] 000000000218b000 2622000K rw--- [ anon ] 00000000a2217000 2582208K rw--- [ anon ] 
--------------------------------------------------------------------------------------- It looks like the heap is 2.6GB in size. Is there a way to reduce this? The configuration is not the problem ( which is why I am not attaching it) as systems will a smaller number of vhosts using the same config consume less ram -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gryzli.the.bugbear at gmail.com Thu Oct 25 16:35:54 2018 From: gryzli.the.bugbear at gmail.com (Gryzli Bugbear) Date: Thu, 25 Oct 2018 19:35:54 +0300 Subject: nginx caching proxy In-Reply-To: <20181025132057.GK24675@Romans-MacBook-Air.local> References: <20181017112447.GD24675@Romans-MacBook-Air.local> <20181025132057.GK24675@Romans-MacBook-Air.local> Message-ID: <943a35a2-79e6-714e-f170-f9bfe70e7356@gmail.com> Except the case when your site is returning "Vary: Accept-Encoding" for example and the 2 clients are using different Accept-Encoding headers? (that's also true for whatever the Vary header is)? :) On 10/25/18 4:20 PM, Roman Arutyunyan wrote: > On Thu, Oct 25, 2018 at 07:21:35AM -0400, vizl wrote: >> And how about result cache file ? >> Will it be only 1 object in cache when 2 client GET 1 file SIMULTANEOUSLY or >> 2 different objects in Nginx proxy_cache_path ? > Only one response can be cached for a single key. Once a new response is > cached, previous one is discarded. > -- -- Gryzli https://gryzli.info From pg151 at dev-mail.net Thu Oct 25 16:56:27 2018 From: pg151 at dev-mail.net (pg151 at dev-mail.net) Date: Thu, 25 Oct 2018 09:56:27 -0700 Subject: BasicAuth config question Message-ID: <1540486587.3648875.1554561320.179D68F6@webmail.messagingengine.com> If I define nginx.conf ... server { ... include includes/conf1.inc; include includes/conf2.inc; ... } ... 
cat includes/conf1.inc; location ~ ^/sec($|/$) { deny all; } cat includes/conf2.inc; location = /sec/status { auth_basic 'Secure Access'; auth_basic_user_file /etc/nginx/sec/users; stub_status on; } @ https://example.com/sec/status displays, as intended, a HTTP Basic Auth challenge. But, if I move the auth_basic* into the immediately prior config file, cat includes/conf1.inc; location ~ ^/sec($|/$) { deny all; } + location ~ ^/sec { + auth_basic 'Secure Access'; + auth_basic_user_file /etc/nginx/sec/users; + } cat includes/conf2.inc; location = /sec/status { - auth_basic 'Secure Access'; - auth_basic_user_file /etc/nginx/sec/users; stub_status on; } @ https://example.com/sec/status displays server status immediately, WITHOUT any HTTP Basic Auth challenge. What's wrong with my 2nd config that's causing it to NOT invoke Basic Auth challenge? From mdounin at mdounin.ru Thu Oct 25 17:23:01 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 25 Oct 2018 20:23:01 +0300 Subject: BasicAuth config question In-Reply-To: <1540486587.3648875.1554561320.179D68F6@webmail.messagingengine.com> References: <1540486587.3648875.1554561320.179D68F6@webmail.messagingengine.com> Message-ID: <20181025172301.GH56558@mdounin.ru> Hello! On Thu, Oct 25, 2018 at 09:56:27AM -0700, pg151 at dev-mail.net wrote: > If I define > > nginx.conf > ... > server { > ... > include includes/conf1.inc; > include includes/conf2.inc; > ... > } > ... > > cat includes/conf1.inc; > location ~ ^/sec($|/$) { > deny all; > } > > cat includes/conf2.inc; > location = /sec/status { > auth_basic 'Secure Access'; > auth_basic_user_file /etc/nginx/sec/users; > stub_status on; > } > > @ https://example.com/sec/status > > displays, as intended, a HTTP Basic Auth challenge. 
> > But, if I move the auth_basic* into the immediately prior config file, > > cat includes/conf1.inc; > location ~ ^/sec($|/$) { > deny all; > } > + location ~ ^/sec { > + auth_basic 'Secure Access'; > + auth_basic_user_file /etc/nginx/sec/users; > + } > > cat includes/conf2.inc; > location = /sec/status { > - auth_basic 'Secure Access'; > - auth_basic_user_file /etc/nginx/sec/users; > stub_status on; > } > > @ https://example.com/sec/status > > displays server status immediately, WITHOUT any HTTP Basic Auth challenge. > > What's wrong with my 2nd config that's causing it to NOT invoke Basic Auth challenge? In your second config, auth_basic is only configured for location "~ ^/sec", but not for location "= /sec/status". Since the request to /sec/status is handled in the latter, auth_basic won't apply. Note that location matching selects only one location to handle a request. If there are many matching locations, most specific will be used (see http://nginx.org/r/location for details). If you want to configure auth_basic for anything under /sec/, consider using nested prefix locations instead. For example: location /sec/ { auth_basic 'Secure Access'; auth_basic_user_file /etc/nginx/sec/users; location = /sec/ { deny all; } location = /sec/status { stub_status on; } } This way, auth_basic is inherited into all nested locations, and will be configured in "location = /sec/status" as well. Note well that "location ~ ^/sec" in your configuration will also match requests to "/security", "/second-version", and so on. Most likely this is not what you want, so the above example configuration uses "/sec/" prefix instead. -- Maxim Dounin http://mdounin.ru/ From vbart at nginx.com Thu Oct 25 17:29:02 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 25 Oct 2018 20:29:02 +0300 Subject: Unit 1.5 release Message-ID: <1564175.NYa3obWIOI@vbart-workstation> Hello, I'm glad to announce a new release of NGINX Unit. This release introduces preliminary Node.js support. 
Currently it lacks WebSockets, and there's a known problem with "promises". However, our admirable users have already started testing it even before the release: - https://medium.com/house-organ/what-an-absolute-unit-a36851e72554 Now it even easier, since Node.js package is published in npm: - https://www.npmjs.com/package/unit-http So feel free to try it and give us feedback on: - Github: https://github.com/nginx/unit/issues - Mailing list: https://mailman.nginx.org/mailman/listinfo/unit We will continue improving Node.js support in future releases. Among other features we are working on right now: WebSockets, Java module, flexible request routing, and serving of static media assets. Changes with Unit 1.5 25 Oct 2018 *) Change: the "type" of application object for Go was changed to "external". *) Feature: initial version of Node.js package with basic HTTP request-response support. *) Feature: compatibility with LibreSSL. *) Feature: --libdir and --incdir ./configure options to install libunit headers and static library. *) Bugfix: connection might be closed prematurely while sending response; the bug had appeared in 1.3. *) Bugfix: application processes might have stopped handling requests, producing "last message send failed: Resource temporarily unavailable" alerts in log; the bug had appeared in 1.4. *) Bugfix: Go applications didn't work when Unit was built with musl C library. wbr, Valentin V. 
Bartenev From pg151 at dev-mail.net Thu Oct 25 17:34:00 2018 From: pg151 at dev-mail.net (pg151 at dev-mail.net) Date: Thu, 25 Oct 2018 10:34:00 -0700 Subject: BasicAuth config question In-Reply-To: <20181025172301.GH56558@mdounin.ru> References: <1540486587.3648875.1554561320.179D68F6@webmail.messagingengine.com> <20181025172301.GH56558@mdounin.ru> Message-ID: <1540488840.3658488.1554603152.01FFC539@webmail.messagingengine.com> On Thu, Oct 25, 2018, at 10:23 AM, Maxim Dounin wrote: > In your second config, auth_basic is only configured for location > "~ ^/sec", but not for location "= /sec/status". Since the request > to /sec/status is handled in the latter, auth_basic won't apply. > > Note that location matching selects only one location to handle > a request. If there are many matching locations, most specific > will be used (see http://nginx.org/r/location for details). Ok, got that. Thx. > If you want to configure auth_basic for anything under /sec/, > consider using nested prefix locations instead. For example: > > location /sec/ { > auth_basic 'Secure Access'; > auth_basic_user_file /etc/nginx/sec/users; > > location = /sec/ { > deny all; > } > > location = /sec/status { > stub_status on; > } > } > > This way, auth_basic is inherited into all nested locations, and > will be configured in "location = /sec/status" as well. I get the nesting. I'd _like_ to split that config across two files: one that I can include in EVERY config that deals with "auth_basic under /sec/", and the other that i can "drop-in" (include) just for sites where I want to use "status pages" (here, just the nginx-status). Can you 'nest' across separate configs? > Note well that "location ~ ^/sec" Yep, thx. From quintinpar at gmail.com Thu Oct 25 20:58:15 2018 From: quintinpar at gmail.com (Quintin Par) Date: Thu, 25 Oct 2018 13:58:15 -0700 Subject: Why is my server response time over 400 ms for a fully cached site? 
Message-ID: Cross-posted at Stackoverflow I have a website with all of the pages served from nginx's http cache and rarely invalidated or expired. The average total page download size is around 2 MB. But despite being a static site with no funny logic, my server response is around a second https://d.pr/NUxu2a I recorded nginx's $request_time and it comes to around 400 milliseconds from the server https://d.pr/HGhZZy and each file at 20-30 KB average https://d.pr/kSSUkB 400 milliseconds seems absurd. I am behind *Cloudflare* and sendfile on; tcp_nopush off; tcp_nodelay on; keepalive_timeout 300s; keepalive_requests 10000; What should I be doing to bring down the response time to the 150-millisecond range? - Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahesh at darkbears.com Fri Oct 26 05:09:56 2018 From: mahesh at darkbears.com (Mahesh Biloniya) Date: Fri, 26 Oct 2018 10:39:56 +0530 Subject: Nginx image_filter problem. Message-ID: <166aec76aca.ac390910105142.7534381597407456358@darkbears.com> Dear Sir/Madam, I am currently setting up an nginx server and in the process I am facing a problem related to "Module ngx_http_image_filter_module". I did all configuration as mentioned below: # apt-get install libgd-dev # ./configure --sbin-path=/usr/bin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --with-pcre --with-select_module --with-http_image_filter_module=dynamic --modules-path=/etc/nginx/modules # make # make install But now I am getting "nginx: [emerg] unknown directive "image_filter" in /etc/nginx/nginx.conf:38". How can I remove this error? Please provide a suitable solution. Thanks Regards Mahesh -------------- next part -------------- An HTML attachment was scrubbed... 
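Since the module above was built with --with-http_image_filter_module=dynamic, it is not active until loaded explicitly with load_module. A hedged sketch (the .so file name is the standard one this build produces; the path matches the --modules-path given in the configure line; the root and location pattern are placeholders):

```nginx
# Top of nginx.conf (main context): load the dynamic module first,
# otherwise "image_filter" remains an unknown directive.
load_module /etc/nginx/modules/ngx_http_image_filter_module.so;

events {}

http {
    server {
        listen 8080;
        location ~ \.(jpg|jpeg|png)$ {
            root /var/www/images;         # placeholder document root
            image_filter resize 150 100;  # directive is now recognized
        }
    }
}
```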
URL: From nginx-forum at forum.nginx.org Fri Oct 26 06:16:52 2018 From: nginx-forum at forum.nginx.org (alweiss) Date: Fri, 26 Oct 2018 02:16:52 -0400 Subject: Variable scope in javascript module Message-ID: <813894e000d1e72e2b7b2c41dd6d989d.NginxMailingListEnglish@forum.nginx.org> Hi team! Regarding the sample here: https://www.nginx.com/blog/batching-api-requests-nginx-plus-javascript-module/ I have an issue trying to use the JS module: variable hoisting and global/local scope don't behave as expected. When I try to reassign the resp variable (as you do after declaring it), the value assigned in the done function is not brought outside of the done function to the batchAPI function. So if resp is initialised with 0 and reassigned to 1 in the done function, at the end resp would still be 0. I had a look at the explanation here https://www.sitepoint.com/demystifying-javascript-variable-scope-hoisting/ and it seems to behave differently in the nginx implementation of JS. Would it be some OS setting outside of NGINX preventing this from working as it normally would in javascript? Any dependency on an OS package? Thanks! BR Alex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281699#msg-281699 From xeioex at nginx.com Fri Oct 26 09:02:20 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 26 Oct 2018 12:02:20 +0300 Subject: Variable scope in javascript module In-Reply-To: <813894e000d1e72e2b7b2c41dd6d989d.NginxMailingListEnglish@forum.nginx.org> References: <813894e000d1e72e2b7b2c41dd6d989d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2E34F1DD-F51D-4121-8C93-8A04E62C6927@nginx.com> Hi Alex, Can you, please, share your code? You can also try to play with njs code in the command line interface. For example: njs interactive njs 0.2.3 v. -> the properties and prototype methods of v. 
type console.help() for more information >>function my(){var g = 'init'; console.log(g); (function() {g = 'anon'})(); console.log(g) }; my() 'init' 'anon' undefined CLI, is also available in docker docker run -i -t nginx:latest /usr/bin/njs > On 26 Oct 2018, at 09:16, alweiss wrote: > > Hi team !, > Regarding the sample here : > https://www.nginx.com/blog/batching-api-requests-nginx-plus-javascript-module/ > > I have an issue trying to use JS module : variable hoisting and global/local > scope doesn't behave as expected. When i try to reassign the resp variable > (as you do after declaring it), the value assigned in the done function is > not brought outside of the done function to the bacthAPI function. > So if resp is initialised with 0 and reassign as 1 in the done funtion, at > the end, resp would = 0. > I had a look to explanation here > https://www.sitepoint.com/demystifying-javascript-variable-scope-hoisting/ > and seems to behave differently in nginx implementation of JS. > > Would it be some OS settings outside of NGINX preventing this to work as it > should normally work with javascript ? Any dependency on an OS package ? > > Thanks ! > BR > Alex > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281699#msg-281699 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Fri Oct 26 09:16:40 2018 From: francis at daoine.org (Francis Daly) Date: Fri, 26 Oct 2018 10:16:40 +0100 Subject: Nginx image_filter problem. 
In-Reply-To: <166aec76aca.ac390910105142.7534381597407456358@darkbears.com> References: <166aec76aca.ac390910105142.7534381597407456358@darkbears.com> Message-ID: <20181026091640.sepa2qjv64b5lajv@daoine.org> On Fri, Oct 26, 2018 at 10:39:56AM +0530, Mahesh Biloniya wrote: Hi there, > #./configure?--sbin-path=/usr/bin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-? ? ?path=/var/run/nginx.pid --with-pcre --with-select_module --with-http_image_filter_module=dynamic --modules-path=/etc/nginx/modules > But now i am getting "nginx: [emerg] unknown directive "image_filter" in /etc/nginx/nginx.conf:38" in nginx.conf file. That message means that the matching module is not available in the nginx being used. For a dynamic module, you must "load_module" it (http://nginx.org/r/load_module) to let nginx know that you want to use this module. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Oct 29 20:26:30 2018 From: nginx-forum at forum.nginx.org (alang) Date: Mon, 29 Oct 2018 16:26:30 -0400 Subject: New compilation failure Message-ID: <40020b11fde120bab04f7abe2b1dc9c1.NginxMailingListEnglish@forum.nginx.org> Hello, I have an nginx and modsecurity that is installed via our ansible environment. It has been running fine for over half a year with no issues. This morning I ran it and worked fine. This afternoon I ran it and it is erroring. The environment is git version controlled, so nothing changed in the scripts. 
The following command is run: cd /root/nginx-1.13.4 && ./configure --user=www-data --group=www-data --prefix=/opt/nginx --with-debug --with-ipv6 --with-http_ssl_module --with-http_gzip_static_module --with-http_stub_status_module --with-cc-opt=-Wno-error --with-ld-opt= --add-module=/usr/lib/modsecurity-nginx-connector --add-module=/opt/ecolane/rails/.rvm/gems/ruby-2.3.1/gems/passenger-5.0.28/src/nginx_module && make && make install And the, looks to be, relevant error generated during the make is: 5.0.28/buildout/common/libboost_oxt.a -lstdc++ -lpthread -lm -lrt -lpcre -lssl -lcrypto -ldl -lz \ -Wl,-E objs/addon/src/ngx_http_modsecurity_module.o: In function `ngx_http_modsecurity_create_ctx': /usr/lib/modsecurity-nginx-connector/src/ngx_http_modsecurity_module.c:258: undefined reference to `msc_new_transaction_with_id' collect2: error: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make[1]: Leaving directory `/root/nginx-1.13.4' make: *** [build] Error 2 Any ideas what the issue is or where to dig? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281731,281731#msg-281731 From nginx-forum at forum.nginx.org Mon Oct 29 20:28:01 2018 From: nginx-forum at forum.nginx.org (alang) Date: Mon, 29 Oct 2018 16:28:01 -0400 Subject: New compilation failure In-Reply-To: <40020b11fde120bab04f7abe2b1dc9c1.NginxMailingListEnglish@forum.nginx.org> References: <40020b11fde120bab04f7abe2b1dc9c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thinking there may have been a change with the ModSecurity Nginx Connector ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281731,281732#msg-281732 From jonathan.esterhazy at gmail.com Mon Oct 29 20:44:17 2018 From: jonathan.esterhazy at gmail.com (Jonathan Esterhazy) Date: Mon, 29 Oct 2018 13:44:17 -0700 Subject: large request body in njs Message-ID: Hello! I am trying to use njs (ngx_http_js_module) to modify POSTed request data before sending to an upstream api. 
Using the req.requestBody function works fine for small requests, but for larger ones causes this error: [error] 14#14: *18 js exception: Error: request body is in a file If I was using the Lua module, I could use ngx.req.get-body_file function to get this data, but there doesn't seem to be any way to do that in njs. Did I miss something? Is there a way to access the data or find out the filename? -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Mon Oct 29 23:38:51 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 30 Oct 2018 02:38:51 +0300 Subject: nginx.conf-2018 In-Reply-To: <76d41c7d-f8b3-5577-8fdd-86809253ccde@nginx.com> References: <76d41c7d-f8b3-5577-8fdd-86809253ccde@nginx.com> Message-ID: On 28/08/2018 19:37, Maxim Konovalov wrote: > Hello, > > As some of you probably know we are doing a conference in Atlanta in > October 8 - 11 this year. You can find its full agenda here: > > https://www.nginx.com/nginxconf/2018/agenda/ > [...] Here is a channel where you can find videos from the conf: https://www.youtube.com/user/NginxInc/videos -- Maxim Konovalov From xeioex at nginx.com Tue Oct 30 09:13:58 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 30 Oct 2018 12:13:58 +0300 Subject: large request body in njs In-Reply-To: References: Message-ID: <326B361B-24E3-4C31-BA2F-C28C364ADF8E@nginx.com> > On 29 Oct 2018, at 23:44, Jonathan Esterhazy wrote: > > Hello! > > I am trying to use njs (ngx_http_js_module) to modify POSTed request data before sending to an upstream api. Using the req.requestBody function works fine for small requests, but for larger ones causes this error: > > [error] 14#14: *18 js exception: Error: request body is in a file > > If I was using the Lua module, I could use ngx.req.get-body_file function to get this data, but there doesn't seem to be any way to do that in njs. Did I miss something? Is there a way to access the data or find out the filename? Hi Jonathan! 
You have two options here: 1) you can increase the client buffers According to the documentation: http://nginx.org/en/docs/njs/reference.html#http r.requestBody returns the client request body if it has not been written to a temporary file. To ensure that the client request body is in memory, its size should be limited by client_max_body_size, and a sufficient buffer size should be set using client_body_buffer_size. 2) you can open the file with the client's request using the request_body_file variable (http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_body_file) var fs = require('fs'); var large_body = fs.readFileSync(r.variables.request_body_file) > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.esterhazy at gmail.com Tue Oct 30 14:08:24 2018 From: jonathan.esterhazy at gmail.com (Jonathan Esterhazy) Date: Tue, 30 Oct 2018 07:08:24 -0700 Subject: large request body in njs In-Reply-To: <326B361B-24E3-4C31-BA2F-C28C364ADF8E@nginx.com> References: <326B361B-24E3-4C31-BA2F-C28C364ADF8E@nginx.com> Message-ID: <02e23caa-2954-40c0-b013-ec2a8b473db9@Spark> Yes, these suggestions worked. Thanks! On Oct 30, 2018, 2:14 AM -0700, Dmitry Volyntsev , wrote: > > > > On 29 Oct 2018, at 23:44, Jonathan Esterhazy wrote: > > > > Hello! > > > > I am trying to use njs (ngx_http_js_module) to modify POSTed request data before sending to an upstream API. Using the req.requestBody function works fine for small requests, but for larger ones causes this error: > > > > [error] 14#14: *18 js exception: Error: request body is in a file > > > > If I was using the Lua module, I could use the ngx.req.get_body_file function to get this data, but there doesn't seem to be any way to do that in njs. Did I miss something? Is there a way to access the data or find out the filename? > > > Hi Jonathan! 
> > You have two options here: > > 1) you can increase the client buffers > > According to the documentation: > http://nginx.org/en/docs/njs/reference.html#http > > r.requestBody > returns the client request body if it has not been written to a temporary file. To ensure that the client request body is in memory, its size should be limited by client_max_body_size, and a sufficient buffer size should be set using client_body_buffer_size. > > 2) you can open the file with the client's request using the request_body_file variable (http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_body_file) > > var fs = require('fs'); > var large_body = fs.readFileSync(r.variables.request_body_file) > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Oct 30 15:07:40 2018 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 30 Oct 2018 18:07:40 +0300 Subject: njs 0.2.5 release Message-ID: <4a551dc1-b094-f02f-9503-d050ad653bf2@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release proceeds to extend the coverage of the ECMAScript 5.1 specification. - arguments object is added. So, it is possible to write functions which can take an arbitrary number of arguments as well as wrappers for njs built-in functions. function concat(sep) { var args = Array.prototype.slice.call(arguments, 1); return args.join(sep); } > concat(' ', 'Hello', 'World', '!') 'Hello World !' 
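As a further sketch of what the arguments object enables (plain ES5, so it runs in njs as well as Node.js; the greet function and its behaviour are illustrative only, not part of the release):

```javascript
// concat() as in the announcement: the first argument is the
// separator, and the rest are joined with it.
function concat(sep) {
    var args = Array.prototype.slice.call(arguments, 1);
    return args.join(sep);
}

// A variadic wrapper: arguments.length lets a function branch on how
// many arguments it actually received, without declaring them all.
function greet(name) {
    if (arguments.length === 0) {
        return 'Hello!';
    }
    return concat(' ', 'Hello', name, '!');
}

console.log(greet());        // Hello!
console.log(greet('World')); // Hello World !
```

The same Array.prototype.slice.call(arguments, n) idiom works for any built-in wrapper that needs to forward a variable tail of arguments.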
You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.2.5                                       30 Oct 2018

nginx modules:

    *) Bugfix: fixed counting pending events in stream module.
    *) Bugfix: fixed s.off() in stream module.
    *) Bugfix: fixed processing of data chunks in js_filter in stream
       module.
    *) Bugfix: fixed http status and contentType getter in http module.
    *) Bugfix: fixed http response and parent getters in http module.

Core:

    *) Feature: arguments object support.
    *) Feature: non-integer fractions support.
    *) Improvement: handling non-array values in Array.prototype.slice().
    *) Bugfix: fixed Array.prototype.length setter.
    *) Bugfix: fixed njs_array_alloc() for length > 2**31.
    *) Bugfix: handling int overflow in njs_array_alloc() on 32bit archs.
    *) Bugfix: fixed code size mismatch error message.
    *) Bugfix: fixed delete operator in a loop.
    *) Bugfix: fixed Object.getOwnPropertyDescriptor() for complex object
       (inherited from Array and string values).
    *) Bugfix: fixed Object.prototype.hasOwnProperty() for non-object
       properties.
    *) Bugfix: miscellaneous additional bugs have been fixed.

From nginx-forum at forum.nginx.org Tue Oct 30 16:07:50 2018
From: nginx-forum at forum.nginx.org (alweiss)
Date: Tue, 30 Oct 2018 12:07:50 -0400
Subject: Variable scope in javascript module
In-Reply-To: <2E34F1DD-F51D-4121-8C93-8A04E62C6927@nginx.com>
References: <2E34F1DD-F51D-4121-8C93-8A04E62C6927@nginx.com>
Message-ID:

Hi Dmitry, thanks for your reply.

Here is my code, called using "js_content authorize;" from a location {}.
I want to say: "if at least one subrequest answers HTTP/200, set
final_status to 200 and, at the end, return 200".

First thing is that I must declare requestCount with var.
If I don't, I get a '"requestCount" is not defined' error. I thought that
in JavaScript you could declare a variable just by using its name, without
the "var" keyword, to make it global. Same for final_status if not declared
with var. But using var, I'm not able to update the final status from the
"done" function. I think I'm missing something in the hoisting behaviour of
JavaScript? Thanks!

The request for REST answers 200, as well as OASS, so we should "break"
after the first subrequest, but final_status next to the subrequest is
always evaluated to 403 even if the previous subrequest returned 200. The
only case where it works is if the last subrequest returns 200, which
doesn't rely on the final_status value.

CODE:

function authorize(req, res) {
    req.warn('Variables init ...');
    var n = 0;
    var final_status = 403;
    var servicesCodes = ['rest','oass'];
    var requestCount = servicesCodes.length;

    function done(reply) { // Callback for completed subrequests ...
        n++;
        reply.warn('at start of done function, n is :' + n);
        if (n < requestCount) { //| final_status !=200) {
            reply.warn('status of subrequest is :' + reply.status);
            if (reply.status == 200) {
                reply.warn('lets set final_status to 200');
                final_status = 200;
                reply.warn('Value of final_status :' + final_status);
                reply.warn('!!! We return 200 because we have this one at least, no matter if other are 404 !!!');
                res.return(200);
            }
        } else { // last response
            reply.warn('status of last subrequest is :' + reply.status);
            if (reply.status == 200) {
                reply.warn('lets set final_status for last subrequest to 200');
                final_status = 200;
                reply.warn('Value of final_status after last subrequest:' + final_status);
                reply.warn('!!! We return the final 200 !!!');
                res.return(200);
            } else { // we did not get any 200
                reply.warn('!!!
We return the final 403 !!!');
                res.return(403);
            }
        }
    };

    req.warn('n is :' + n);
    req.warn('final_status is : ' + final_status);
    req.warn('servicesCodes is : ' + servicesCodes);
    req.warn('requestCount is : ' + requestCount);

    for (var i = 0; i < requestCount; i++) {
        req.warn('Final status before sending subrequest to next service is ' + final_status)
        if (final_status == 200) {
            req.warn('We stop here because we have the 200 !!!');
            break;
        } else {
            req.warn('subrequest is on : ' + servicesCodes[i]);
            req.subrequest("/" + servicesCodes[i] + "/TestDevice1.html", '', done);
        }
    }
}

OUTPUT:

2018/10/30 16:00:10 [warn] 9220#9220: *15 js: Variables init ...
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: n is :0
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: final_status is : 403
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: servicesCodes is : rest,oass
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: requestCount is : 2
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: Final status before sending subrequest to next service is 403
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: subrequest is on : rest
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: Final status before sending subrequest to next service is 403
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: subrequest is on : oass
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: at start of done function, n is :1
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: status of subrequest is :200
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: lets set final_status to 200
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: Value of final_status :200
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: !!! We return 200 because we have this one at least, no matter if other are 404 !!!
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: at start of done function, n is :2
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: status of last subrequest is :404
2018/10/30 16:00:10 [warn] 9220#9220: *15 js: !!! We return the final 403 !!!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281746#msg-281746

From nginx-forum at forum.nginx.org Tue Oct 30 16:58:53 2018
From: nginx-forum at forum.nginx.org (alweiss)
Date: Tue, 30 Oct 2018 12:58:53 -0400
Subject: Variable scope in javascript module
In-Reply-To:
References: <2E34F1DD-F51D-4121-8C93-8A04E62C6927@nginx.com>
Message-ID: <20292965e6b788d3c3f3f0b2d7dbdfe2.NginxMailingListEnglish@forum.nginx.org>

Here is a sample that works with the js command-line interpreter but not
with njs:

function my() {
    resp = "Start";
    console.log('Initial resp is ' + resp);

    function done() {
        resp += "AndContinue";
        console.log('In loop resp is ' + resp)
    }

    done();
}
resp = 'empty'
my();
console.log('End Resp is : ' + resp)

With js:

root at linux3:/etc/nginx# js test.js
Initial resp is Start
In loop resp is StartAndContinue
End Resp is : StartAndContinue

With njs:

root at linux3:/etc/nginx# njs test.js
ReferenceError: "resp" is not defined in 16

Don't know why it doesn't work the same in both runtimes.
Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281747#msg-281747

From xeioex at nginx.com Tue Oct 30 17:09:22 2018
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 30 Oct 2018 20:09:22 +0300
Subject: Variable scope in javascript module
In-Reply-To: <20292965e6b788d3c3f3f0b2d7dbdfe2.NginxMailingListEnglish@forum.nginx.org>
References: <2E34F1DD-F51D-4121-8C93-8A04E62C6927@nginx.com> <20292965e6b788d3c3f3f0b2d7dbdfe2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <905b9aeb-8656-6cba-92dc-376264706924@nginx.com>

On 30.10.2018 19:58, alweiss wrote:
> Here is a sample that works with the js command-line interpreter but not
> with njs:
>
> function my() {
>     resp = "Start";
>     console.log('Initial resp is ' + resp);
>
>     function done() {
>         resp += "AndContinue";
>         console.log('In loop resp is ' + resp)
>     }
>
>     done();
> }
> resp = 'empty'
> my();
> console.log('End Resp is : ' + resp)
>
> With js:
>
> root at linux3:/etc/nginx# js test.js
> Initial resp is Start
> In loop resp is StartAndContinue
> End Resp is : StartAndContinue
>
> With njs:
>
> root at linux3:/etc/nginx# njs test.js
> ReferenceError: "resp" is not defined in 16
>
> Don't know why it doesn't work the same in both runtimes.

Currently, njs requires all the variables to be declared. Adding
'var resp;' at the beginning of the file helps.

docker run -i -t nginx:latest /usr/bin/njs

interactive njs 0.2.4

v.<Tab> -> the properties and prototype methods of v.
type console.help() for more information

>> var resp; function my() { resp = "Start"; console.log('Initial resp is ' + resp); function done() { resp += "AndContinue"; console.log('In loop resp is ' + resp)}; done()}; resp = 'empty'; my(); console.log('End Resp is : ' + resp)
'Initial resp is Start'
'In loop resp is StartAndContinue'
'End Resp is : StartAndContinue'
undefined
>>

>
> Thanks
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281747#msg-281747
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

From vbart at nginx.com Tue Oct 30 17:09:50 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 30 Oct 2018 20:09:50 +0300
Subject: Variable scope in javascript module
In-Reply-To: <20292965e6b788d3c3f3f0b2d7dbdfe2.NginxMailingListEnglish@forum.nginx.org>
References: <2E34F1DD-F51D-4121-8C93-8A04E62C6927@nginx.com> <20292965e6b788d3c3f3f0b2d7dbdfe2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8811175.D9n8V8J923@vbart-workstation>

On Tuesday 30 October 2018 12:58:53 alweiss wrote:
> Here is a sample that works with the js command-line interpreter but not
> with njs:
>
> function my() {
>     resp = "Start";
>     console.log('Initial resp is ' + resp);
>
>     function done() {
>         resp += "AndContinue";
>         console.log('In loop resp is ' + resp)
>     }
>
>     done();
> }
> resp = 'empty'
> my();
> console.log('End Resp is : ' + resp)
>
> With js:
>
> root at linux3:/etc/nginx# js test.js
> Initial resp is Start
> In loop resp is StartAndContinue
> End Resp is : StartAndContinue
>
> With njs:
>
> root at linux3:/etc/nginx# njs test.js
> ReferenceError: "resp" is not defined in 16
>
> Don't know why it doesn't work the same in both runtimes.
[..]

njs implements the "strict mode" of JS:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode

This is stated at the beginning of the documentation:
http://nginx.org/en/docs/njs/

wbr, Valentin V.
Bartenev

From nginx-forum at forum.nginx.org Tue Oct 30 21:55:20 2018
From: nginx-forum at forum.nginx.org (alweiss)
Date: Tue, 30 Oct 2018 17:55:20 -0400
Subject: Variable scope in javascript module
In-Reply-To: <8811175.D9n8V8J923@vbart-workstation>
References: <8811175.D9n8V8J923@vbart-workstation>
Message-ID: <1f7f944accf214074aa88d39a7b48098.NginxMailingListEnglish@forum.nginx.org>

Thanks guys, this is a first step. I moved the declarations to the top of
my script:

********START SCRIPT *************

var n = 0;
var final_status = 403;
var servicesCodes = ['rest','oass'];
var requestCount = servicesCodes.length;

function authorize(req, res) {
    req.warn('Variables init ...');

    function done(reply) { // Callback for completed subrequests ...
        n++;
        reply.warn('at start of done function, n is :' + n);
        if (n < requestCount) { //| final_status !=200) {
            reply.warn('status of subrequest is :' + reply.status);
            if (reply.status == 200) {
                reply.warn('lets set final_status to 200');
                final_status = 200;
                reply.warn('Value of final_status :' + final_status);
                reply.warn('!!! We return 200 because we have this one at least, no matter if other are 404 !!!');
                res.return(200);
            }
        } else { // last response
            reply.warn('status of last subrequest is :' + reply.status);
            if (reply.status == 200) {
                reply.warn('lets set final_status for last subrequest to 200');
                final_status = 200;
                reply.warn('Value of final_status after last subrequest:' + final_status);
                reply.warn('!!! We return the final 200 !!!');
                res.return(200);
            } else { // we did not get any 200
                reply.warn('!!!
We return the final 403 !!!');
                res.return(403);
            }
        }
    };

    req.warn('n is :' + n);
    req.warn('final_status is : ' + final_status);
    req.warn('servicesCodes is : ' + servicesCodes);
    req.warn('requestCount is : ' + requestCount);

    for (var i = 0; i < requestCount; i++) {
        req.warn('Final status before sending subrequest to next service is ' + final_status)
        if (final_status == 200) {
            req.warn('We stop here because we have the 200 !!!');
            break;
        } else {
            req.warn('subrequest is on : ' + servicesCodes[i]);
            req.subrequest("/" + servicesCodes[i] + "/TestDevice1.html", '', done);
        }
    }
}

********END SCRIPT *************

However, when I run it, the result is as below. The surprising thing is the
order in which it is logged: as we go async, perhaps both requests are
started at the same time, so each one sees a starting value of 403 (not yet
updated). Could this be the problem? What could be the solution? Run the
subrequest without giving the done function as a callback and directly test
the return status?

2018/10/30 21:43:06 [warn] 9220#9220: *203 js: Variables init ...
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: n is :0
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: final_status is : 403
// This is set by the variable declaration
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: servicesCodes is : rest,oass
// Loop to subrequest for each service
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: requestCount is : 2
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: Final status before sending subrequest to next service is 403
// Start of the loop with rest: final_status is still the original value of 403
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: subrequest is on : rest
// it returns an HTTP/200 so final_status is configured to 200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: Final status before sending subrequest to next service is 403
// However, here final_status is seen as being 403, so we still go over the
// loop for the second service oass even if there is a break statement based
// on final_status == 200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: subrequest is on : oass
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: at start of done function, n is :1
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: status of subrequest is :200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: lets set final_status to 200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: Value of final_status :200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: !!! We return 200 because we have this one at least, no matter if other are 404 !!!
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: at start of done function, n is :2
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: status of last subrequest is :200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: lets set final_status for last subrequest to 200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: Value of final_status after last subrequest:200
2018/10/30 21:43:06 [warn] 9220#9220: *203 js: !!! We return the final 200 !!!
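[Editor's note: the ordering in the log output above can be reproduced outside nginx. In this plain-ES5 sketch, r.subrequest is replaced by an invented asynchronous mock; everything here is illustrative, not njs API. The loop body runs to completion before any callback fires, so every iteration still sees the initial 403.]

```javascript
var events = [];
var final_status = 403;

// Mock of a subrequest: delivers the reply on a later event-loop turn,
// the way nginx delivers subrequest completions.
function subrequest(uri, done) {
    setTimeout(function () { done({ status: 200 }); }, 0);
}

['rest', 'oass'].forEach(function (svc) {
    events.push('loop ' + svc + ' sees ' + final_status);  // always 403 here
    subrequest('/' + svc, function (reply) {
        final_status = reply.status;
        events.push('done ' + svc + ' set ' + final_status);
    });
});
// At this point both "loop" entries exist and no "done" entry does:
// the callbacks only run after the synchronous loop has finished.
```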
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281750#msg-281750

From vbart at nginx.com Tue Oct 30 22:27:40 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 31 Oct 2018 01:27:40 +0300
Subject: Variable scope in javascript module
In-Reply-To: <1f7f944accf214074aa88d39a7b48098.NginxMailingListEnglish@forum.nginx.org>
References: <8811175.D9n8V8J923@vbart-workstation> <1f7f944accf214074aa88d39a7b48098.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1955051.CHRA2nj8tr@vbart-laptop>

On Wednesday, 31 October 2018 00:55:20 MSK you wrote:
[..]
> However, when I run it, the result is as below. The surprising thing is
> the order in which it is logged: as we go async, perhaps both requests are
> started at the same time, so each one sees a starting value of 403 (not
> yet updated). Could this be the problem? What could be the solution? Run
> the subrequest without giving the done function as a callback and directly
> test the return status?
[..]

Subrequests are async. That allows you to do a lot of interesting stuff,
like in this example where two subrequests are run in parallel:
http://nginx.org/en/docs/njs/examples.html#fast_response

If you want to schedule the second subrequest only after the first one is
finished, then simply put your second subrequest call inside the done
callback of the first one.

wbr, Valentin V. Bartenev

From maxim at nginx.com Tue Oct 30 22:30:52 2018
From: maxim at nginx.com (Maxim Konovalov)
Date: Wed, 31 Oct 2018 01:30:52 +0300
Subject: Variable scope in javascript module
In-Reply-To: <1955051.CHRA2nj8tr@vbart-laptop>
References: <8811175.D9n8V8J923@vbart-workstation> <1f7f944accf214074aa88d39a7b48098.NginxMailingListEnglish@forum.nginx.org> <1955051.CHRA2nj8tr@vbart-laptop>
Message-ID:

On 31/10/2018 01:27, Valentin V. Bartenev wrote:
> On Wednesday, 31 October 2018 00:55:20 MSK you wrote:
> [..]
>
>> However, when I run it, the result is as below. The surprising thing is
>> the order in which it is logged: as we go async, perhaps both requests
>> are started at the same time, so each one sees a starting value of 403
>> (not yet updated). Could this be the problem? What could be the solution?
>> Run the subrequest without giving the done function as a callback and
>> directly test the return status?
> [..]
>
> Subrequests are async. That allows you to do a lot of interesting stuff,
> like in this example where two subrequests are run in parallel:
> http://nginx.org/en/docs/njs/examples.html#fast_response
>

Some additional stuff to this topic:
https://github.com/nginxinc/nginx-openid-connect

-- Maxim Konovalov

From nginx-forum at forum.nginx.org Tue Oct 30 22:47:49 2018
From: nginx-forum at forum.nginx.org (alweiss)
Date: Tue, 30 Oct 2018 18:47:49 -0400
Subject: Variable scope in javascript module
In-Reply-To: <1955051.CHRA2nj8tr@vbart-laptop>
References: <1955051.CHRA2nj8tr@vbart-laptop>
Message-ID:

My problem is that the services can be one, two ... ten, etc., so it is not
easy to place them in the callback of the previous subrequest ...

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281753#msg-281753

From nginx-forum at forum.nginx.org Tue Oct 30 23:22:28 2018
From: nginx-forum at forum.nginx.org (alweiss)
Date: Tue, 30 Oct 2018 19:22:28 -0400
Subject: Variable scope in javascript module
In-Reply-To:
References:
Message-ID: <4ea8bced6ec0cf12e1490811eca0df79.NginxMailingListEnglish@forum.nginx.org>

Ok, got it! Thanks much Valentin, Maxim and Dmitry for your guidance.

I decided to go the same way as the nginx sample
(https://www.nginx.com/blog/batching-api-requests-nginx-plus-javascript-module/)
with the resp var, which is concatenated with the result of each subrequest:
my final_status starts with 403 at init time and I add (+=) each subrequest
result, so with two subrequests ending in 200 we end up with
final_status = 403200200.
At the end of the loop, I call the evaluateStatus function (to make the
evaluation after all subrequests end) to test whether final_status contains
200, and we set res.return(200) or res.return(403) based on that.

Here is the code (I still need to do some cleanup of the logging). By the
way, do you have any advice on tuning for better performance?

var n = 0;
var final_status = '403';
var servicesCodes = ['rest','oass'];
var requestCount = servicesCodes.length;

function authorize(req, res) {
    req.warn('Variables init ...');

    function done(reply) { // Callback for completed subrequests ...
        n++;
        reply.warn('at start of done function, n is : ' + n);
        if (n < requestCount) { //| final_status !=200) {
            reply.warn('status of subrequest is : ' + reply.status);
            if (reply.status == 200) {
                reply.warn('lets set final_status to 200');
                final_status += '200';
                reply.warn('Value of final_status : ' + final_status);
                // reply.warn('!!! We return 200 because we have this one at least, no matter if other are 404 !!!');
                // res.return(200);
            }
        } else { // last response
            reply.warn('status of last subrequest is :' + reply.status);
            if (reply.status == 200) {
                reply.warn('lets set final_status for last subrequest to 200');
                final_status += '200';
                reply.warn('Value of final_status after last subrequest: ' + final_status);
            } else { // we did not get any 200
                reply.warn('!!! We dont insert 200 as we dont have ...
');
                reply.warn('Value of final_status after last subrequest: ' + final_status);
            }
            // Send the final result
            evaluateStatus(final_status);
        }

        function evaluateStatus(status) {
            if (final_status.includes('200')) {
                res.return(200)
            } else {
                res.return(403)
            }
        }
    };

    req.warn('n is :' + n);
    req.warn('final_status is : ' + final_status);
    req.warn('servicesCodes is : ' + servicesCodes);
    req.warn('requestCount is : ' + requestCount);

    for (var i = 0; i < requestCount; i++) {
        req.warn('Entering loop for ' + servicesCodes[i])
        req.subrequest("/" + servicesCodes[i] + "/TestDevice1.html", '', done);
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281699,281754#msg-281754

From vbart at nginx.com Tue Oct 30 23:56:28 2018
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 31 Oct 2018 02:56:28 +0300
Subject: Variable scope in javascript module
In-Reply-To:
References: <1955051.CHRA2nj8tr@vbart-laptop>
Message-ID: <5525120.auL3T4ATUT@vbart-laptop>

On Wednesday, 31 October 2018 01:47:49 MSK alweiss wrote:
> My problem is that the services can be one, two ... ten, etc., so it is
> not easy to place them in the callback of the previous subrequest ...
>

Actually it's not a problem. Here's an example:

function authorize(r) {
    var n = 0;
    var svcs = ['one', 'two', 'three'];

    var callNextService = function() {
        function done(reply) {
            if (reply.status == 200) {
                r.return(200);
                return;
            }
            callNextService();
        }

        if (n == svcs.length) {
            r.return(403);
            return;
        }

        r.subrequest("/" + svcs[n++], '', done);
    }

    callNextService();
}

wbr, Valentin V. Bartenev
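[Editor's footnote: the sequential-chaining pattern above can be exercised outside nginx by stubbing the njs request object. Everything below except the callNextService logic — makeMockR, the responses table, the result objects — is invented for this sketch; in nginx the real r.subrequest(uri, args, callback) and r.return(status) would be used.]

```javascript
// Stand-in for the njs request object: subrequest() answers from a fixed
// table (synchronously, for simplicity) and return() records the final
// status instead of sending a response.
function makeMockR(responses, record) {
    return {
        subrequest: function (uri, args, done) {
            done({ status: responses[uri] || 404 });
        },
        return: function (status) { record.status = status; }
    };
}

// Same chaining as in the example above: each service is queried only
// after the previous one has answered, and the chain stops early on 200.
function authorize(r) {
    var n = 0;
    var svcs = ['one', 'two', 'three'];

    var callNextService = function () {
        if (n == svcs.length) {      // every service refused
            r.return(403);
            return;
        }
        r.subrequest('/' + svcs[n++], '', function (reply) {
            if (reply.status == 200) {
                r.return(200);       // first success wins; no more calls
                return;
            }
            callNextService();
        });
    };

    callNextService();
}

var allowed = {};
authorize(makeMockR({ '/two': 200 }, allowed));   // '/one' fails, '/two' succeeds

var denied = {};
authorize(makeMockR({}, denied));                 // nothing answers 200
```

With the mock table { '/two': 200 }, the chain queries '/one' (404), then '/two' (200), and never touches '/three'.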