From dukedougal at gmail.com Wed May 1 00:06:12 2019 From: dukedougal at gmail.com (Duke Dougal) Date: Wed, 1 May 2019 10:06:12 +1000 Subject: Cannot get secure link with expires to work In-Reply-To: <20190430235414.GA9796@haller.ws> References: <20190430235414.GA9796@haller.ws> Message-ID: should be: curl http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg&expires=2147483647 i.e. curl "http://127.0.0.1/html/index.html?md5=${md5}&expires=${expiry}" Patrick Yes, you're correct, there was a missing ampersand in the curl query, but it still doesn't work. Any further ideas? thanks On Wed, May 1, 2019 at 9:50 AM Patrick <201904-nginx at jslf.app> wrote: > On 2019-05-01 09:14, Duke Dougal wrote: > > ubuntu at ip-172-31-34-191:/var/www$ curl > > > http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQgexpires=2147483647 > > should be: > > curl > http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg&expires=2147483647 > > i.e. > curl "http://127.0.0.1/html/index.html?md5=${md5}&expires=${expiry}" > > > Patrick > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Wed May 1 00:56:32 2019 From: 201904-nginx at jslf.app (Patrick) Date: Wed, 1 May 2019 08:56:32 +0800 Subject: Cannot get secure link with expires to work In-Reply-To: References: <20190430235414.GA9796@haller.ws> Message-ID: <20190501005632.GA11469@haller.ws> On 2019-05-01 10:06, Duke Dougal wrote: > Any further ideas? 1) The URL returns 200 when the secure-link config is disabled? url="http://127.0.0.1/html/index.html" curl -sI $url 2) The secret, expiry, and uri are the same from md5 generation to the cURL request?
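[Editor's note] Patrick's two checks can be scripted end to end. A minimal sketch follows; the secret "w00w00", the "%s %s %s" token format, and the fixed expiry are illustrative assumptions and must match the server's secure_link_md5 directive exactly:

```shell
#!/bin/sh
# Hypothetical inputs -- align these with your secure_link_md5 directive.
secret="w00w00"
uri="/html/index.html"
expires=2147483647   # fixed far-future expiry for a reproducible example

# Token recipe for a directive like: secure_link_md5 "$secure_link_expires $uri w00w00";
# raw md5 -> base64 -> URL-safe alphabet ('+/' -> '-_'), '=' padding stripped.
md5=$(printf '%s %s %s' "$expires" "$uri" "$secret" \
      | openssl md5 -binary | openssl base64 | tr '+/' '-_' | tr -d '=')

# Quote the URL: unquoted, the shell treats '&' as "run in background" and the
# expires parameter never reaches nginx -- the exact bug fixed above.
url="http://127.0.0.1${uri}?md5=${md5}&expires=${expires}"
echo "$url"
# curl -sI "$url"   # expect 200 once the secure-link config is enabled
```

If nginx still rejects the link, one of the three inputs (secret, expiry, uri) differs between this script and the secure_link_md5 format, which is exactly Patrick's second question.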
Patrick From forum at ruhnke.cloud Thu May 2 14:17:29 2019 From: forum at ruhnke.cloud (forum at ruhnke.cloud) Date: Thu, 02 May 2019 14:17:29 +0000 Subject: Mailman is giving me 554 5.7.1 because of using Mail-Relay In-Reply-To: <20190429125253.GT1877@mdounin.ru> References: <20190429125253.GT1877@mdounin.ru> Message-ID: <9ee8ebef6eea1ffb2335e552b1ea009a@ruhnke.cloud> Hi, thx for your reply. You say > [...] To post to > the mailing list, you have to be subscribed [...] I am subscribed. I did it per mail to nginx-request at nginx.org so mailman knows my envelope-from and my from-header. And I can talk to nginx-request@ without problems. From nginx-forum at forum.nginx.org Thu May 2 23:12:19 2019 From: nginx-forum at forum.nginx.org (jarstewa) Date: Thu, 02 May 2019 19:12:19 -0400 Subject: Max_fails for proxy_pass without an upstream block Message-ID: <01f10e410294bf9bcc1e756153217596.NginxMailingListEnglish@forum.nginx.org> Is there an equivalent of max_fails (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails) if I'm using proxy_pass without an upstream block? E.g. http { server { resolver 10.0.0.2 valid=5s; set $upstream_server http://foo.bar:80; location ~* \.(html)$ { proxy_pass $upstream_server; } } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284004,284004#msg-284004 From dukedougal at gmail.com Thu May 2 23:33:01 2019 From: dukedougal at gmail.com (Duke Dougal) Date: Fri, 3 May 2019 09:33:01 +1000 Subject: Cannot get secure link with expires to work In-Reply-To: <20190501005632.GA11469@haller.ws> References: <20190430235414.GA9796@haller.ws> <20190501005632.GA11469@haller.ws> Message-ID: >>>>On 1 May 2019, at 10:56 am, Patrick <201904-nginx at jslf.app> wrote: >>>>On 2019-05-01 10:06, Duke Dougal wrote: >>>>Any further ideas? >>>>1) The URL returns 200 when the secure-link config is disabled? 
>>>> url="http://127.0.0.1/html/index.html" >>>> curl -sI $url >>>>2) The secret, expiry, and uri are the same from md5 generation to the cURL request? Yes, the URL returns 200 when the secure-link config is disabled. >> The secret, expiry, and uri are the same from md5 generation to the cURL request? Could you please explain the question further? - I'm not sure how to check this, thanks. On Wed, May 1, 2019 at 10:52 AM Patrick <201904-nginx at jslf.app> wrote: > On 2019-05-01 10:06, Duke Dougal wrote: > > Any further ideas? > > 1) The URL returns 200 when the secure-link config is disabled? > > url="http://127.0.0.1/html/index.html" > curl -sI $url > > 2) The secret, expiry, and uri are the same from md5 generation to the > cURL request? > > > Patrick > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Fri May 3 00:39:59 2019 From: 201904-nginx at jslf.app (Patrick) Date: Fri, 3 May 2019 08:39:59 +0800 Subject: Cannot get secure link with expires to work In-Reply-To: References: <20190430235414.GA9796@haller.ws> <20190501005632.GA11469@haller.ws> Message-ID: <20190503003959.GB24400@haller.ws> On 2019-05-03 09:33, Duke Dougal wrote: > > The secret, expiry, and uri are the same from md5 generation to the > > cURL request? > > Could you please explain the question further? - I'm not sure how to check > this, thanks. Sure. Use shell variables -- e.g.
#!/bin/bash secret="w00w00" uri="/html/index.html" md5_format="%s %s %s" # must match format of secure_link_md5 now=$( date +%s ) expires=$(( $now + 3600 )) md5=$(printf "$md5_format" $expires $uri $secret | openssl md5 -binary | openssl base64 | tr +/ -_ | tr -d = ) url="http://127.0.0.1${uri}?md5=${md5}&expires=${expires}" echo $url curl -I "$url" # nginx config root /srv/www/localhost; location /html/ { secure_link $arg_md5,$arg_expires; secure_link_md5 "$secure_link_expires $uri w00w00"; if ($secure_link = "") { return 403; } if ($secure_link = "0") { return 410; } } From henson at acm.org Fri May 3 00:52:21 2019 From: henson at acm.org (Paul B. Henson) Date: Thu, 2 May 2019 17:52:21 -0700 Subject: custom 502 error for stacked proxies Message-ID: <20190503005221.GF4185@bender.it-sys.cpp.edu> So, I've got a need for a reverse proxy where first it tries server A; if it gets a 404 from server A it should try server B, and then just return whatever happens with server B. I've got this config so far: location /_nginx_/ { internal; root /var/www/localhost/nginx; } location / { proxy_intercept_errors on; error_page 403 /_nginx_/error_403.html; error_page 404 = @server_b; error_page 405 /_nginx_/error_405.html; error_page 500 /_nginx_/error_500.html; error_page 502 /_nginx_/error_503.html; error_page 503 /_nginx_/error_503.html; proxy_pass https://serverA; proxy_redirect http://$host/ /; proxy_set_header Host $host; proxy_http_version 1.1; proxy_connect_timeout 3m; proxy_read_timeout 3m; proxy_buffers 1024 4k; } location @server_b { proxy_intercept_errors off; proxy_pass https://serverB; proxy_redirect http://$host/ /; proxy_set_header Host $host; proxy_http_version 1.1; proxy_connect_timeout 3m; proxy_read_timeout 3m; proxy_buffers 1024 4k; } This seems to work *except* when it fails to connect to server B, in which case it gives a standard nginx 502 error page rather than a custom page. 
I've tried all kinds of things, from setting proxy_intercept_errors on for the @server_b location and adding error_page configuration like in the / location, and a bunch of other stuff I can't even remember exactly, but no matter what I do I always get the stock nginx 502 rather than the custom error page. Ideally I'd like to just pass through whatever error comes from B, unless nginx fails completely to connect to B, in which case I'd like to pass the local custom error page rather than the default nginx page. What am I missing? Thanks much... From pluknet at nginx.com Fri May 3 10:47:40 2019 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 3 May 2019 13:47:40 +0300 Subject: custom 502 error for stacked proxies In-Reply-To: <20190503005221.GF4185@bender.it-sys.cpp.edu> References: <20190503005221.GF4185@bender.it-sys.cpp.edu> Message-ID: > On 3 May 2019, at 03:52, Paul B. Henson wrote: > > So, I've got a need for a reverse proxy where first it tries server A; > if it gets a 404 from server A it should try server B, and then just > return whatever happens with server B. 
> > I've got this config so far: > > location /_nginx_/ { > internal; > root /var/www/localhost/nginx; > } > > location / { > > proxy_intercept_errors on; > error_page 403 /_nginx_/error_403.html; > error_page 404 = @server_b; > error_page 405 /_nginx_/error_405.html; > error_page 500 /_nginx_/error_500.html; > error_page 502 /_nginx_/error_503.html; > error_page 503 /_nginx_/error_503.html; > proxy_pass https://serverA; > proxy_redirect http://$host/ /; > proxy_set_header Host $host; > proxy_http_version 1.1; > proxy_connect_timeout 3m; > proxy_read_timeout 3m; > proxy_buffers 1024 4k; > } > > location @server_b { > proxy_intercept_errors off; > proxy_pass https://serverB; > proxy_redirect http://$host/ /; > proxy_set_header Host $host; > proxy_http_version 1.1; > proxy_connect_timeout 3m; > proxy_read_timeout 3m; > proxy_buffers 1024 4k; > } > > This seems to work *except* when it fails to connect to server B, in which > case it gives a standard nginx 502 error page rather than a custom page. Hello, you may want to try recursive error pages in location / {} with error_page 502 in @server_b. See for details: http://nginx.org/r/recursive_error_pages -- Sergey Kandaurov From nginx-forum at forum.nginx.org Fri May 3 15:46:57 2019 From: nginx-forum at forum.nginx.org (blackout) Date: Fri, 03 May 2019 11:46:57 -0400 Subject: NGINX reverse Proxy with Kerberos auth Message-ID: <723cb1e241048409472fe5448585b7a3.NginxMailingListEnglish@forum.nginx.org> Hi, is it possible to make NGINX a reverse proxy with Kerberos auth? NGINX should do all the authentication stuff, so the user doesn't talk directly with the Kerberos server. The goal is: - the user connects to webmail.example.com - an authentication window pops up - the user types the login information (user and password, no domain) - NGINX does all the auth stuff - the user logs in to webmail directly without a second authentication. Kerberos and webmail are Windows servers.
At the moment we do this with Windows TMG, but for the future we want another proxy. thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284012,284012#msg-284012 From henson at acm.org Fri May 3 19:48:30 2019 From: henson at acm.org (Paul B. Henson) Date: Fri, 3 May 2019 12:48:30 -0700 Subject: custom 502 error for stacked proxies In-Reply-To: References: <20190503005221.GF4185@bender.it-sys.cpp.edu> Message-ID: <20190503194829.GG4185@bender.it-sys.cpp.edu> On Fri, May 03, 2019 at 01:47:40PM +0300, Sergey Kandaurov wrote: > you may want to try recursive error pages in location / {} > with error_page 502 in @server_b. Sweet, that did indeed do the trick. Thank you very much for the suggestion. From pgnet.dev at gmail.com Sat May 4 15:11:30 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Sat, 4 May 2019 08:11:30 -0700 Subject: after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ? Message-ID: after upgrading my working nginx instance from v1.15.x to nginx -V nginx version: nginx/1.16.0 (local build) built with OpenSSL 1.1.1b 26 Feb 2019 ... running with php-fpm from php -v PHP 7.3.6-dev (cli) (built: Apr 23 2019 19:34:32) ( NTS ) Copyright (c) 1997-2018 The PHP Group Zend Engine v3.3.6-dev, Copyright (c) 1998-2018 Zend Technologies with Zend OPcache v7.3.6-dev, Copyright (c) 1999-2018, by Zend Technologies my local site's no longer accessible.
standard log reports, 2019/05/04 07:51:50 [error] 6510#6510: *8 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: dev01.pgnd.loc, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm.sock:", host: "dev01.pgnd.loc" in my config, I've got -- as usual, fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; and my expected/target index.php is in its usual path /srv/www/test03/public/index.php but turning on debug, 2019/05/04 07:51:50 [debug] 6510#6510: *8 http script var: "/index.php" 2019/05/04 07:51:50 [debug] 6510#6510: *8 fastcgi param: "SCRIPT_FILENAME: /usr/local/html/index.php" the SCRIPT_FILENAME path is incorrect. there appears to be an issue with $realpath_root. While I'm digging locally for the problem ... question(s): -- has anything changed in usage of $realpath_root? -- are there any php v7.3.6 related issues? -- any other hints? From mikydevel at yahoo.fr Sat May 4 19:50:41 2019 From: mikydevel at yahoo.fr (Mik J) Date: Sat, 4 May 2019 19:50:41 +0000 (UTC) Subject: Capture clear text with Nginx reverse proxy References: <792219505.6916367.1556999441476.ref@mail.yahoo.com> Message-ID: <792219505.6916367.1556999441476@mail.yahoo.com> Hello, I often try to solve problems between Nginx and the server communicating in https client <= https => Nginx <= https => server And I don't have access to the server, or its source code is closed, so it's not possible to troubleshoot there. Is there a way to see in clear text what is exchanged between the Nginx reverse proxy and the server? Thank you -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mikydevel at yahoo.fr Sat May 4 21:28:06 2019 From: mikydevel at yahoo.fr (Mik J) Date: Sat, 4 May 2019 21:28:06 +0000 (UTC) Subject: Reverse proxy and 502 bad gateway References: <85167561.6944749.1557005286697.ref@mail.yahoo.com> Message-ID: <85167561.6944749.1557005286697@mail.yahoo.com> Hello, I'm successfully accessing a server/site behind my reverse proxy with the following URL https://app.mydomain.org/screens/dashboard.html#/MainDashboard But the following URL gives a 502 Bad Gateway https://app.mydomain.org/screens/webui/resource/swccopolldata.json I don't understand why beyond resource it sends me an error 502. Does anyone have an idea about what's wrong? My Nginx config looks like this upstream backend-app { server 192.168.0.2:443; } server { listen 80; listen [::]:80; listen 443 ssl; listen 4443 ssl; listen [::]:4443 ssl; listen [::]:443 ssl; server_name app.mydomain.org; ... proxy_ssl_verify off; location / { try_files $uri @proxy; proxy_ssl_verify off; access_log /var/log/nginx/app.mydomain.org.access.log; error_log /var/log/nginx/app.mydomain.org.error.log; } location @proxy { proxy_pass https://backend-app; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgnet.dev at gmail.com Sun May 5 05:14:39 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Sat, 4 May 2019 22:14:39 -0700 Subject: after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ? In-Reply-To: References: Message-ID: <1c399683-a750-b182-0f58-76d474ee53c9@gmail.com> On 5/4/19 8:11 AM, PGNet Dev wrote: > but turning on debug, > > 2019/05/04 07:51:50 [debug] 6510#6510: *8 http script var: "/index.php" > 2019/05/04 07:51:50 [debug] 6510#6510: *8 fastcgi param: "SCRIPT_FILENAME: /usr/local/html/index.php" > > the SCRIPT_FILENAME path is incorrect. there appears to be an issue with $realpath_root.
> > While I'm digging locally for the problem ... question(s): > > -- has anything changed in usage of $realpath_root? > -- are there any php v7.3.6 related issues? > -- any other hints? I replaced the $realpath_root var with a literal path string, and everything works again as expected. Dropping back to 1.15 branch, all's working again -- with the var. Rebuilding PHP had no effect, neither did dropping back to earlier PHP branch(es). Finally, I trashed all of nginx, and did a clean checkout/build. And it works. Of course. No concrete idea what specifically was the problem ... but gone now. From mikydevel at yahoo.fr Sun May 5 09:38:30 2019 From: mikydevel at yahoo.fr (Mik J) Date: Sun, 5 May 2019 09:38:30 +0000 (UTC) Subject: Capture clear text with Nginx reverse proxy In-Reply-To: <0102016a85b9e5d2-9b4cffb2-080b-4b7d-9605-65e49b80378d-000000@eu-west-1.amazonses.com> References: <792219505.6916367.1556999441476.ref@mail.yahoo.com> <792219505.6916367.1556999441476@mail.yahoo.com> <8E1F00E9-E7BD-4F53-8FB7-A3A18836EE53@supercoders.com.au> <792219505.6916367.1556999441476@mail.yahoo.com> <0102016a85b9e5d2-9b4cffb2-080b-4b7d-9605-65e49b80378d-000000@eu-west-1.amazonses.com> Message-ID: <1247608591.7071483.1557049110063@mail.yahoo.com> Thank you for your answer Stuart. I'm on an Openbsd platform and it's not available for it. It seems to me a bit complicated because I'll have to insert it between the Nginx reverse proxy and the end server. Have you used it ? Le dimanche 5 mai 2019 ? 04:01:54 UTC+2, Andrew Stuart a ?crit : >> Is there a way to see in clear text what is exchanged between the Nginx reverse proxy and the server ? Maybe something like this? https://mitmproxy.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Sun May 5 09:41:23 2019 From: sca at andreasschulze.de (A. Schulze) Date: Sun, 5 May 2019 11:41:23 +0200 Subject: after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ? 
In-Reply-To: <1c399683-a750-b182-0f58-76d474ee53c9@gmail.com> References: <1c399683-a750-b182-0f58-76d474ee53c9@gmail.com> Message-ID: <765cc64b-5cd9-a1f6-cfce-c322d7bee43b@andreasschulze.de> Am 05.05.19 um 07:14 schrieb PGNet Dev: > Dropping back to 1.15 branch, all's working again -- with the var. For example, the diff between 1.15.12 and 1.16.0 is *only* the changed version number. So, be precise about which 1.15 version is working for you. Andreas From pgnet.dev at gmail.com Sun May 5 14:32:18 2019 From: pgnet.dev at gmail.com (PGNet Dev) Date: Sun, 5 May 2019 07:32:18 -0700 Subject: after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ? In-Reply-To: <765cc64b-5cd9-a1f6-cfce-c322d7bee43b@andreasschulze.de> References: <1c399683-a750-b182-0f58-76d474ee53c9@gmail.com> <765cc64b-5cd9-a1f6-cfce-c322d7bee43b@andreasschulze.de> Message-ID: On 5/5/19 2:41 AM, A. Schulze wrote: > > > Am 05.05.19 um 07:14 schrieb PGNet Dev: > >> Dropping back to 1.15 branch, all's working again -- with the var. > For example, the diff between 1.15.12 and 1.16.0 is *only* the changed version number. > So, be precise about which 1.15 version is working for you. Here, I'd not upgraded these couple of boxes to latest -- instead, I had 1.15.10 & 1.15.9 in place. Both exhibited the same behavior re: the var, and both 'recovered' after I did _clean_ checkout & builds. Sounds like the problem was on my end, tho odd that it's 'just' a build issue. In any case, pebkac, I think. From nginx-forum at forum.nginx.org Sun May 5 14:33:43 2019 From: nginx-forum at forum.nginx.org (spraguey) Date: Sun, 05 May 2019 10:33:43 -0400 Subject: Incorrect log location error Message-ID: <0c284b5d5ef7047ee4e7263c7049e74e.NginxMailingListEnglish@forum.nginx.org> I can't seem to get past an error when specifying the log location for a website. In my site.conf for the website... access_log /webspace/mydomain.com/log/access.log; The /webspace/mydomain.com/log folder definitely exists at that location. 
access.log is not created there. In my error.log file... "/usr/share/nginx//webspace/mydomain.com/log/access.log" failed (2: No such file or directory) while logging request, client: X.X.X.X, server: www.mydomain.com..." It looks like it is trying to place it as a subfolder in the /usr/share folder instead of at the root. Any thoughts on what I am doing wrong here? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284035,284035#msg-284035 From peter_booth at me.com Sun May 5 17:24:57 2019 From: peter_booth at me.com (Peter Booth) Date: Sun, 5 May 2019 13:24:57 -0400 Subject: Capture clear text with Nginx reverse proxy In-Reply-To: <1247608591.7071483.1557049110063@mail.yahoo.com> References: <792219505.6916367.1556999441476.ref@mail.yahoo.com> <792219505.6916367.1556999441476@mail.yahoo.com> <8E1F00E9-E7BD-4F53-8FB7-A3A18836EE53@supercoders.com.au> <792219505.6916367.1556999441476@mail.yahoo.com> <0102016a85b9e5d2-9b4cffb2-080b-4b7d-9605-65e49b80378d-000000@eu-west-1.amazonses.com> <1247608591.7071483.1557049110063@mail.yahoo.com> Message-ID: Mik, I'm not going to get into the openbsd question, but I can tell you some of the different things that I have done to solve this kind of problem in the past. Your environmental constraints will impact which is feasible: 1. Use tcpdump to capture packets 2. Use netcat as an intercepting proxy 3. Use muffin as an HTTP-aware proxy 4. Use one or more virtual machines to host proxies 5. Use a cheap ($40) smart switch and a spanning port to mirror traffic 6. Use Goliath as a ruby proxy There are many more ways to get the same results. Sent from my iPhone > On May 5, 2019, at 5:38 AM, Mik J via nginx wrote: > > Thank you for your answer Stuart. > I'm on an Openbsd platform and it's not available for it. > > It seems to me a bit complicated because I'll have to insert it between the Nginx reverse proxy and the end server. Have you used it? > > > On Sunday, 5 May 2019 at 04:01:54 UTC+2, Andrew Stuart wrote: > > > >> Is there a way to see in clear text what is exchanged between the Nginx reverse proxy and the server ? > > > Maybe something like this? > > https://mitmproxy.org/ > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL:
From julian at jlbprof.com Sun May 5 20:28:11 2019 From: julian at jlbprof.com (Julian Brown) Date: Sun, 5 May 2019 15:28:11 -0500 Subject: More than one host Message-ID: I am having a problem and not sure which side of the ocean it is on (Nginx or Apache). I am internally setting up an Nginx reverse proxy that will eventually go public. I have two domains I want Nginx to proxy for, both go to different machines. The second domain is for a bugzilla host, bugzilla.conf: server { server_name bugzilla.example.com; listen *:80; access_log /var/log/nginx/bugzilla.access.log; error_log /var/log/nginx/bugzilla.error.log debug; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host bugzilla.example.com; proxy_pass https://INTERNAL_IP /; } } It does send the request to the correct machine, but I do not know if it is sending the correct hostname or not. On the machine I am sending to is an Apache instance with multiple development versions of our server and bugzilla. The request is getting handled by what is apparently the default vhost of the Apache server, not the bugzilla vhost. In other words the wrong data is being sent out because it is going to the wrong end point on Apache.
In the log for that vhost on Apache I see: 1 192.168.1.249 - - [05/May/2019:14:43:28 -0500] "GET /bugzilla/ HTTP/1.0" 200 4250 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHT 2 Execution Time 8579 the dash after 200 4250 is the 'host'. I believe it is seeing or defaulting to "-" and not http://bugzilla.example.com. In my Nginx config I set proxy_set_header Host to what I want it to send as bugzilla.example.com, but I am not sure what is getting sent. Is proxy_set_header Host the proper way to send it as "bugzilla.example.com" so that Apache sees it coming on that server name to activate the correct vhost? It could be a problem in the Apache vhost config, but if I direct my browser with /etc/hosts directly at Apache it works correctly; it is only with proxying from Nginx that I see this behavior. Any comments? Thanx -------------- next part -------------- An HTML attachment was scrubbed... URL: From bee.lists at gmail.com Sun May 5 23:01:34 2019 From: bee.lists at gmail.com (Bee.Lists) Date: Sun, 5 May 2019 19:01:34 -0400 Subject: Incorrect log location error In-Reply-To: <0c284b5d5ef7047ee4e7263c7049e74e.NginxMailingListEnglish@forum.nginx.org> References: <0c284b5d5ef7047ee4e7263c7049e74e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Notice the double // before webspace > On May 5, 2019, at 10:33 AM, spraguey wrote: > > "/usr/share/nginx//webspace/mydomain.com/log/access.log" failed (2: No such > file or directory) while logging request, client: X.X.X.X, server: > www.mydomain.com..." Cheers, Bee From nginx-forum at forum.nginx.org Sun May 5 23:20:19 2019 From: nginx-forum at forum.nginx.org (spraguey) Date: Sun, 05 May 2019 19:20:19 -0400 Subject: Incorrect log location error In-Reply-To: <0c284b5d5ef7047ee4e7263c7049e74e.NginxMailingListEnglish@forum.nginx.org> References: <0c284b5d5ef7047ee4e7263c7049e74e.NginxMailingListEnglish@forum.nginx.org> Message-ID: I was able to resolve this, but I am not sure exactly why.
To simplify my post, I removed the code that sets a variable for the base path. That's where the problem is. set $homedir /webspace/mydomain.com; root ${homedir}/www; access_log ${homedir}/log/access.log; The root works, but access_log did not. NGINX would prepend /usr/share/nginx/ to it every time and result in the error from my original post. If I don't use the variable, it works fine. access_log /webspace/mydomain.com/log/access.log; It kind of defeats the purpose of having a variable, but at least it is working now. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284035,284041#msg-284041 From enderulusoy at gmail.com Mon May 6 09:10:26 2019 From: enderulusoy at gmail.com (ender ulusoy) Date: Mon, 6 May 2019 12:10:26 +0300 Subject: redirect to another domain based on IP address Message-ID: Hi Folks, We have a website under heavy development, so we divide the site into 3 branches: stage, demo and main. What our developers want from me is: "every request from office ip address to main domain must redirect to stage." For example, if a developer makes a request to www.aaa.com, it'll simply redirect all requests to stage.aaa.com if the request is sent from the office IP address. How can I achieve that? Thank you. -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Mon May 6 10:27:24 2019 From: 201904-nginx at jslf.app (Patrick) Date: Mon, 6 May 2019 18:27:24 +0800 Subject: redirect to another domain based on IP address In-Reply-To: References: Message-ID: <20190506102724.GA27394@haller.ws> On 2019-05-06 12:10, ender ulusoy wrote: > We have a website under heavy development, so we divide the site into > 3 branches: stage, demo and main. What our developers want from me is: > "every request from office ip address to main domain must redirect to > stage." We need more information about your architecture to be able to show you ways to do this.
It might be more cost-effective to just chat with an engineer for hire. Patrick From r at roze.lv Mon May 6 11:17:48 2019 From: r at roze.lv (Reinis Rozitis) Date: Mon, 6 May 2019 14:17:48 +0300 Subject: redirect to another domain based on IP address In-Reply-To: References: Message-ID: <000701d503fd$56716960$03543c20$@roze.lv> > We have a website under heavily development. So we divide the site to 3 branches stage, demo and main. What our developers want from me is : "every request from office ip address to main domain must redirect to stage." If there is a single IP you can use the if directive (http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if ) in case of multiple - map (http://nginx.org/en/docs/http/ngx_http_map_module.html ) or geo will be a better approach. A generic example with if: if ($remote_addr = 127.0.0.1) { return 301 http://stage.aaa.com$request_uri; } rr From anoopalias01 at gmail.com Mon May 6 12:15:54 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Mon, 6 May 2019 17:45:54 +0530 Subject: More than one host In-Reply-To: References: Message-ID: Try proxy_set_header Host $host; On Mon, May 6, 2019 at 5:15 PM Julian Brown wrote: > I am having a problem and not sure which side of the ocean it is on (Nginx > or Apache). > > I am internally setting up an Nginx reverse proxy that will eventually go > public. > > I have two domains I want Nginx to proxy for, both go to different > machines. > > The second domain is for a bugzilla host, bugzilla.conf: > > server { > server_name bugzilla.example.com; > > listen *:80; > > access_log /var/log/nginx/bugzilla.access.log; > error_log /var/log/nginx/bugzilla.error.log debug; > > location / { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host bugzilla.example.com; > proxy_pass https://INTERNAL_IP /; > } > } > > It does send the request to the correct machine, but I do not know if it > is sending the correct hostname or not. 
> > On the machine I am sending to is an Apache instance with multiple > development versions of our server and bugzilla. The request is getting > handled by what is apparently the default vhost of the Apache server, not > the bugzilla vhost. In other words the wrong data is being sent out > because it is going to the wrong end point on Apache. > > In the log for that vhost on Apache I see: > > 1 192.168.1.249 - - [05/May/2019:14:43:28 -0500] "GET /bugzilla/ > HTTP/1.0" 200 4250 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) > AppleWebKit/537.36 (KHT > 2 Execution Time 8579 > > the dash after 200 4250 is the 'host" I believe it is seeing or defaulting > to "-" and not http://bugzilla.example.com. > > In my Nginx config I set proxy_set_header Host to what I want it to send > as bugzilla.example.com, but I am not sure what is getting sent. > > Is proxy_set_header Host, the proper way to send it as " > bugzilla.example.com" so that Apache sees it coming on that server name > to activate the correct vhost? > > It could be a problem in the Apache vhost config, but if I direct my > browser with /etc/hosts directly at Apache it works correctly it is only > with proxying from Nginx that I see this behavior. > > Any comments? > > Thanx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r at roze.lv Mon May 6 12:35:10 2019 From: r at roze.lv (Reinis Rozitis) Date: Mon, 6 May 2019 15:35:10 +0300 Subject: More than one host In-Reply-To: References: Message-ID: <000501d50408$25458460$6fd08d20$@roze.lv> > 1 192.168.1.249 - - [05/May/2019:14:43:28 -0500] "GET /bugzilla/ HTTP/1.0" 200 4250 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHT > 2 Execution Time 8579 > > the dash after 200 4250 is the 'host'. I believe it is seeing or defaulting to "-" and not http://bugzilla.example.com. Are you sure that it is the 'host' (as in, have you configured LogFormat in Apache that way)? Typically in a combined log format that spot is the Referer (http://httpd.apache.org/docs/current/mod/mod_log_config.html ) which, if empty, is logged as "-" rr From enderulusoy at gmail.com Mon May 6 13:18:40 2019 From: enderulusoy at gmail.com (ender ulusoy) Date: Mon, 6 May 2019 16:18:40 +0300 Subject: redirect to another domain based on IP address In-Reply-To: <20190506102724.GA27394@haller.ws> References: <20190506102724.GA27394@haller.ws> Message-ID: @Patrick Thanks, here is the config I have (short version) upstream aaa { 192.168.1.1:80; 192.168.1.2:80; } upstream stage { 192.168.1.3:80; 192.168.1.4:80; } server { server_name www.aaa.com; location / { add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://aaa; } } server { server_name stage.aaa.com; location / { add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://stage; } } I want that if any connection request is sent to www.aaa.com/$request_uri from the office IP (200.100.50.1), then Nginx will redirect all requests to stage.aaa.com/$request_uri All other requests from the
world will go to www.aaa.com On Mon, May 6, 2019, 1:23 PM Patrick <201904-nginx at jslf.app> wrote: > On 2019-05-06 12:10, ender ulusoy wrote: > > We have a website under heavy development. So we divide the site into > > 3 branches: stage, demo and main. What our developers want from me is: > > "every request from office ip address to main domain must redirect to > > stage." > > We need more information about your architecture to be able to show you > ways to do this. > > It might be more cost-effective to just chat with an engineer for hire. > > > Patrick > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Mon May 6 13:49:48 2019 From: 201904-nginx at jslf.app (Patrick) Date: Mon, 6 May 2019 21:49:48 +0800 Subject: redirect to another domain based on IP address In-Reply-To: References: <20190506102724.GA27394@haller.ws> Message-ID: <20190506134948.GA360@haller.ws> On 2019-05-06 16:18, ender ulusoy wrote: > @Patrick Thanks, here is the config I have (short version) Ok, now you need to know the IPs that the main office uses for outbound HTTP requests. However, it seems unlikely that *everyone* at the main office wants to be on staging. You're probably better off setting up split-horizon DNS at the main office and forcing staging on only the people who volunteer to be on staging. Though if devs flip back and forth between environments, it's probably worthwhile to show them how to edit their local machine's hosts file.
Patrick From enderulusoy at gmail.com Mon May 6 13:47:34 2019 From: enderulusoy at gmail.com (ender ulusoy) Date: Mon, 6 May 2019 16:47:34 +0300 Subject: redirect to another domain based on IP address In-Reply-To: <20190506134948.GA360@haller.ws> References: <20190506102724.GA27394@haller.ws> <20190506134948.GA360@haller.ws> Message-ID: Main office ip 200.100.50.10 And it's shared office. I can not setup any dns services there. All the developers come from this ip. On Mon, May 6, 2019, 4:45 PM Patrick <201904-nginx at jslf.app> wrote: > On 2019-05-06 16:18, ender ulusoy wrote: > > @Patrick Thanks, here is the config I have (short version) > > Ok, now you need to know the IPs that the main office uses for outbound > HTTP requests. > > However, it seems unlikely that *everyone* at the main office wants to > be on staging. You're probably better off setting up split-horizon DNS > at the main office and forcing staging on only the people who volunteer > to be on staging. > > Though if devs flip back and forth between environments, it's probably > worthwhile to show them how to edit their local machine's hosts file. > > > Patrick > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Mon May 6 14:12:53 2019 From: 201904-nginx at jslf.app (Patrick) Date: Mon, 6 May 2019 22:12:53 +0800 Subject: redirect to another domain based on IP address In-Reply-To: References: <20190506102724.GA27394@haller.ws> <20190506134948.GA360@haller.ws> Message-ID: <20190506141253.GA991@haller.ws> On 2019-05-06 16:47, ender ulusoy wrote: > Main office ip 200.100.50.10 > > And it's shared office. I can not setup any dns services there. All > the developers come from this ip. 
map $remote_addr $is_web_dev { 200.100.50.10 1; default 0; } server { server_name www.aaa.com; if ($is_web_dev) { return 301 http://stage.aaa.com$uri ; } # rest of normal prod config This config is probably going to cause something to blow up in the future because it is not doing what the user requested -- if the user wanted staging they should have just used http://stage.aaa.com in the first place. Patrick From francis at daoine.org Mon May 6 15:17:43 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 6 May 2019 16:17:43 +0100 Subject: Cannot get secure link with expires to work In-Reply-To: References: Message-ID: <20190506151743.bcmpcot4ptha7aqz@daoine.org> On Wed, May 01, 2019 at 09:14:11AM +1000, Duke Dougal wrote: Hi there, > Hello I've tried every possible way I can think of to make secure links > work with expires. When I use your config on my test machine, it works for me. So it looks like what you have is fundamentally correct; there is obviously something wrong somewhere, but it is likely something small. > No matter what I try, I cannot get it to work when I try to uses the expire > time. Can you copy-paste the command you use to test things; and perhaps show the log line for that request? > The command that fails: > > ubuntu at ip-172-31-34-191:/var/www$ curl > http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQgexpires=2147483647 As was pointed out - there should be & in the middle of that. If you "merely" add &, you will probably see mostly-the-same response -- because your shell will read an unescaped & as "end of command". My guess is that your log line will show the request for /html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg, which will (correctly) return 403. What happens if you do $ curl 'http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg&expires=2147483647' (with &, and with the whole argument shell-quoted in '')? 
Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon May 6 15:40:44 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 6 May 2019 16:40:44 +0100 Subject: Capture clear text with Nginx reverse proxy In-Reply-To: <792219505.6916367.1556999441476@mail.yahoo.com> References: <792219505.6916367.1556999441476.ref@mail.yahoo.com> <792219505.6916367.1556999441476@mail.yahoo.com> Message-ID: <20190506154044.335fxqmtzbxqzckj@daoine.org> On Sat, May 04, 2019 at 07:50:41PM +0000, Mik J via nginx wrote: Hi there, > I often try to solve problems between Nginx and the server communicating in https > client <= https => Nginx <= https => server > Is there a way to see in clear text what is exchanged between the Nginx reverse proxy and the server ? Not directly, no. None of these suggestions have been tested by me, so decide how useful they might be before expending effort on trying them. You could try enabling the debug log (it will be big) and seeing what it says that nginx is sending. Or you could possibly try to modify your nginx code to put the plaintext content that nginx decrypts and encrypts somewhere that you can read later. Another possibility would be if you have access to the server's private key -- then you could (in principle) capture the traffic and decrypt it yourself. Since you don't have access to the upstream server, that is less likely to be useful. I suspect that the most likely in-nginx way would be to change your nginx config so that it adds a "http" section, and then use "tcpdump" to watch that traffic. That is: currently, your config is something like server { listen 443 ssl; location / { proxy_pass https://upstream; } } If you change it to instead be something like server { listen 127.0.0.1:8888; location / { proxy_pass https://upstream; } } server { listen 443 ssl; location / { proxy_pass http://127.0.0.1:8888; } } then you could tcpdump to watch traffic to port 8888. 
That does *not* show what nginx is sending to upstream; but it should show the same sort of things that nginx would send to upstream. Perhaps that is close enough for your purposes. (Of course, if you are neither the ssl client nor the ssl server, the whole point of ssl is that you cannot see the plaintext.) f -- Francis Daly francis at daoine.org From francis at daoine.org Mon May 6 15:57:09 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 6 May 2019 16:57:09 +0100 Subject: More than one host In-Reply-To: References: Message-ID: <20190506155709.4y6cwjwo5jkh3qgy@daoine.org> On Sun, May 05, 2019 at 03:28:11PM -0500, Julian Brown wrote: Hi there, > The second domain is for a bugzilla host, bugzilla.conf: > > server { > location / { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host bugzilla.example.com; > proxy_pass https://INTERNAL_IP /; > } > } > > It does send the request to the correct machine, but I do not know if it is > sending the correct hostname or not. It should be sending the correct Host: header within the http request, but it may not be sending that same hostname in the TLS SNI communications. If your INTERNAL_IP web server expects SNI, things may go wrong there. Have a look at http://nginx.org/r/proxy_ssl_server_name, and maybe turn it on. > Is proxy_set_header Host, the proper way to send it as "bugzilla.example.com" > so that Apache sees it coming on that server name to activate the correct > vhost? Yes, unless you share https certs on the same IP:port; in which case you need the extra config. 
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon May 6 18:08:58 2019 From: nginx-forum at forum.nginx.org (bhagavathula) Date: Mon, 06 May 2019 14:08:58 -0400 Subject: Valgrind reporting issue in connection->addr_text Message-ID: <30e7f254e8b6f0c6042d128d51796115.NginxMailingListEnglish@forum.nginx.org> Hi, When running Valgrind on our NGINX module for errors, found the following errors: ==49784== Conditional jump or move depends on uninitialised value(s) ==49784== at 0x4C32D08: strlen (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) ==49784== by 0x6C3A328: apr_pstrdup (in /usr/lib/x86_64-linux-gnu/libapr-1.so.0.6.3) ==49784== by 0x6C3DB3D: apr_table_add (in /usr/lib/x86_64-linux-gnu/libapr-1.so.0.6.3) ==49784== by 0x611CC82: get_request_properties (ta_ngx_http_module.c:329) ==49784== by 0x611CE30: get_new_token (ta_ngx_http_module.c:351) ==49784== by 0x611CF55: get_token_helper (ta_ngx_http_module.c:374) ==49784== by 0x611D4BC: ta_post_read_request_helper (ta_ngx_http_module.c:486) ==49784== by 0x611D750: ta_post_read_request (ta_ngx_http_module.c:920) ==49784== by 0x1553E6: ngx_http_core_access_phase (ngx_http_core_module.c:1083) ==49784== by 0x150A34: ngx_http_core_run_phases (ngx_http_core_module.c:858) ==49784== by 0x150ADA: ngx_http_handler (ngx_http_core_module.c:841) ==49784== by 0x1594B0: ngx_http_process_request (ngx_http_request.c:1952) ==49784== Uninitialised value was created by a heap allocation ==49784== at 0x4C31E76: memalign (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) ==49784== by 0x4C31F91: posix_memalign (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) ==49784== by 0x14611F: ngx_memalign (ngx_alloc.c:57) ==49784== by 0x122D09: ngx_create_pool (ngx_palloc.c:23) ==49784== by 0x142CD6: ngx_event_accept (ngx_event_accept.c:161) ==49784== by 0x14D313: ngx_epoll_process_events (ngx_epoll_module.c:902) ==49784== by 0x14218D: ngx_process_events_and_timers (ngx_event.c:242) 
==49784== by 0x14C2A3: ngx_single_process_cycle (ngx_process_cycle.c:310) ==49784== by 0x1214E4: main (nginx.c:379) The code that is causing the error is as follows: const char *ip = (char *) (r->connection->addr_text).data; apr_table_add(request_table, (char *) TA_PROP_CLIENT_ADDR, ip); When printing the ip, which is supposed to be "127.0.0.1" (localhost), at times some garbage value is appended, like: 127.0.0.1 at 1\u000b0\t\u0006\u0003xW?\u0005 I am not able to understand why addr_text contains a garbage value. Can someone please help me? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284065,284065#msg-284065 From rpaprocki at fearnothingproductions.net Mon May 6 18:12:11 2019 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 6 May 2019 11:12:11 -0700 Subject: Valgrind reporting issue in connection->addr_text In-Reply-To: <30e7f254e8b6f0c6042d128d51796115.NginxMailingListEnglish@forum.nginx.org> References: <30e7f254e8b6f0c6042d128d51796115.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, addr_text is of type 'ngx_str_t': http://lxr.nginx.org/source/src/core/ngx_connection.h#0148, which provides both the char pointer and the length. It's not correct to cast that value to a char pointer directly: the bytes in data are only valid up to len and are not guaranteed to be NUL-terminated, so anything that calls strlen() on them (here, apr_pstrdup() inside apr_table_add(), per the Valgrind trace) can read past the end of the string -- which is both the garbage you see and the uninitialised read that Valgrind reports.
On Mon, May 6, 2019 at 11:09 AM bhagavathula wrote: > Hi, > > When running Valgrind on our NGINX module for errors, found the following > errors: > ==49784== Conditional jump or move depends on uninitialised value(s) > ==49784== at 0x4C32D08: strlen (in > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) > ==49784== by 0x6C3A328: apr_pstrdup (in > /usr/lib/x86_64-linux-gnu/libapr-1.so.0.6.3) > ==49784== by 0x6C3DB3D: apr_table_add (in > /usr/lib/x86_64-linux-gnu/libapr-1.so.0.6.3) > ==49784== by 0x611CC82: get_request_properties > (ta_ngx_http_module.c:329) > ==49784== by 0x611CE30: get_new_token (ta_ngx_http_module.c:351) > ==49784== by 0x611CF55: get_token_helper (ta_ngx_http_module.c:374) > ==49784== by 0x611D4BC: ta_post_read_request_helper > (ta_ngx_http_module.c:486) > ==49784== by 0x611D750: ta_post_read_request (ta_ngx_http_module.c:920) > ==49784== by 0x1553E6: ngx_http_core_access_phase > (ngx_http_core_module.c:1083) > ==49784== by 0x150A34: ngx_http_core_run_phases > (ngx_http_core_module.c:858) > ==49784== by 0x150ADA: ngx_http_handler (ngx_http_core_module.c:841) > ==49784== by 0x1594B0: ngx_http_process_request > (ngx_http_request.c:1952) > ==49784== Uninitialised value was created by a heap allocation > ==49784== at 0x4C31E76: memalign (in > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) > ==49784== by 0x4C31F91: posix_memalign (in > /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so) > ==49784== by 0x14611F: ngx_memalign (ngx_alloc.c:57) > ==49784== by 0x122D09: ngx_create_pool (ngx_palloc.c:23) > ==49784== by 0x142CD6: ngx_event_accept (ngx_event_accept.c:161) > ==49784== by 0x14D313: ngx_epoll_process_events (ngx_epoll_module.c:902) > ==49784== by 0x14218D: ngx_process_events_and_timers (ngx_event.c:242) > ==49784== by 0x14C2A3: ngx_single_process_cycle > (ngx_process_cycle.c:310) > ==49784== by 0x1214E4: main (nginx.c:379) > > The code that is causing the error is as follows: > const char *ip =(char *) (r->connection->addr_text).data; 
> apr_table_add(request_table, (char *) TA_PROP_CLIENT_ADDR, ip); > > When printing the ip which is supposed to be "127.0.0.1" (localhost), but > at > times some garbage value is appended like: > 127.0.0.1 at 1\u000b0\t\u0006\u0003xW?\u0005 > > I am not able to understand why addr_text contains garbage value, Can > someone pls help me. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,284065,284065#msg-284065 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From zzyzxd at gmail.com Mon May 6 19:08:44 2019 From: zzyzxd at gmail.com (Yuhao Zhang) Date: Mon, 6 May 2019 19:08:44 +0000 Subject: Disabling proxy_buffering not working Message-ID: Hi All, I am facing this issue where proxied server's response is buffered before sending back to the request client, even when proxy_buffering is disabled. I also tried setting "X-Accel-Buffering: no" header on the response, but it didn't work. I posted the issue on ingress-nginx github repo, since It is what I am using on Kubernetes. However, now I think the root cause is in the underlying nginx. The ingress controller did its job correctly, which is configuring nginx. The full story and a reproducible example can be found here: https://github.com/kubernetes/ingress-nginx/issues/4063 The nginx version used by the controller is 1.15.6 Thank you, Yuhao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon May 6 20:05:27 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 May 2019 23:05:27 +0300 Subject: Disabling proxy_buffering not working In-Reply-To: References: Message-ID: <20190506200527.GI1877@mdounin.ru> Hello! 
On Mon, May 06, 2019 at 07:08:44PM +0000, Yuhao Zhang wrote: > I am facing this issue where proxied server's response is > buffered before sending back to the request client, even when > proxy_buffering is disabled. > > I also tried setting "X-Accel-Buffering: no" header on the > response, but it didn't work. > > I posted the issue on ingress-nginx github repo, since It is > what I am using on Kubernetes. However, now I think the root > cause is in the underlying nginx. The ingress controller did its > job correctly, which is configuring nginx. > > The full story and a reproducible example can be found here: > https://github.com/kubernetes/ingress-nginx/issues/4063 > > The nginx version used by the controller is 1.15.6 >From the issue description it looks like you think that proxying with "proxy_buffering off;" should preserve HTTP transfer encoding chunks as received from the upstream server. It's not, chunk boundaries are not guaranteed to be preserved regardless of the buffering settings. Chunked transfer encoding is a property of a message as transferred between two HTTP entities, and can be modified by any HTTP intermediary. You should not assume it will be preserved. Quoting RFC 7230: Unlike Content-Encoding (Section 3.1.2.1 of [RFC7231]), Transfer-Encoding is a property of the message, not of the representation, and any recipient along the request/response chain MAY decode the received transfer coding(s) or apply additional transfer coding(s) to the message body, assuming that corresponding changes are made to the Transfer-Encoding field-value. Additional information about the encoding parameters can be provided by other header fields not defined by this specification. The "proxy_bufferring off;" means that nginx won't wait for the whole buffer to be filled before it will start sending the response to the client. 
But as long as nginx have more than one chunk received from the backend server, it will decode all the chunks and will send them to the client combined. -- Maxim Dounin http://mdounin.ru/ From zzyzxd at gmail.com Mon May 6 21:30:33 2019 From: zzyzxd at gmail.com (Yuhao Zhang) Date: Mon, 6 May 2019 21:30:33 +0000 Subject: Disabling proxy_buffering not working In-Reply-To: <20190506200527.GI1877@mdounin.ru> References: , <20190506200527.GI1877@mdounin.ru> Message-ID: Hi, Thank you for the explanation. I definitely need to learn more about the protocol spec. Would you help me understand why the "proxy_buffer_size" affects the result even when "proxy_buffering" is off? Also, in my network topology, there are no other layer 7 hops. nginx is the only thing talks HTTP, and it is directly connecting to the backend server. I have also verified that, if I bypass nginx and directly connect to the TCP port, everything works just fine. So the chunks are not combined before they reach nginx. Thanks, Yuhao ________________________________ From: nginx on behalf of Maxim Dounin Sent: Monday, May 6, 2019 15:05 To: nginx at nginx.org Subject: Re: Disabling proxy_buffering not working Hello! On Mon, May 06, 2019 at 07:08:44PM +0000, Yuhao Zhang wrote: > I am facing this issue where proxied server's response is > buffered before sending back to the request client, even when > proxy_buffering is disabled. > > I also tried setting "X-Accel-Buffering: no" header on the > response, but it didn't work. > > I posted the issue on ingress-nginx github repo, since It is > what I am using on Kubernetes. However, now I think the root > cause is in the underlying nginx. The ingress controller did its > job correctly, which is configuring nginx. 
> > The full story and a reproducible example can be found here: > https://github.com/kubernetes/ingress-nginx/issues/4063 > > The nginx version used by the controller is 1.15.6 >From the issue description it looks like you think that proxying with "proxy_buffering off;" should preserve HTTP transfer encoding chunks as received from the upstream server. It's not, chunk boundaries are not guaranteed to be preserved regardless of the buffering settings. Chunked transfer encoding is a property of a message as transferred between two HTTP entities, and can be modified by any HTTP intermediary. You should not assume it will be preserved. Quoting RFC 7230: Unlike Content-Encoding (Section 3.1.2.1 of [RFC7231]), Transfer-Encoding is a property of the message, not of the representation, and any recipient along the request/response chain MAY decode the received transfer coding(s) or apply additional transfer coding(s) to the message body, assuming that corresponding changes are made to the Transfer-Encoding field-value. Additional information about the encoding parameters can be provided by other header fields not defined by this specification. The "proxy_bufferring off;" means that nginx won't wait for the whole buffer to be filled before it will start sending the response to the client. But as long as nginx have more than one chunk received from the backend server, it will decode all the chunks and will send them to the client combined. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dukedougal at gmail.com Tue May 7 05:21:49 2019 From: dukedougal at gmail.com (Duke Dougal) Date: Tue, 7 May 2019 15:21:49 +1000 Subject: Cannot get secure link with expires to work In-Reply-To: <20190506151743.bcmpcot4ptha7aqz@daoine.org> References: <20190506151743.bcmpcot4ptha7aqz@daoine.org> Message-ID: Well you hit the mark thank you well done. The problem was that I needed to wrap the entire curl url in quotes. ugh. On Tue, May 7, 2019 at 1:17 AM Francis Daly wrote: > On Wed, May 01, 2019 at 09:14:11AM +1000, Duke Dougal wrote: > > Hi there, > > > Hello I've tried every possible way I can think of to make secure links > > work with expires. > > When I use your config on my test machine, it works for me. > > So it looks like what you have is fundamentally correct; there is > obviously something wrong somewhere, but it is likely something small. > > > No matter what I try, I cannot get it to work when I try to uses the > expire > > time. > > Can you copy-paste the command you use to test things; and perhaps show > the log line for that request? > > > The command that fails: > > > > ubuntu at ip-172-31-34-191:/var/www$ curl > > > http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQgexpires=2147483647 > > As was pointed out - there should be & in the middle of that. > > If you "merely" add &, you will probably see mostly-the-same response -- > because your shell will read an unescaped & as "end of command". > > My guess is that your log line will show the request for > /html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg, which will (correctly) > return 403. > > What happens if you do > > $ curl ' > http://127.0.0.1/html/index.html?md5=FsRb_uu5NsagF0hA_Z-OQg&expires=2147483647 > ' > > (with &, and with the whole argument shell-quoted in '')? 
> > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From enderulusoy at gmail.com Tue May 7 07:44:17 2019 From: enderulusoy at gmail.com (ender ulusoy) Date: Tue, 7 May 2019 10:44:17 +0300 Subject: redirect to another domain based on IP address In-Reply-To: <20190506141253.GA991@haller.ws> References: <20190506102724.GA27394@haller.ws> <20190506134948.GA360@haller.ws> <20190506141253.GA991@haller.ws> Message-ID: Patrick, thank you. Mapping works perfect. Today one of my developers ask me that he'll append ?domain=st when he wants to see the staging site and ?domain=www when he wants to see the production while testing end the end of the url. here is examples: request goes to staging if the domain=st parameter added from office ip : http://aaa.com/?domain=st request goes to production even from office ip if the domain=www parameter added http://aaa.com/?domain=www all requests go to staging from office ip if no domain parameter specified https://stage.aaa.com While other developers work fine with the solution you gave above this one is also a qas engineer who tests the old and new site functions same time. And wants to test functions on both sides one by one. Can I implement a location catching in to your if clause? Will it work inside the if block ? Patrick <201904-nginx at jslf.app>, 6 May 2019 Pzt, 17:08 tarihinde ?unu yazd?: > On 2019-05-06 16:47, ender ulusoy wrote: > > Main office ip 200.100.50.10 > > > > And it's shared office. I can not setup any dns services there. All > > the developers come from this ip. 
> > map $remote_addr $is_web_dev { > 200.100.50.10 1; > default 0; > } > > server { > server_name www.aaa.com; > > if ($is_web_dev) { > return 301 http://stage.aaa.com$uri ; > } > # rest of normal prod config > > > This config is probably going to cause something to blow up in the > future because it is not doing what the user requested -- if the user > wanted staging they should have just used http://stage.aaa.com in the > first place. > > > Patrick > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- um Gottes Willen! -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue May 7 09:11:04 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 7 May 2019 10:11:04 +0100 Subject: Cannot get secure link with expires to work In-Reply-To: References: <20190506151743.bcmpcot4ptha7aqz@daoine.org> Message-ID: <20190507091104.zdkay6ywpt7yytbk@daoine.org> On Tue, May 07, 2019 at 03:21:49PM +1000, Duke Dougal wrote: Hi there, > Well you hit the mark thank you well done. Good that you found the fix -- and that your nginx config was correct all along. > The problem was that I needed to wrap the entire curl url in quotes. Interesting -- had you tested this with a "normal" browser, you probably would have seen it working right away. The normal advice to "use curl to avoid hiding all of the things that browsers hide" will probably be modified to note "make sure your shell does not interfere with the request url". Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 7 11:51:41 2019 From: nginx-forum at forum.nginx.org (bhagavathula) Date: Tue, 07 May 2019 07:51:41 -0400 Subject: Valgrind reporting issue in connection->addr_text In-Reply-To: References: Message-ID: <2e948643d56d1860578131bee781f6c1.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for the response. 
We fixed the issue related to casting and still didn't see any change in behavior: intermittently, garbage values are still appended. However, when we tried to get the ip address by the alternate means below, the issue does not happen and Valgrind also reports no issue: struct sockaddr_in *sin = (struct sockaddr_in *)r->connection->sockaddr; char *ip = apr_palloc(conf->pool, sizeof(char)*INET_ADDRSTRLEN); inet_ntop(AF_INET, &(sin->sin_addr), ip, INET_ADDRSTRLEN); Can you please suggest whether there is any issue in the earlier approach, and whether using sockaddr is the preferable approach? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284065,284075#msg-284075 From mdounin at mdounin.ru Tue May 7 11:53:21 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 May 2019 14:53:21 +0300 Subject: Disabling proxy_buffering not working In-Reply-To: References: <20190506200527.GI1877@mdounin.ru> Message-ID: <20190507115321.GJ1877@mdounin.ru> Hello! On Mon, May 06, 2019 at 09:30:33PM +0000, Yuhao Zhang wrote: > Hi, Thank you for the explanation. I definitely need to learn > more about the protocol spec. Would you help me understand why > the "proxy_buffer_size" affects the result even when > "proxy_buffering" is off? This is because the proxy buffer size limits the amount of data nginx can read from the backend in one read() operation, and hence limits the maximum amount of data nginx will combine into a single chunk. > Also, in my network topology, there are no other layer 7 hops. > nginx is the only thing talks HTTP, and it is directly > connecting to the backend server. I have also verified that, if > I bypass nginx and directly connect to the TCP port, everything > works just fine. So the chunks are not combined before they > reach nginx. This doesn't really matter. HTTP does not provide any guarantees about transfer encoding, and you should not assume chunk boundaries will be preserved. They won't be.
-- Maxim Dounin http://mdounin.ru/ From vivek.solanki at einfochips.com Tue May 7 16:02:51 2019 From: vivek.solanki at einfochips.com (Vivek Solanki) Date: Tue, 7 May 2019 16:02:51 +0000 Subject: Upstream error Message-ID: Hi team, Hope you all are doing well. I need some assistance regarding nginx configuration. I am using nginx service as proxy_pass to other URL's When I am performing load testing on nginx server with high no. of requests. For few tests it is running fine, But after some time it shows below upstream error message (5XX errors). -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2019/05/07 04:20:38 [error] 18168#0: *684822 no live upstreams while connecting to upstream, client: XX.XX.XX.XX, server: _, request: "GET https://XXXXXXXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXX/XXXXXXXXXXX/XXXXX HTTP/1.1", upstream: " https://XXXXXXXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXX/XXXXXXXXXXX/XXXXX ", host: " XXXXXXXXXXXXXXXXXX.XYZ.com " 2019/05/07 04:20:38 [error] 18168#0: *684820 connect() failed (110: Connection timed out) while connecting to upstream, client: XX.XX.XX.XX, server: _, request: "GET /XXXXXXXXXXXXXXXXXX/XXXXXXXXXXX/XXXXXXXXX/XXXXXXXXXXX HTTP/1.1", upstream: "https://XXXXXXXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXX/XXXXXXXXXXX/XXXXX", host: "XXXXXXXXXXXXXXXXXX.XYZ.com" -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- After restarting nginx servers, it will again work fine. But this is not a proper solution. Please someone help me to resolve this issue permanently. Let me know if any further details required from my side. 
Vivek Solanki CloudOps Engineer ************************************************************************************************************************************************************* eInfochips Business Disclaimer: This e-mail message and all attachments transmitted with it are intended solely for the use of the addressee and may contain legally privileged and confidential information. If the reader of this message is not the intended recipient, or an employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution, copying, or other use of this message or its attachments is strictly prohibited. If you have received this message in error, please notify the sender immediately by replying to this message and please delete it from your computer. Any views expressed in this message are those of the individual sender unless otherwise stated. Company has taken enough precautions to prevent the spread of viruses. However the company accepts no liability for any damage caused by any virus transmitted by this email. ************************************************************************************************************************************************************* From brendan.doyle at oracle.com Tue May 7 22:26:16 2019 From: brendan.doyle at oracle.com (Brendan Doyle) Date: Tue, 7 May 2019 23:26:16 +0100 Subject: tcp stream load balancer not working on Oracle Linux 7.5 Message-ID: <3df7c1cb-246f-12a6-6d0d-bbb27865a8b0@oracle.com> Hi,

I'm trying to get a basic tcp load balancer working on OL:

cat /etc/oracle-release
Oracle Linux Server release 7.5

My config is very basic:

stream {
        upstream backend_stream {
                server 10.129.87.160:5000;
                server 10.129.87.120:5000;
        }
        server {
                listen 5000;
                proxy_pass backend_stream;
        }
}

On both 10.129.87.160 & 10.129.87.120 I run 'nc -l 5000' to start a listening process:

# ssh 10.129.87.160 "netstat -ntpl | grep 5000"
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1360/nc
tcp6       0      0 :::5000                 :::*                    LISTEN      1360/nc

# ssh 10.129.87.120 "netstat -ntpl | grep 5000"
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1360/nc
tcp6       0      0 :::5000                 :::*                    LISTEN      1360/nc

On my load balancer I can see the nginx master listening on port 5000:

# netstat -ntpl | grep nginx
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      22729/nginx: master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22729/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      22729/nginx: master

I use nmap to contact the listening process, first trying directly against one of the backend servers to make sure it is all working:

# nmap -p 5000 10.129.87.120
Starting Nmap 6.40 ( http://nmap.org ) at 2019-05-07 17:28 EDT
Nmap scan report for ovn87-120.us.oracle.com (10.129.87.120)
Host is up (0.00032s latency).
PORT     STATE SERVICE
5000/tcp open  upnp
MAC Address: 52:54:00:4A:4E:80 (QEMU Virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds

Now try the load balancer, whilst also snooping on the backend servers to see if the request is directed there:

# nmap -p 5000 10.129.87.162
Starting Nmap 6.40 ( http://nmap.org ) at 2019-05-07 17:30 EDT
Nmap scan report for ovn87-162 (10.129.87.162)
Host is up (0.00015s latency).
PORT     STATE SERVICE
5000/tcp open  upnp
MAC Address: 00:10:E0:8E:95:32 (Oracle)
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds

I get a response from the load balancer, but nothing is directed to either server. And nothing shows up in /var/log/nginx/access.log or /var/log/nginx/error.log, even with debug on.

Any ideas?
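One detail worth noting while debugging this: stream connections are never written to the http access log. The stream module has its own access_log directive that must be configured inside the stream block. A sketch using the same upstream as above (the log path and format name are illustrative):

```nginx
stream {
    # These variables come from the stream modules, not the http modules.
    log_format tcp_basic '$remote_addr [$time_local] $protocol $status '
                         '$bytes_sent $bytes_received $upstream_addr';
    access_log /var/log/nginx/stream-access.log tcp_basic;

    upstream backend_stream {
        server 10.129.87.160:5000;
        server 10.129.87.120:5000;
    }

    server {
        listen 5000;
        proxy_pass backend_stream;
    }
}
```

With this in place, any completed TCP connection to port 5000 should leave a log line showing which upstream the session was proxied to.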
Thanks Brendan From 201904-nginx at jslf.app Wed May 8 01:24:17 2019 From: 201904-nginx at jslf.app (Patrick) Date: Wed, 8 May 2019 09:24:17 +0800 Subject: tcp stream load balancer not working on Oracle Linux 7.5 In-Reply-To: <3df7c1cb-246f-12a6-6d0d-bbb27865a8b0@oracle.com> References: <3df7c1cb-246f-12a6-6d0d-bbb27865a8b0@oracle.com> Message-ID: <20190508012417.GA27216@haller.ws> On 2019-05-07 23:26, Brendan Doyle wrote: > I'm trying to get a basic tcp load balancer working on OL : > ... > # nmap -p 5000 10.129.87.162 This is probably not working as you expect because the default scan for nmap is a SYN scan -- since the TCP handshake is not complete, why would nginx connect to the upstream? Perhaps try your setup with a realistic load generator such as Tsung or TRex. https://en.wikipedia.org/wiki/Tsung https://trex-tgn.cisco.com/ Patrick From brendan.doyle at oracle.com Wed May 8 14:22:49 2019 From: brendan.doyle at oracle.com (Brendan Doyle) Date: Wed, 8 May 2019 15:22:49 +0100 Subject: tcp stream load balancer not working on Oracle Linux 7.5 In-Reply-To: <20190508012417.GA27216@haller.ws> References: <3df7c1cb-246f-12a6-6d0d-bbb27865a8b0@oracle.com> <20190508012417.GA27216@haller.ws> Message-ID: <78be123d-0ae6-1254-8ec9-e7e33f7541ad@oracle.com> Ah yes, I should have paid more attention to the tcpdump output. I switched to using iperf, and it all seems to be working fin now. Thanks On 08/05/2019 02:24, Patrick wrote: > On 2019-05-07 23:26, Brendan Doyle wrote: >> I'm trying to get a basic tcp load balancer working on OL : >> ... >> # nmap -p 5000 10.129.87.162 > This is probably not working as you expect because the default scan for > nmap is a SYN scan -- since the TCP handshake is not complete, why would > nginx connect to the upstream? > > Perhaps try your setup with a realistic load generator such as Tsung > or TRex. 
>
> https://en.wikipedia.org/wiki/Tsung
> https://trex-tgn.cisco.com/
>
>
> Patrick
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From dukedougal at gmail.com Wed May 8 22:48:39 2019 From: dukedougal at gmail.com (Duke Dougal) Date: Thu, 9 May 2019 08:48:39 +1000 Subject: autoindex subdirectories Message-ID: I have autoindex working fine. However it only returns a directory listing at the specific directory level requested. Is there any way to get autoindex to return a recursive list of files/directories? thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From forum at ruhnke.cloud Wed May 8 23:31:27 2019 From: forum at ruhnke.cloud (Florian Ruhnke) Date: Thu, 09 May 2019 01:31:27 +0200 Subject: Mailman is giving me 554 5.7.1 because of using Mail-Relay In-Reply-To: <9ee8ebef6eea1ffb2335e552b1ea009a@ruhnke.cloud> References: <20190429125253.GT1877@mdounin.ru> <9ee8ebef6eea1ffb2335e552b1ea009a@ruhnke.cloud> Message-ID: My problems are solved if this mail is processed fine. I've got a VPN tunnel to a static IPv4 address including a PTR record... On 2 May 2019 16:17:29 CEST, Florian Ruhnke via nginx wrote: >Hi, > >thx for your reply. You say > >> [...] To post to >> the mailing list, you have to be subscribed [...] > >I am subscribed. >I did it per mail to nginx-request at nginx.org so mailman knows my >envelope-from and my from-header. >And I can talk to nginx-request@ without problems.
>_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From 201904-nginx at jslf.app Thu May 9 00:06:33 2019 From: 201904-nginx at jslf.app (Patrick) Date: Thu, 9 May 2019 08:06:33 +0800 Subject: tcp stream load balancer not working on Oracle Linux 7.5 In-Reply-To: <78be123d-0ae6-1254-8ec9-e7e33f7541ad@oracle.com> References: <3df7c1cb-246f-12a6-6d0d-bbb27865a8b0@oracle.com> <20190508012417.GA27216@haller.ws> <78be123d-0ae6-1254-8ec9-e7e33f7541ad@oracle.com> Message-ID: <20190509000633.GA16536@haller.ws> On 2019-05-08 15:22, Brendan Doyle wrote: > I switched to using iperf, and it all seems to be working fin now. ^ Ha! I see what you did there ________________________________| From 201904-nginx at jslf.app Thu May 9 00:26:13 2019 From: 201904-nginx at jslf.app (Patrick) Date: Thu, 9 May 2019 08:26:13 +0800 Subject: autoindex subdirectories In-Reply-To: References: Message-ID: <20190509002613.GA17290@haller.ws> On 2019-05-09 08:48, Duke Dougal wrote: > Is there any way to get autoindex to return a recursive list of > files/directories? What modules do you have available to work with? Just using default built modules, there doesn't seem to be a way. Using non-default modules, you could use: 1) ngx_http_perl + some perl 2) ngx_http_addition + some javascript added to the page to ajax query and rewrite the page 3) the 3rd-party lua module + some lua While it's definitely a hack, option #2 seems the best unless you need to cater to javascript-less clients. Anyone see a cleaner solution? 
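One more variant of option #2 that avoids scraping HTML: autoindex can emit machine-readable listings, so a small client-side script can walk the tree one directory per request. A sketch (the location and path are illustrative):

```nginx
location /files/ {
    alias /srv/data/;
    autoindex on;
    autoindex_format json;   # each entry carries "type": "directory" or "file"
}
```

The client fetches /files/, then recursively fetches every entry whose "type" is "directory". Still one request per directory level, but the responses are trivially parseable JSON instead of HTML.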
From 201904-nginx at jslf.app Thu May 9 00:49:29 2019 From: 201904-nginx at jslf.app (Patrick) Date: Thu, 9 May 2019 08:49:29 +0800 Subject: Mailman is giving me 554 5.7.1 because of using Mail-Relay In-Reply-To: References: <20190429125253.GT1877@mdounin.ru> <9ee8ebef6eea1ffb2335e552b1ea009a@ruhnke.cloud> Message-ID: <20190509004929.GB17290@haller.ws> On 2019-05-09 01:31, Florian Ruhnke via nginx wrote: > My problems are solved if this mail is processed fine. > I've got a VPN tunnel to a static IPv4 address including a PTR record... Hi Florian, Your SPF doesn't reject all, so you could theoretically have your local mail server ship all mail directly to nginx.org -- i.e. do not use the email-od.com smart host to relay. Patrick > dig +short txt ruhnke.cloud "v=spf1 mx a a:mout.ruhnke.cloud a:mxv6.ruhnke.cloud include:email-od.com ?all" From brendan.doyle at oracle.com Thu May 9 08:57:08 2019 From: brendan.doyle at oracle.com (brendan.doyle at oracle.com) Date: Thu, 9 May 2019 09:57:08 +0100 Subject: tcp stream load balancer not working on Oracle Linux 7.5 In-Reply-To: <20190509000633.GA16536@haller.ws> References: <3df7c1cb-246f-12a6-6d0d-bbb27865a8b0@oracle.com> <20190508012417.GA27216@haller.ws> <78be123d-0ae6-1254-8ec9-e7e33f7541ad@oracle.com> <20190509000633.GA16536@haller.ws> Message-ID: <82fae8af-8c41-43ae-59c2-e75a2a0774ba@oracle.com> Totally a typo, but fitting :) On 09/05/2019 01:06, Patrick wrote: > On 2019-05-08 15:22, Brendan Doyle wrote: >> I switched to using iperf, and it all seems to be working fin now. > ^ > Ha!
I see what you did there ________________________________| > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu May 9 21:30:00 2019 From: nginx-forum at forum.nginx.org (careygister) Date: Thu, 09 May 2019 17:30:00 -0400 Subject: SSL and Slice Module Message-ID: I am using the slice module to request data from an upstream server. My clients are connecting over SSL. With a non-SSL connection, nginx reads the slices from the upstream server as quickly as it can deliver them. Over SSL connections, slices are read incrementally only as quickly as the client can consume them. Can anyone explain this behavior and tell me how to change it so that the SSL client behaves more like the non-SSL client? This is a problem for clients because they have to 'wait' for data over SSL connections whereas over non-SSL connections the data is available immediately. Thanks, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284116,284116#msg-284116 From iippolitov at nginx.com Thu May 9 21:35:23 2019 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Fri, 10 May 2019 00:35:23 +0300 Subject: autoindex subdirectories In-Reply-To: <20190509002613.GA17290@haller.ws> References: <20190509002613.GA17290@haller.ws> Message-ID: Hello, guys. I think I have a POC using autoindex, ssi and xslt. Obviously it requires further tweaking. You can use configuration like this:
> map $uri $doc {
>     ~*/index[^/]*(.*) $1;
> }
> server {
>     listen 8080;
>     proxy_http_version 1.1;
>     location /index/ {
>         alias /tests/nginx;
>         ssi on;
>         proxy_pass http://localhost:8080/index2/;
>     }
>     location /index2 {
>         autoindex on;
>         autoindex_format xml;
>         xslt_string_param base /index/$doc;
>         xslt_stylesheet /tests/nginx/req.index.xslt;
>         alias /tests/nginx;
>     
} > } > } Along with a simple xslt like this: > ??? xmlns:xsl="http://www.w3.org/1999/XSL/Transform" > ??? exclude-result-prefixes="xsl" > > > ? > > ? > ? > ??? > ??? |-># include virtual=" select="$base"/>/" > ? > > And this will make nginx do recursive listing but the resulting output is far from ideal. Also no error handling is done (should be done using error_page). Regards, Igor. On 09.05.2019 3:26, Patrick wrote: > On 2019-05-09 08:48, Duke Dougal wrote: >> Is there any way to get autoindex to return a recursive list of >> files/directories? > What modules do you have available to work with? > > Just using default built modules, there doesn't seem to be a way. > > Using non-default modules, you could use: > > 1) ngx_http_perl + some perl > > 2) ngx_http_addition + some javascript added to the page to ajax query > and rewrite the page > > 3) the 3rd-party lua module + some lua > > > While it's definitely a hack, option #2 seems the best unless you need > to cater to javascript-less clients. > > Anyone see a cleaner solution? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu May 9 23:25:13 2019 From: nginx-forum at forum.nginx.org (raviharshil27) Date: Thu, 09 May 2019 19:25:13 -0400 Subject: No live upstreams In-Reply-To: References: Message-ID: <037b77b5692b719339336c87471a480c.NginxMailingListEnglish@forum.nginx.org> Hi Any luck with that? was that an issue ? 
I am also having a similar issue but don't know what's going on here.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,279748,284118#msg-284118

From 201904-nginx at jslf.app Fri May 10 01:20:01 2019 From: 201904-nginx at jslf.app (Patrick) Date: Fri, 10 May 2019 09:20:01 +0800 Subject: SSL and Slice Module In-Reply-To: References: Message-ID: <20190510012001.GA10098@haller.ws> On 2019-05-09 17:30, careygister wrote: > I am using the slice module to request data from an upstream server. My > clients are connecting over SSL. With a non-SSL connection, nginx reads the > slices from the upstream server as quickly as it can deliver them. Over SSL > connections, slices are read incrementally only as quickly as the client can > consume them. Hi, Can you post illustrative tshark captures from the client-side?

tshark -n -Y http port 80 and host $SERVER_IP
tshark -n -Y ssl port 443 and host $SERVER_IP

Thanks! Patrick From nginx-forum at forum.nginx.org Fri May 10 11:57:14 2019 From: nginx-forum at forum.nginx.org (cox123456a) Date: Fri, 10 May 2019 07:57:14 -0400 Subject: Web site not working on port redirection Message-ID: I've posted this on SO but got no solution after weeks, so I think I have a better chance here, as this is critical for me.
I have the following NGINX setup: server { listen 80 default_server; root /var/www/serviceserver1; index index.html index.htm; location /api { proxy_redirect http://localhost:3001/ /api; proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Scheme $scheme; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_connect_timeout 5; proxy_read_timeout 240; proxy_intercept_errors on; proxy_pass http://localhost:3001; } location /graphql { proxy_redirect http://localhost:3001/ /graphql; proxy_pass_header Server; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Scheme $scheme; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_connect_timeout 5; proxy_read_timeout 240; proxy_intercept_errors on; proxy_pass http://localhost:3001; } # Root route location = / { try_files $uri /landing/index.html; } # admin routes location /admin { try_files $uri $uri/ /admin/index.html; } # analytics routes location /analytics { try_files $uri $uri/ /analytics/index.html; } # landing routes location /landing { try_files $uri $uri/ /landing/index.html; } # Any other route default to landing location / { try_files $uri $uri/ /landing/index.html; } } landing, analytics, admin are my application modules, where I navigate using window.location = "/admin" in example. All works fine in my LAN, all fine, no problems. Now I have setup a Virtual Server on my access point/router to access the system from the internet. I'm not allowed on this IP to access port 80 from the Internet, so I've redirected port 8000 to 80 in the Virtual Server. Now, every time I navigate from the Internet, my pages are breaking. 
Let me explain better. Navigating inside a module works fine:

http://170.180.190.200:8000/admin => http://170.180.190.200:8000/admin (FINE)
http://170.180.190.200:8000/admin/home => http://170.180.190.200:8000/admin/home (FINE)
http://170.180.190.200:8000/admin/page1 => http://170.180.190.200:8000/admin/page1 (FINE)

When I click to change the module (from admin to analytics, for example), I'm losing the port number:

http://170.180.190.200:8000/admin => http://170.180.190.200/analytics (Page breaks as the port is missing)

How can I configure NGINX to forcibly add the port number when I change the module?

Inside the application, just before calling window.location:

console.log(window.location);

ancestorOrigins: DOMStringList {length: 0}
assign: ƒ ()
hash: ""
host: "170.80.224.142:8000"
hostname: "170.80.224.142"
href: "http://170.80.224.142:8000/"
origin: "http://170.80.224.142:8000"
pathname: "/"
port: "8000"
protocol: "http:"
reload: ƒ reload()
replace: ƒ ()
search: ""
toString: ƒ toString()
valueOf: ƒ valueOf()
Symbol(Symbol.toPrimitive): undefined
__proto__: Location

Component that changes the module (ReactJS):

class NavButton extends Component {
  handleModuleNav = action => {
    let to = "/" + action;
    window.location = to;
  };

  render = () => {
    return (
      <div onClick={() => this.handleModuleNav("admin")}>
        GO TO ADMIN
      </div>
) } }

My router configuration (port 8000 to 80): https://unix.stackexchange.com/questions/518085/accessing-nginx-behind-a-virtual-server-looses-port-number/518183?noredirect=1#5181833

Thanks for helping. This is critical for my production site.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284120,284120#msg-284120 From francis at daoine.org Fri May 10 13:55:10 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 10 May 2019 14:55:10 +0100 Subject: Web site not working on port redirection In-Reply-To: References: Message-ID: <20190510135510.5devywaltzzq2pqs@daoine.org> On Fri, May 10, 2019 at 07:57:14AM -0400, cox123456a wrote: Hi there, > When I click to change the module (from admin to analytics, for example), > I'm losing the port number: > http://170.180.190.200:8000/admin => http://170.180.190.200/analytics (Page > breaks as the port is missing) You can check the nginx logs to be sure; but what I suspect is happening is that when you change to analytics, your browser makes a request for http://170.180.190.200:8000/analytics, which gets a redirect to http://170.180.190.200/analytics/. And the second one is the one that fails for you, because you want it to keep the port from the Host: header in the http redirection from nginx. If that is the case... > How can I configure NGINX to forcibly add the port number when I change the > module? I think that you cannot easily. When nginx generates an http redirection, it can use the hostname from the Host: header, and it can use the port that the request came to nginx on; but it cannot (I think) use the port that was listed in the Host: header. One possible way, which may or may not be useful in your case, would be to change your nginx so that it listens on both port 80 and 8000; and change your port forwarding so that [external] port 8000 is sent to [nginx] port 8000 (instead of to the current [nginx] port 80).
With that config, I think that internal things that talk to port 80 will continue to talk to port 80; while external things that talk to port 8000 will have their redirects-from-nginx directed to port 8000 (because they are now talking to nginx-port-8000). That is, in nginx, where you have > server { > listen 80 default_server; add an extra line > listen 8000 default_server; and change this: > My router configuration (port 8000 to 80): > https://unix.stackexchange.com/questions/518085/accessing-nginx-behind-a-virtual-server-looses-port-number/518183?noredirect=1#5181833 to go to port 8000 instead of port 80. Untested by me, but it looks like it should work. Good luck testing it! f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri May 10 16:31:43 2019 From: nginx-forum at forum.nginx.org (cox123456a) Date: Fri, 10 May 2019 12:31:43 -0400 Subject: Web site not working on port redirection In-Reply-To: <20190510135510.5devywaltzzq2pqs@daoine.org> References: <20190510135510.5devywaltzzq2pqs@daoine.org> Message-ID: Hi Francis, Thanks for the detailed explanation. I agree with your diagnosis. In fact I've tried listening on port 8000 and it works properly. The problem is that I cannot leave this port open on the LAN, just 80, due to security requirements. And port 80 cannot be opened to the internet, so I need to use a high-numbered port (8000). Sounds strange, but that's my scenario. I really need a way to bring 80 to 8000 and make it work. Note that NGINX seems to know that the request is coming from port 8000 (I can see it in the logs, and my print of window.location shows it); I just need a way to optionally include it in the redirected address.
Some log events when navigating:

190.254.100.100 - - [10/May/2019:09:53:29 -0300] "POST /graphql HTTP/1.1" 200 1372895 "http://190.80.200.100:8000/admin/running" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
190.254.100.100 - - [10/May/2019:09:53:59 -0300] "POST /graphql HTTP/1.1" 200 1386 "http://190.80.200.100:8000/admin/orders" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
190.254.100.100 - - [10/May/2019:09:54:00 -0300] "POST /graphql HTTP/1.1" 200 191740 "http://190.80.200.100:8000/admin/orders" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
205.185.100.100 - - [10/May/2019:10:12:12 -0300] "GET / HTTP/1.1" 200 580 "http://190.80.200.100:8000/" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)"

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284120,284122#msg-284122 From nginx-forum at forum.nginx.org Fri May 10 22:49:58 2019 From: nginx-forum at forum.nginx.org (cox123456a) Date: Fri, 10 May 2019 18:49:58 -0400 Subject: Web site not working on port redirection In-Reply-To: <20190510135510.5devywaltzzq2pqs@daoine.org> References: <20190510135510.5devywaltzzq2pqs@daoine.org> Message-ID: Added port 8000:

listen 8000 default_server;

Unfortunately it does not work. Same behaviour... I'm out of ideas...
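One directive that bears directly on this (untested against this exact setup): since 1.11.8 nginx can be told not to put scheme and host into its own Location headers at all, in which case the browser resolves the redirect against the URL it originally used, port included. A sketch:

```nginx
server {
    listen 80 default_server;
    absolute_redirect off;   # emit "Location: /analytics/" instead of
                             # "Location: http://host/analytics/"
    # ...existing locations unchanged...
}
```

This only helps when the port-dropping redirect is generated by nginx itself (for example the automatic trailing-slash redirect), not when the application builds the redirect URL.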
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284120,284124#msg-284124 From francis at daoine.org Fri May 10 22:52:49 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 10 May 2019 23:52:49 +0100 Subject: Web site not working on port redirection In-Reply-To: References: <20190510135510.5devywaltzzq2pqs@daoine.org> Message-ID: <20190510225249.ivymdl2thfh6eg43@daoine.org> On Fri, May 10, 2019 at 12:31:43PM -0400, cox123456a wrote: Hi there, > I really need a way to bring 80 to 8000 and make it work. I've not used it; but you could try "absolute_redirect off" (http://nginx.org/r/absolute_redirect) and see if your clients are happy with the response from that. Instead of automatic redirects going to http://name:port/place, they would just go to /place, and let the clients assume that the http://name:port part matches what the original url had. Good luck with it, f -- Francis Daly francis at daoine.org From wizard at bnnorth.net Sat May 11 13:40:25 2019 From: wizard at bnnorth.net (Ken Wright) Date: Sat, 11 May 2019 09:40:25 -0400 Subject: nginx stopped working Message-ID: Can someone give me a copy of the original nginx.conf file? I modified mine and nginx stopped working, but (fool that I am) I failed to save the original. Thanks in advance! Ken -- Registered Linux user #483005 If you ever think international relations make sense, remember this: because a Serb shot an Austrian in Bosnia, Germany invaded Belgium. From lists at lazygranch.com Sat May 11 13:52:19 2019 From: lists at lazygranch.com (lists) Date: Sat, 11 May 2019 06:52:19 -0700 Subject: nginx stopped working In-Reply-To: Message-ID: <3381ogrkrt8567epmfdis6ut.1557582739017@lazygranch.com> https://gist.github.com/xameeramir/a5cb675fb6a6a64098365e89a239541d This claims to be the original. -------- Original Message --------
From: wizard at bnnorth.net Sent: May 11, 2019 6:40 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: nginx stopped working Can someone give me a copy of the original nginx.conf file? I modified mine and nginx stopped working, but (fool that I am) I failed to save the original. Thanks in advance! Ken -- Registered Linux user #483005 If you ever think international relations make sense, remember this: because a Serb shot an Austrian in Bosnia, Germany invaded Belgium. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From satcse88 at gmail.com Sun May 12 01:40:07 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Sun, 12 May 2019 09:40:07 +0800 Subject: Nginx Reverse Proxy Caching In-Reply-To: References: <06F50867-DDDF-4564-97A0-825DBD349D17@me.com> Message-ID: Hi Team,

Is it better to enable caching on the upstream or on Nginx?

Added to Nginx:

add_header Cache-Control "no-cache";
etag on;
gzip off;
proxy_ignore_headers Cache-Control;

Is this the right way to enable caching for a web application? The upstream server is a Jetty application server.

On Fri, Feb 15, 2019, 8:25 PM Sathish Kumar wrote: > Hi All, > > Is it possible to enable gzip and etag to solve caching problem. > > On Thu, Feb 14, 2019, 10:00 AM Sathish Kumar >> Hi All, >> >> How can I achieve caching html files only for this location context >> /abc/* and not for other context path. >> >> >> On Thu, Feb 14, 2019, 7:26 AM Sathish Kumar > >>> Hi Peter, >>> >>> Thanks, I am looking for the same solution but to enable only for html >>> files. >>> >>> On Thu, Feb 14, 2019, 2:02 AM Peter Booth via nginx >> wrote: >>> >>>> Satish, >>>> >>>> The browser (client-side) cache isn't related to the nginx reverse >>>> proxy cache.
You can tell Chrome to not cache html by adding the following >>>> to your location definition: >>>> >>>> add_header Cache-Control 'no-store'; >>>> >>>> You can use Developer Tool in Chrome to check that it is working. >>>> >>>> >>>> Peter >>>> >>>> >>>> Sent from my iPhone >>>> >>>> On Feb 13, 2019, at 11:56 AM, Sathish Kumar wrote: >>>> >>>> Hi All, >>>> >>>> We have Nginx in front of our Application server. We would like to >>>> disable caching for html files. >>>> >>>> Sample config file: >>>> >>>> location /abc/ { >>>> proxy_pass http://127.0.0.1:8080; >>>> } >>>> >>>> We noticed few html files get stored in Chrome local disk cache and >>>> would like to fix this issue. Can anybody help, thanks >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun May 12 10:01:24 2019 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sun, 12 May 2019 15:01:24 +0500 Subject: rewrite hostname in sub_filter! Message-ID: Hi, We've running nginx as reverse proxy for backend domain named "mydomain.com". On proxy server we've setup vhost that covers domain and all subdomains *. mydomain.com. What we need is that if user request to any subdomain like work.mydomain.com, he should be proxied to single backend mydomain.com but all underline links e.g css/js should be changed to *work-mydomain-com.newdomain.com * . If second user requests nginx.mydomain.com, he should be proxied back to single backend *mydomain.com * while underline links css/js should be changed to *nginx-mydomain-com.newdomain.com .* As you can see we want all dots in domain/subdomain to be changed to hyphen (-). 
I am able to change css/js links using sub_filter to work.mydomain.com.newdomain.com by making use of $host param but i am struggling on changing it to hyphens. Please check this nginx config and advise on how to change hostname from dots to hyphen in sub_filter e.g: mydomain.com ==> mydomain-com.newdomain.com work.mydomain.com ==> work-mydomain-com.newdomain.com nginx.mydomain.com ==> nginx-mydomain-com.newdomain.com ==================================================== upstream server { server 10.10.10.10:443; } server { listen 80; listen 443 ssl http2; ssl_certificate /etc/ssl/certs/mydomain/mydomain.crt; ssl_certificate_key /etc/ssl/certs/mydomain/privkey1.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4'; ssl_prefer_server_ciphers on; server_name mydomain.com *.mydomain.com; location / { proxy_set_header Accept-Encoding ""; #subs_filter_types text/css text/xml text/css; sub_filter "https://mydomain.com" "https://$host.newdomain.com"; sub_filter_once off; proxy_pass https://server; proxy_set_header HOST mydomain.com; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun May 12 12:33:49 2019 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sun, 12 May 2019 17:33:49 +0500 Subject: rewrite hostname in sub_filter! In-Reply-To: References: Message-ID: Hi, I am able to achieve this by writing the following rule which creates a new variable with domain mydomain-com. 
if ($host ~* "(\w+)[-.](\w+)") {
    set $host_new "$1-$2";
}

However, it's not working for any subdomain like work.mydomain.com. I tried the following config to cover subdomains, e.g. work-mydomain-com, but it didn't work:

if ($host ~* "(\w+)[-.](\w+)[-.](\w+)") {
    set $host_new "$1-$2-$3";
}

======================================================
Any clue where I am going wrong?

On Sun, May 12, 2019 at 3:01 PM shahzaib mushtaq wrote: > Hi, > > We've running nginx as reverse proxy for backend domain named " > mydomain.com". On proxy server we've setup vhost that covers domain and > all subdomains *.mydomain.com. > > What we need is that if user request to any subdomain like > work.mydomain.com, he should be proxied to single backend mydomain.com > but all underline links e.g css/js should be changed to *work-mydomain-com.newdomain.com > * . If second user requests > nginx.mydomain.com, he should be proxied back to single backend *mydomain.com > * while underline links css/js should be changed to *nginx-mydomain-com.newdomain.com > .* > > As you can see we want all dots in domain/subdomain to be changed to > hyphen (-). > > I am able to change css/js links using sub_filter to > work.mydomain.com.newdomain.com by making use of $host param but i am > struggling on changing it to hyphens.
> > Please check this nginx config and advise on how to change hostname from > dots to hyphen in sub_filter e.g: > > mydomain.com ==> mydomain-com.newdomain.com > work.mydomain.com ==> work-mydomain-com.newdomain.com > nginx.mydomain.com ==> nginx-mydomain-com.newdomain.com > > ==================================================== > > upstream server { > server 10.10.10.10:443; > > } > server { > listen 80; > listen 443 ssl http2; > ssl_certificate /etc/ssl/certs/mydomain/mydomain.crt; > ssl_certificate_key /etc/ssl/certs/mydomain/privkey1.pem; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers > 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4'; > ssl_prefer_server_ciphers on; > server_name mydomain.com *.mydomain.com; > > > > location / { > > proxy_set_header Accept-Encoding ""; > #subs_filter_types text/css text/xml text/css; > sub_filter "https://mydomain.com" "https://$host.newdomain.com"; > > sub_filter_once off; > proxy_pass https://server; > proxy_set_header HOST mydomain.com; > } > } > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun May 12 14:34:11 2019 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sun, 12 May 2019 19:34:11 +0500 Subject: rewrite hostname in sub_filter! In-Reply-To: References: Message-ID: Will very much appreciate if someone can help on it. On Sun, May 12, 2019 at 5:33 PM shahzaib mushtaq wrote: > Hi, > > I am able to achieve this by writing the following rule which creates a > new variable with domain mydomain-com. 
> > if ($host ~* "(\w+)[-.](\w+)") { > set $host_new "$1-$2"; > } > > However, its not working for any subdomain like work.mydomain.com , i > tried following configs to cover subdomains e.g work-mydomain-com but it > didn't worked: > > if ($host ~* "(\w+)[-.](\w+)[-.](\w+)") { > set $host_new "$1-$2-$3"; > } > > ====================================================== > Any clue where am i wrong? > > On Sun, May 12, 2019 at 3:01 PM shahzaib mushtaq > wrote: > >> Hi, >> >> We've running nginx as reverse proxy for backend domain named " >> mydomain.com". On proxy server we've setup vhost that covers domain and >> all subdomains *.mydomain.com. >> >> What we need is that if user request to any subdomain like >> work.mydomain.com, he should be proxied to single backend mydomain.com >> but all underline links e.g css/js should be changed to *work-mydomain-com.newdomain.com >> * . If second user requests >> nginx.mydomain.com, he should be proxied back to single backend *mydomain.com >> * while underline links css/js should be changed to *nginx-mydomain-com.newdomain.com >> .* >> >> As you can see we want all dots in domain/subdomain to be changed to >> hyphen (-). >> >> I am able to change css/js links using sub_filter to >> work.mydomain.com.newdomain.com by making use of $host param but i am >> struggling on changing it to hyphens. 
>> >> Please check this nginx config and advise on how to change hostname from >> dots to hyphen in sub_filter e.g: >> >> mydomain.com ==> mydomain-com.newdomain.com >> work.mydomain.com ==> work-mydomain-com.newdomain.com >> nginx.mydomain.com ==> nginx-mydomain-com.newdomain.com >> >> ==================================================== >> >> upstream server { >> server 10.10.10.10:443; >> >> } >> server { >> listen 80; >> listen 443 ssl http2; >> ssl_certificate /etc/ssl/certs/mydomain/mydomain.crt; >> ssl_certificate_key /etc/ssl/certs/mydomain/privkey1.pem; >> ssl_protocols TLSv1 TLSv1.1 TLSv1.2; >> ssl_ciphers >> 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4'; >> ssl_prefer_server_ciphers on; >> server_name mydomain.com *.mydomain.com; >> >> >> >> location / { >> >> proxy_set_header Accept-Encoding ""; >> #subs_filter_types text/css text/xml text/css; >> sub_filter "https://mydomain.com" "https://$host.newdomain.com"; >> >> sub_filter_once off; >> proxy_pass https://server; >> proxy_set_header HOST mydomain.com; >> } >> } >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From norrcomm at gmail.com Mon May 13 15:30:19 2019 From: norrcomm at gmail.com (NorrComm Solutions) Date: Mon, 13 May 2019 11:30:19 -0400 Subject: OSTicket configuration issues Message-ID: Hi we are attempting to run OSTicket with nginx. We get this message: "too many redirects" when going to the URL. We followed this URL for a resolution but so far no luck. Has anyone seen this error and resolved it? 
https://stackoverflow.com/questions/55651186/user-diretive-is-not-allowed/55661791#55661791 -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 13 18:16:19 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 13 May 2019 19:16:19 +0100 Subject: OSTicket configuration issues In-Reply-To: References: Message-ID: <20190513181619.eanzjs2z2755bgoa@daoine.org> On Mon, May 13, 2019 at 11:30:19AM -0400, NorrComm Solutions wrote: Hi there, > Hi we are attempting to run OSTicket with nginx. > > We get this message: "too many redirects" when going to the URL. > > We followed this URL for a resolution but so far no luck. > > Has anyone seen this error and resolved it? > > https://stackoverflow.com/questions/55651186/user-diretive-is-not-allowed/55661791#55661791 The configuration snippet includes: === # Rewrite all requests from HTTP to HTTPS server { listen 80; server_name 192.168.0.24; rewrite ^ http://192.168.0.24 permanent; } === The "rewrite" causes every request to get a http redirect back to the same service. That is your loop. You probably want https:// in the rewrite line. The rest of the config, within the https server, looks a bit strange to me. But presumably someone uses OSTicket and this works for them. So - once you get to https://192.168.0.24, you should be ok. If not, and you send a follow-up message, please include all of the details in that mail, rather than pointing to an external web site -- that will help someone on this list who has the same problem in the future, when that other web site has been edited or removed.
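The fix Francis describes is a one-scheme change in the port-80 block; a sketch of a corrected version, using return (a common alternative to rewrite ... permanent, not the exact line from the original config):

```nginx
# Redirect all plain-HTTP requests to the HTTPS server, preserving the URI.
server {
    listen 80;
    server_name 192.168.0.24;
    return 301 https://192.168.0.24$request_uri;
}
```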
Thanks, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon May 13 22:37:26 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 13 May 2019 23:37:26 +0100 Subject: Web site not working on port redirection In-Reply-To: References: <20190510135510.5devywaltzzq2pqs@daoine.org> Message-ID: <20190513223726.v3clbrfphh6zch3n@daoine.org> On Fri, May 10, 2019 at 12:31:43PM -0400, cox123456a wrote: Hi there, > Thanks for the detailed explanation. I agree with your diagnostic. > I really need a way to bring 80 to 8000 and make it work. Another way to approach this could be to change your code. It would make it more efficient (avoiding one http redirect each time); but the real reason to do it is to work around the particular nginx issue. That is, where you currently have === class NavButton extends Component { handleModuleNav = action => { let to = "/" + action; window.location = to; }; render = () => { return ( <div onClick={() => this.handleModuleNav("admin")}> GO TO ADMIN </div> ) } } === the intention is to make a request for "/admin". You could change that so that it makes a request for "/admin/"; either by changing one line to be let to = "/" + action + "/"; or by changing another to be
<div onClick={() => this.handleModuleNav("admin/")}> Either of those changes would avoid nginx having to tell the client to switch from /admin to /admin/. It would not fix the general case, but that might not matter for your specific case. Good luck with it, f -- Francis Daly francis at daoine.org From Scott.Clark at godaddy.com Tue May 14 07:14:03 2019 From: Scott.Clark at godaddy.com (Scott Clark) Date: Tue, 14 May 2019 07:14:03 +0000 Subject: nginx smtp proxy Message-ID: <9b64c496-1ea5-2b14-d53d-0e524d7fd453@godaddy.com> Hi There, Is it possible to get the smtp proxy to make outbound connections on a specific IP address? Trying to use proxy_bind x.x.x.x but wherever I put it in the mail config I get: nginx: [emerg] "proxy_bind" directive is not allowed here mail { # Pass any error message from the remote server proxy_pass_error_message on; server { listen 125; protocol smtp; smtp_auth none; auth_http 127.0.0.1:8008/auth-smtp.php; auth_http_header default-server 127.0.0.1; xclient off; } } I don't see the proxy_bind directive listed in: http://nginx.org/en/docs/mail/ngx_mail_core_module.html http://nginx.org/en/docs/mail/ngx_mail_proxy_module.html http://nginx.org/en/docs/mail/ngx_mail_smtp_module.html Any ideas how to achieve this? Thanks Scott From 201904-nginx at jslf.app Tue May 14 07:47:19 2019 From: 201904-nginx at jslf.app (Patrick) Date: Tue, 14 May 2019 15:47:19 +0800 Subject: nginx smtp proxy In-Reply-To: <9b64c496-1ea5-2b14-d53d-0e524d7fd453@godaddy.com> References: <9b64c496-1ea5-2b14-d53d-0e524d7fd453@godaddy.com> Message-ID: <20190514074719.GA24622@haller.ws> On 2019-05-14 07:14, Scott Clark wrote: > Is it possible to get the smtp proxy to make outbound connections on a > specific IP address? That doesn't appear to be supported yet. A workaround is to SNAT the connections to the upstream mail servers. Normally, this isn't an issue since nginx is the front-end to a series of mail servers that you control.
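The SNAT workaround can be expressed as a netfilter rule on the proxy host; a firewall-configuration sketch only, with made-up addresses (203.0.113.7 as the desired source IP, 192.0.2.25 as the upstream mail server):

```sh
# Rewrite the source address of outbound SMTP connections from the nginx
# mail proxy so they leave via 203.0.113.7 instead of the default interface.
iptables -t nat -A POSTROUTING -p tcp -d 192.0.2.25 --dport 25 \
         -j SNAT --to-source 203.0.113.7
```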
Can you provide a little detail as to what your mail architecture looks like? Patrick From thresh at nginx.com Tue May 14 11:28:22 2019 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 14 May 2019 14:28:22 +0300 Subject: Linux packages: RHEL 8 added Message-ID: Hello, As a part of our continued effort to bring nginx to new platforms and operating systems, I'd like to inform that nginx.org prebuilt binary packages are now available for RHEL 8. You can read more about setting the repository up and installing the packages on https://nginx.org/en/linux_packages.html#RHEL-CentOS. The packaging source code is available under http://hg.nginx.org/pkg-oss/file/default/rpm/. Enjoy, -- Konstantin Pavlov https://www.nginx.com/ From nginx-forum at forum.nginx.org Wed May 15 07:57:57 2019 From: nginx-forum at forum.nginx.org (flierps) Date: Wed, 15 May 2019 03:57:57 -0400 Subject: Reverse proxy caching for dynamic content Message-ID: <1b7b79f5344e987cf93fee2a4b2d8f58.NginxMailingListEnglish@forum.nginx.org> Nginx will not cache content when max-age=0 although I believe this should be standard in combination with must-revalidate. I want to cache dynamic content, so every request needs to be revalidated. This way when I get multiple fresh GET requests for an expensive resource, Nginx can revalidate with the backend that only needs to generate a 304's and the full content can be served to the clients. I can get close to what I want by ignoring the header "Cache-Control: max-age=0, public, must-revalidate" and setting proxy_cache_valid to 1s. Unfortunately even 1s is too long for some situations. Is there a way to cache content and mark it as stale immediately or always revalidate on the next hit? 
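The closest built-in behaviour to this is proxy_cache_revalidate, which makes nginx refresh expired cache entries with conditional requests instead of full GETs; it does not remove the 1s floor described above. A sketch (the zone name and upstream are assumptions):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=STATIC:10m;

server {
    location / {
        proxy_pass http://backend;
        proxy_cache STATIC;
        proxy_cache_valid 200 1s;     # smallest expiry the directive accepts
        proxy_cache_revalidate on;    # refresh expired entries with
                                      # If-Modified-Since / If-None-Match
    }
}
```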
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284167,284167#msg-284167 From 201904-nginx at jslf.app Wed May 15 08:42:08 2019 From: 201904-nginx at jslf.app (Patrick) Date: Wed, 15 May 2019 16:42:08 +0800 Subject: Reverse proxy caching for dynamic content In-Reply-To: <1b7b79f5344e987cf93fee2a4b2d8f58.NginxMailingListEnglish@forum.nginx.org> References: <1b7b79f5344e987cf93fee2a4b2d8f58.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190515084207.GA28257@haller.ws> On 2019-05-15 03:57, flierps wrote: > I want to cache dynamic content, so every request needs to be revalidated. > This way when I get multiple fresh GET requests for an expensive resource, > Nginx can revalidate with the backend that only needs to generate a 304's > and the full content can be served to the clients. As you note, If-Modified-Since has a smallest time resolution of 1 second. Can the upstream publish ETags and handle If-None-Match request headers so that nginx can get 304s in response? Patrick From nginx-forum at forum.nginx.org Wed May 15 11:10:21 2019 From: nginx-forum at forum.nginx.org (flierps) Date: Wed, 15 May 2019 07:10:21 -0400 Subject: Reverse proxy caching for dynamic content In-Reply-To: <20190515084207.GA28257@haller.ws> References: <20190515084207.GA28257@haller.ws> Message-ID: <811578b8521a53380f68aae338264278.NginxMailingListEnglish@forum.nginx.org> Yes, upstream behaves as you would expect. Right now Nginx proxy_valid is set to 1 second. After that second Nginx revalidates with upstream and upstream will respond with 304 if applicable. I just do not want Nginx to serve from cache during that second. It always needs to revalidate. 
Apache supports this as noted here by somebody else: https://stackoverflow.com/questions/41252208/nginx-cache-but-immediately-expire-revalidate-using-cache-control-public-s-ma Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284167,284175#msg-284175 From 201904-nginx at jslf.app Thu May 16 03:36:25 2019 From: 201904-nginx at jslf.app (Patrick) Date: Thu, 16 May 2019 11:36:25 +0800 Subject: Reverse proxy caching for dynamic content In-Reply-To: <811578b8521a53380f68aae338264278.NginxMailingListEnglish@forum.nginx.org> References: <20190515084207.GA28257@haller.ws> <811578b8521a53380f68aae338264278.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190516033625.GA7394@haller.ws> On 2019-05-15 07:10, flierps wrote: > Yes, upstream behaves as you would expect. > > Right now Nginx proxy_valid is set to 1 second. After that second Nginx > revalidates with upstream and upstream will respond with 304 if applicable. > > I just do not want Nginx to serve from cache during that second. It always > needs to revalidate. Provided that: a) the servers are time-synced, and b) the upstream sets Expires to be the current time, and c) the upstream sets Cache-Control to be 'must-revalidate' nginx will cache the result, and handle subsequent 304s from the cache. The nginx config is just: proxy_pass $upstream; proxy_cache STATIC; Patrick From nginx-forum at forum.nginx.org Thu May 16 04:59:28 2019 From: nginx-forum at forum.nginx.org (flierps) Date: Thu, 16 May 2019 00:59:28 -0400 Subject: Reverse proxy caching for dynamic content In-Reply-To: <20190516033625.GA7394@haller.ws> References: <20190516033625.GA7394@haller.ws> Message-ID: <68de9b9f8a06e2ae3bb8e4b8a4ded91d.NginxMailingListEnglish@forum.nginx.org> When max-age=0 in the Cache-Control header, Nginx will not cache, so that will not work. I need this in cache-control to make sure all browsers will revalidate. So, as far as I can see I can chose between: 1. 
ignore the Cache-Control header, but then I need to set proxy_cache_valid or nothing will be cached (minimum is 1s). 2. not ignore Cache-Control, but then I am obliged to set max-age or nothing will be cached (minimum is 1s). Nginx should be able to cache even when max-age is set to 0. There are valid reasons to cache when max-age is 0. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284167,284181#msg-284181 From 201904-nginx at jslf.app Thu May 16 05:11:44 2019 From: 201904-nginx at jslf.app (Patrick) Date: Thu, 16 May 2019 13:11:44 +0800 Subject: Reverse proxy caching for dynamic content In-Reply-To: <68de9b9f8a06e2ae3bb8e4b8a4ded91d.NginxMailingListEnglish@forum.nginx.org> References: <20190516033625.GA7394@haller.ws> <68de9b9f8a06e2ae3bb8e4b8a4ded91d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190516051144.GA15598@haller.ws> On 2019-05-16 00:59, flierps wrote: > When max-age=0 in the Cache-Control header, Nginx will not cache, so that > will not work. > I need this in cache-control to make sure all browsers will revalidate. Don't set Cache-Control at the upstream, though. Have nginx reset it to what you need the browsers to see. Patrick From nginx-forum at forum.nginx.org Thu May 16 05:38:11 2019 From: nginx-forum at forum.nginx.org (flierps) Date: Thu, 16 May 2019 01:38:11 -0400 Subject: Reverse proxy caching for dynamic content In-Reply-To: <20190516033625.GA7394@haller.ws> References: <20190516033625.GA7394@haller.ws> Message-ID: <6b645c830c66e0ba8bbc4c54425e64cd.NginxMailingListEnglish@forum.nginx.org> In HTTP/1.1, Cache-Control superseded the Expires header. If both Expires and Cache-Control (max-age) are present, Expires is ignored.
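Patrick's scheme in this thread, collected into one sketch: the upstream marks every response as immediately stale (Expires set to now, Cache-Control: must-revalidate, plus an ETag), nginx caches the body anyway and revalidates on every hit, and the Cache-Control the browsers see is set at the proxy. The zone name and upstream are assumptions:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache STATIC;

    # Upstream sends: Expires: <now>, Cache-Control: must-revalidate, ETag.
    # nginx stores the body and answers upstream 304s from the cache.

    # Give browsers the header the upstream deliberately did not send.
    proxy_hide_header Cache-Control;
    add_header Cache-Control "max-age=0, public, must-revalidate";
}
```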
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284167,284183#msg-284183 From nginx-forum at forum.nginx.org Thu May 16 12:53:22 2019 From: nginx-forum at forum.nginx.org (CCS) Date: Thu, 16 May 2019 08:53:22 -0400 Subject: Port Exhaustion - SQL Message-ID: I have recently moved our sql server off of our webserver and now we are experiencing Port Exhaustion. We have made all the changes we could in the kernel to help with this but are still hitting limits. I have added a 2nd virtual network adapter, and I am now trying to "load balance" the lan connection between our web server and mysql server port 3306. Webserver 192.168.99.17 192.168.99.21 MYSQL Server 192.168.99.19 Can anyone help with a config that would accomplish splitting the connection from 192.168.99.17 -> 192.168.99.19 with a second lan adapter (192.168.99.21)? These are local connections so splitting by remote IP is not an option. I just want, say, half of the connections that go to 192.168.99.19 to use ports from 192.168.99.17 and half to use ports from 192.168.99.21. Any help or advice would be appreciated. Thanks Brandon Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284187,284187#msg-284187 From r at roze.lv Thu May 16 13:09:25 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 16 May 2019 16:09:25 +0300 Subject: Port Exhaustion - SQL In-Reply-To: References: Message-ID: <000001d50be8$965579c0$c3006d40$@roze.lv> > We have made all the changes we could in the kernel to help with this but are still hitting limits. What changes have you made? Usually the port limit is reached because of time wait sockets. If not done already try with: net.ipv4.ip_local_port_range = 1028 65535 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_fin_timeout = 30 Increasing the ephemeral port range (usually by default it starts around 30k so you effectively lose 30k ports - obviously adjust the lower limit to your application needs).
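Those settings can be applied at runtime and persisted; a sketch (the drop-in filename is arbitrary):

```sh
# Apply immediately...
sysctl -w net.ipv4.ip_local_port_range="1028 65535"
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_fin_timeout=30

# ...and persist across reboots.
cat > /etc/sysctl.d/90-ephemeral-ports.conf <<'EOF'
net.ipv4.ip_local_port_range = 1028 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
EOF
```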
Then time wait socket reuse helps a lot and also decreasing the FIN timeout (the default is something like 60 seconds). rr From brandonm at medent.com Thu May 16 13:11:39 2019 From: brandonm at medent.com (Brandon Mallory) Date: Thu, 16 May 2019 09:11:39 -0400 (EDT) Subject: Port Exhaustion - SQL In-Reply-To: <000001d50be8$965579c0$c3006d40$@roze.lv> References: <000001d50be8$965579c0$c3006d40$@roze.lv> Message-ID: <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> Yes all of those changes you have mentioned have been made. Thanks Brandon Best Regards, Brandon Mallory Network & Systems Engineer MEDENT EMR/EHR 15 Hulbert Street Auburn, NY 13021 Phone: [ callto:(315)-255-0900 | (315)-255-0900 ] Fax: [ callto:(315)-255-3539 | (315)-255-3539 ] Web: [ http://www.medent.com/ | www.medent.com ] This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. From: "Reinis Rozitis" To: "nginx" Sent: Thursday, May 16, 2019 9:09:25 AM Subject: RE: Port Exhaustion - SQL > We have made all the changed we could in the kernel to help with this but still hitting limits. What changes have you made? Usually the port limit is reached because of time wait sockets. If not done already try with: net.ipv4.ip_local_port_range = 1028 65535 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_fin_timeout = 30 Increasing the ephemeral port range (usually by default it starts around 30k so you effectively lose 30k ports - obviously adjust the lower limit to your application needs). 
Then time wait socket reuse helps a lot and also decreasing the FIN timeout (the default is something like 60 seconds). rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu May 16 13:35:18 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 16 May 2019 16:35:18 +0300 Subject: Port Exhaustion - SQL In-Reply-To: <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> Message-ID: <000901d50bec$33b2c940$9b185bc0$@roze.lv> > Yes all of those changes you have mentioned have been made. Well imo there is nothing else besides to even more decrease the FIN timeout (in a LAN that shouldn't be an issue (no slow clients)) so the lingering sockets are closed faster. Also instead of adding the network adapter(s) on the webserver you should add the interfaces on the mysql server and then either via loadbalancer or on application level use a round robin fashion (as binding to a specific local interface is harder than just connect to a different remote ip). Other than that depending on the application you might want to consider using persistent connections to MySQL or use some kind of mysql proxy between which could pool the connections to the mysql server. 
rr From brandonm at medent.com Thu May 16 13:46:13 2019 From: brandonm at medent.com (Brandon Mallory) Date: Thu, 16 May 2019 09:46:13 -0400 (EDT) Subject: Port Exhaustion - SQL In-Reply-To: <000901d50bec$33b2c940$9b185bc0$@roze.lv> References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> Message-ID: <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> This is a very busy server and tried to push our programming department to move to persistent connections, they feel that it could be a security issue if dealing with sensitive information since that connection could be hijacked. We do not have an issue on the mysql server side with Port Exhaustion, just on the "Frontend webserver". We have made a lot of changes, and are currently managing but I fear that we will reach the 65k limit again. If I could get something to load balance LAN interfaces I could double the port limitation. I see that haproxy has an article on this, I love nginx and use it for other applications but maybe its the wrong product for this senerio. I was thinking there might be a way using proxy_bind. [ https://www.haproxy.com/blog/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/ | https://www.haproxy.com/blog/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/ ] Best Regards, Brandon Mallory Network & Systems Engineer MEDENT EMR/EHR 15 Hulbert Street Auburn, NY 13021 Phone: [ callto:(315)-255-0900 | (315)-255-0900 ] Fax: [ callto:(315)-255-3539 | (315)-255-3539 ] Web: [ http://www.medent.com/ | www.medent.com ] This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). 
If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. From: "Reinis Rozitis" To: "nginx" Sent: Thursday, May 16, 2019 9:35:18 AM Subject: RE: Port Exhaustion - SQL > Yes all of those changes you have mentioned have been made. Well imo there is nothing else besides to even more decrease the FIN timeout (in a LAN that shouldn't be an issue (no slow clients)) so the lingering sockets are closed faster. Also instead of adding the network adapter(s) on the webserver you should add the interfaces on the mysql server and then either via loadbalancer or on application level use a round robin fashion (as binding to a specific local interface is harder than just connect to a different remote ip). Other than that depending on the application you might want to consider using persistent connections to MySQL or use some kind of mysql proxy between which could pool the connections to the mysql server. rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maxim at nginx.com Thu May 16 13:51:06 2019 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 16 May 2019 16:51:06 +0300 Subject: Port Exhaustion - SQL In-Reply-To: <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> Message-ID: Hi, On 16/05/2019 16:46, Brandon Mallory wrote: > This is a very busy server and tried to push our programming > department to move to persistent connections, they feel that it > could be a security issue if dealing with sensitive information > since that connection could be hijacked. We do not have an issue on > the mysql server side with?Port Exhaustion, just on the "Frontend > webserver".? We have made a lot of changes, and are currently > managing but I fear that we will reach the 65k limit again. If I > could get something to load balance LAN interfaces I could double > the port limitation. I see that haproxy has an article on this, I > love nginx and use it for other applications but maybe its the wrong > product for this senerio. I was thinking there might be a way using > proxy_bind.? 
> > https://www.haproxy.com/blog/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/ > * Nothing wrong with nginx in this scenario: https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ -- Maxim Konovalov From brandonm at medent.com Thu May 16 14:11:20 2019 From: brandonm at medent.com (Brandon Mallory) Date: Thu, 16 May 2019 10:11:20 -0400 (EDT) Subject: Port Exhaustion - SQL In-Reply-To: References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> Message-ID: <1098512873.171286.1558015880645.JavaMail.zimbra@medent.com> That is what I was thinking, I am having an issue with the listen directive, what should I use since the local port is "random" also for split_clients "$remote_addr$remote_port" $split_ip I cant use remote address since its a local address ?? same with port ? This is what I have been trying and have not had any luck upstream backend { server 192.168.99.19:3306; } server { listen 3306 proxy_pass backend; proxy_bind $split_ip; } split_clients "$remote_addr$remote_port" $split_ip { 50% 192.168.99.17; 50% 192.169.99.21; } Best Regards, Brandon Mallory Network & Systems Engineer MEDENT EMR/EHR 15 Hulbert Street Auburn, NY 13021 Phone: [ callto:(315)-255-0900 | (315)-255-0900 ] Fax: [ callto:(315)-255-3539 | (315)-255-3539 ] Web: [ http://www.medent.com/ | www.medent.com ] This message and any attachments may contain information that is protected by law as privileged and confidential, and is transmitted for the sole use of the intended recipient(s). If you are not the intended recipient, you are hereby notified that any use, dissemination, copying or retention of this e-mail or the information contained herein is strictly prohibited. If you received this e-mail in error, please immediately notify the sender by e-mail, and permanently delete this e-mail. 
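A sketch of what the split_clients attempt above looks like once the pieces are in the right blocks: split_clients must sit directly inside stream (not inside server), listen needs its semicolon, and the second source address is presumably 192.168.99.21 (the 192.169 in the original reads like a typo). Untested, and it assumes nginx 1.11.4 or later:

```nginx
stream {
    # Hash the client tuple into one of the two local source addresses.
    split_clients "$remote_addr$remote_port" $split_ip {
        50%  192.168.99.17;
        *    192.168.99.21;
    }

    upstream backend {
        server 192.168.99.19:3306;
    }

    server {
        listen 3306;              # web app connects here instead of the DB
        proxy_pass backend;
        proxy_bind $split_ip;     # alternate the outbound source address
    }
}
```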
From: "Maxim Konovalov" To: "nginx" Cc: "brandonm" Sent: Thursday, May 16, 2019 9:51:06 AM Subject: Re: Port Exhaustion - SQL Hi, On 16/05/2019 16:46, Brandon Mallory wrote: > This is a very busy server and tried to push our programming > department to move to persistent connections, they feel that it > could be a security issue if dealing with sensitive information > since that connection could be hijacked. We do not have an issue on > the mysql server side with Port Exhaustion, just on the "Frontend > webserver". We have made a lot of changes, and are currently > managing but I fear that we will reach the 65k limit again. If I > could get something to load balance LAN interfaces I could double > the port limitation. I see that haproxy has an article on this, I > love nginx and use it for other applications but maybe its the wrong > product for this senerio. I was thinking there might be a way using > proxy_bind. > > https://www.haproxy.com/blog/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/ > * Nothing wrong with nginx in this scenario: https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ -- Maxim Konovalov -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu May 16 14:14:33 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 16 May 2019 17:14:33 +0300 Subject: Port Exhaustion - SQL In-Reply-To: <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> Message-ID: <001501d50bf1$af3297d0$0d97c770$@roze.lv> > I love nginx and use it for other applications but maybe its the wrong product for this senerio Does nginx connect to mysql (like you use some kind of embedded module (perl/lua etc)?) or do you proxy some backend app? 
If not then it has no relation to this issue. > We do not have an issue on the mysql server side with Port Exhaustion, just on the "Frontend webserver". We have made a lot of changes, and are currently managing but I fear that we will reach the 65k limit again. Well it doesn't matter on which side as the tuples are constructed this way: localip:localport - remoteip:remoteport If you have a single mysql ip then it becomes: localip:localport - 192.168.99.19:3306 And then if you have a single localip it becomes: 192.168.99.17:localport - 192.168.99.19:3306 .. and 'localport' can have only ~65k values. But now instead of having multiple local ips (as not all applications support binding to a specific outgoing interface (for example as far as I know php with the default mysql(i)_connect() can't for different languages/apps/frameworks it might be different) and doing it with iptables/postrouting/snat is cumbersome, you could have just multiple remote ips which each would give you effectively 65k localports. Then your application could just connect to a random remote ip (which is more simple from code point) or you can do it with a simple haproxy loadbalance setup: frontend db bind :3306 mode tcp default_backend mysqlserver backend mysqlserver balance leastconn server c1 remotemysqlip1:3306 server c2 remotemysqlip2:3306 server c3 remotemysqlip3:3306 (would need to change the application to connect to 127.0.0.1:3306 (then again if not randomizing the 127.0.0.x you could hit the port exhaust anyways). 
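The tuple arithmetic behind that advice, as a quick check, using the 1028-65535 ephemeral range suggested earlier in the thread:

```shell
# Every distinct (local ip, remote ip, remote port) triple gets its own
# pool of ephemeral source ports, so adding remote IPs multiplies capacity.
ports=$((65535 - 1028 + 1))   # usable ephemeral ports per tuple
echo "$ports"                 # 64508
echo $((3 * ports))           # 193524 with three remote IPs
```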
rr From maxim at nginx.com Thu May 16 14:16:49 2019 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 16 May 2019 17:16:49 +0300 Subject: Port Exhaustion - SQL In-Reply-To: References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> Message-ID: <9bfc088c-8982-7979-45cb-87eb093ec638@nginx.com> On 16/05/2019 16:51, Maxim Konovalov wrote: > Hi, > > On 16/05/2019 16:46, Brandon Mallory wrote: >> This is a very busy server and tried to push our programming >> department to move to persistent connections, they feel that it >> could be a security issue if dealing with sensitive information >> since that connection could be hijacked. We do not have an issue on >> the mysql server side with?Port Exhaustion, just on the "Frontend >> webserver".? We have made a lot of changes, and are currently >> managing but I fear that we will reach the 65k limit again. If I >> could get something to load balance LAN interfaces I could double >> the port limitation. I see that haproxy has an article on this, I >> love nginx and use it for other applications but maybe its the wrong >> product for this senerio. I was thinking there might be a way using >> proxy_bind.? >> >> https://www.haproxy.com/blog/haproxy-high-mysql-request-rate-and-tcp-source-port-exhaustion/ >> * > > Nothing wrong with nginx in this scenario: > > https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ > + make sure you are not using ancient nginx version. I refer to this change in 1.11.2 and follow up change in 1.11.4 *) Feature: now nginx uses the IP_BIND_ADDRESS_NO_PORT socket option when available. 
-- 
Maxim Konovalov

From r at roze.lv Thu May 16 14:38:43 2019
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 16 May 2019 17:38:43 +0300
Subject: Port Exhaustion - SQL
In-Reply-To: <1098512873.171286.1558015880645.JavaMail.zimbra@medent.com>
References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> <1098512873.171286.1558015880645.JavaMail.zimbra@medent.com>
Message-ID: <002001d50bf5$0fea6410$2fbf2c30$@roze.lv>

Ohh, I missed the whole idea that nginx is used as a tcp balancer for mysql.

But imo it is still simpler (unless you can't do anything with the DB
server) to balance the remote server rather than split and bind local
clients:

upstream backend {
    least_conn;
    server ip1:3306;
    server ip2:3306;
    server ip3:3306;
}

rr

From brandonm at medent.com Thu May 16 15:12:54 2019
From: brandonm at medent.com (Brandon Mallory)
Date: Thu, 16 May 2019 11:12:54 -0400 (EDT)
Subject: Port Exhaustion - SQL
In-Reply-To: <002001d50bf5$0fea6410$2fbf2c30$@roze.lv>
References: <000001d50be8$965579c0$c3006d40$@roze.lv> <1497205455.160069.1558012299851.JavaMail.zimbra@medent.com> <000901d50bec$33b2c940$9b185bc0$@roze.lv> <173196454.166173.1558014373946.JavaMail.zimbra@medent.com> <1098512873.171286.1558015880645.JavaMail.zimbra@medent.com> <002001d50bf5$0fea6410$2fbf2c30$@roze.lv>
Message-ID: <1483987565.185961.1558019574251.JavaMail.zimbra@medent.com>

The programmers currently use a file to specify the IP of the remote SQL
server. As a workaround for the time being, I have added a 2nd interface
to the front-end server and the remote SQL server. These are the errors
we get when we hit the 65k limit:
mysqli_connect(): Can't connect to MySQL server on '192.168.98.19' (99)

Front end:  192.168.99.8  192.169.98.9
Remote SQL: 192.168.99.19 192.168.98.19

There is a file where I can specify which IP to use (192.168.99.19 or
192.168.98.19), so I wrote a script to change that IP when we hit 30k
connections:

SQLIP=`cat /MYSQL_HOST`
COUNT=`netstat -an | grep $SQLIP | wc -l`

if [ "$SQLIP" = "192.168.98.19" ];then
    FLIP="192.168.99.19"
else
    FLIP="192.168.98.19"
fi

if [ $COUNT -gt 30000 ];then
    echo "$FLIP" > /MYSQL_HOST
    COUNTFLIP=`netstat -an | grep $FLIP | wc -l`
    echo "$SQLIP Hit $COUNT switching IP to $FLIP with $COUNTFLIP connections - $(date)" >> $LOG
fi

It just seems like there should be a better way to have something else
"load balance" between 2 LAN IPs.

Best Regards,

Brandon Mallory
Network & Systems Engineer
MEDENT EMR/EHR
15 Hulbert Street
Auburn, NY 13021
Phone: (315)-255-0900
Fax: (315)-255-3539
Web: www.medent.com

This message and any attachments may contain information that is protected
by law as privileged and confidential, and is transmitted for the sole use
of the intended recipient(s). If you are not the intended recipient, you
are hereby notified that any use, dissemination, copying or retention of
this e-mail or the information contained herein is strictly prohibited. If
you received this e-mail in error, please immediately notify the sender by
e-mail, and permanently delete this e-mail.

From: "Reinis Rozitis"
To: "nginx"
Sent: Thursday, May 16, 2019 10:38:43 AM
Subject: RE: Port Exhaustion - SQL

Ohh, I missed the whole idea that nginx is used as a tcp balancer for mysql.
But imo it is still simpler (unless you can't do anything with the DB
server) to balance the remote server rather than split and bind local
clients:

upstream backend {
    least_conn;
    server ip1:3306;
    server ip2:3306;
    server ip3:3306;
}

rr

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yichun at openresty.com Fri May 17 08:17:12 2019
From: yichun at openresty.com (Yichun Zhang)
Date: Fri, 17 May 2019 01:17:12 -0700
Subject: [ANN] OpenResty 1.15.8.1 released
Message-ID:

Hi folks!

I am happy to announce the new formal release, 1.15.8.1, of the OpenResty
web platform based on NGINX and LuaJIT:

https://openresty.org/en/ann-1015008001.html

This release contains many big new features as well as many important bug
fixes accumulated in the past year. We now have:

1. full Aarch64 support, and the ngx.pipe API for running shell commands
   nonblockingly,
2. lua-resty-shell and lua-resty-signal libraries for simple shell
   command automation,
3. a much enhanced ngx_stream_lua module with peek(),
   ssl_certificate_by_lua*, ngx.semaphore, etc.,
4. LuaJIT without the 2G GC-managed memory limit on x86_64 by default
   (now the limit is 128TB),
5. several new table.* built-in API functions in our LuaJIT (which can
   also be JIT compiled),
6. nonblocking cosocket connect() request queuing for limiting backend
   concurrency level,
7. automatic loading of lua-resty-core by default,

and many other features. See the web page link above for more details on
the highlighted features and complete change logs.

Special thanks go to all our developers, sponsors, and contributors!
Thanks also to Thibault Charbonnier for his great help in preparing this
release. We'll try to make releases much more often in the future (one
release every 1 ~ 2 months).
OpenResty is a high-performance and dynamic web platform based on our
enhanced version of the Nginx core, our enhanced version of LuaJIT, and
many powerful Nginx modules and Lua libraries.

See OpenResty's homepage for details: https://openresty.org/

Have fun!

Best,
Yichun

---
Yichun Zhang is the creator of OpenResty, the founder and CEO of
OpenResty Inc.

From nginx-forum at forum.nginx.org Fri May 17 18:31:54 2019
From: nginx-forum at forum.nginx.org (jarstewa)
Date: Fri, 17 May 2019 14:31:54 -0400
Subject: Max_fails for proxy_pass without an upstream block
In-Reply-To: <01f10e410294bf9bcc1e756153217596.NginxMailingListEnglish@forum.nginx.org>
References: <01f10e410294bf9bcc1e756153217596.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <62f6ebd0383f33cc5930908aa9e548f5.NginxMailingListEnglish@forum.nginx.org>

Bumping this thread in the hopes that someone knows the answer.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284004,284208#msg-284208

From nginx-forum at forum.nginx.org Sat May 18 21:10:29 2019
From: nginx-forum at forum.nginx.org (NginxNewbee)
Date: Sat, 18 May 2019 17:10:29 -0400
Subject: Reading large request body using ngx_http_read_client_request_body
Message-ID: <8085dc8b00ccb2bcf371e8c0d104540f.NginxMailingListEnglish@forum.nginx.org>

Apologies if this is a trivial question. I have searched the web and none
of the answers have solved my problem. I am trying to read the request
body, and things seem to work fine if the request body is small (4 KB).
As soon as it becomes 4+ megabytes, ngx_http_read_client_request_body
returns NGX_AGAIN (-2) in rc and my RequestBodyHandler is never called.

A snippet of nginx.conf is like this for the buffer settings. I have
removed a lot of other stuff from nginx.conf to keep it brief.
http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    client_body_buffer_size 1m;
    client_max_body_size 0;
}

/* code sample start */
r->request_body_in_single_buf = 1;
r->request_body_in_persistent_file = 1;
r->request_body_in_clean_file = 1;
r->request_body_file_log_level = 0;

rc = ngx_http_read_client_request_body(r, RequestBodyHandler);
if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
    CORE_TRACE(Error, L"Nginx read request error: %d", rc);
    /* return from thread */
}

/* return NGX_DONE */
/* code sample end */

Just to add some more context here: I use the nginx thread pool, so my
module's content handler basically queues a task to the thread pool. The
callback of the thread eventually calls ngx_http_read_client_request_body
to read the request body. If there is an error (as stated in the above if
condition), the thread simply returns and the completion handler of the
thread is called by nginx. In the completion handler, I always finalize
the request with NGX_DONE.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284214,284214#msg-284214

From nginx-forum at forum.nginx.org Sun May 19 07:52:40 2019
From: nginx-forum at forum.nginx.org (naupe)
Date: Sun, 19 May 2019 03:52:40 -0400
Subject: "Welcome to nginx" and "Apache2 Ubuntu Default Page" default pages
Message-ID: <8529df586729c803a3bbbca03a8f0cb6.NginxMailingListEnglish@forum.nginx.org>

I've created a post for this over in the /r/nginx Reddit community:
https://www.reddit.com/r/nginx/comments/bq6g77/welcome_to_nginx_and_apache2_ubuntu_default_page/

How come I'm getting the "Welcome to nginx!" and "Apache2 Ubuntu Default
Page" default pages? Where do I find these "default" pages in Nginx?

The default Apache page is very odd for ownCloud. It's running Apache for
its SSL, so the default page for it could be on the ownCloud VM.
Meanwhile, Discourse does not have Nginx installed on it at all.

Can I redirect from one link to another with Nginx?
For example, could I redirect from https://oc.myreserveddns.com to
https://oc.myreserveddns.com/owncloud/index.php/login?

If you need more details, please let me know.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284216,284216#msg-284216

From mdounin at mdounin.ru Mon May 20 13:59:42 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 20 May 2019 16:59:42 +0300
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To: <8085dc8b00ccb2bcf371e8c0d104540f.NginxMailingListEnglish@forum.nginx.org>
References: <8085dc8b00ccb2bcf371e8c0d104540f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190520135942.GF1877@mdounin.ru>

Hello!

On Sat, May 18, 2019 at 05:10:29PM -0400, NginxNewbee wrote:

> Apologies if this is a trivial question. I have searched the web and
> none of the answers have solved my problem. I am trying to read the
> request body, and things seem to work fine if the request body is small
> (4 KB). As soon as it becomes 4+ megabytes,
> ngx_http_read_client_request_body returns NGX_AGAIN (-2) in rc and my
> RequestBodyHandler is never called.
>
> A snippet of nginx.conf is like this for the buffer settings. I have
> removed a lot of other stuff from nginx.conf to keep it brief.
>
> http {
>     include mime.types;
>     default_type application/octet-stream;
>
>     sendfile on;
>     keepalive_timeout 65;
>
>     client_body_buffer_size 1m;
>     client_max_body_size 0;
> }
>
> /* code sample start */
> r->request_body_in_single_buf = 1;
> r->request_body_in_persistent_file = 1;
> r->request_body_in_clean_file = 1;
> r->request_body_file_log_level = 0;
>
> rc = ngx_http_read_client_request_body(r, RequestBodyHandler);
> if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
>     CORE_TRACE(Error, L"Nginx read request error: %d", rc);
>     /* return from thread */
> }
>
> /* return NGX_DONE */
> /* code sample end */
>
> Just to add some more context here: I use the nginx thread pool, so my
> module's content handler basically queues a task to the thread pool.
> The callback of the thread eventually calls
> ngx_http_read_client_request_body to read the request body. If there is
> an error (as stated in the above if condition), the thread simply
> returns and the completion handler of the thread is called by nginx. In
> the completion handler, I always finalize the request with NGX_DONE.

In no particular order:

- You shouldn't try to use nginx functions such as
ngx_http_read_client_request_body() and objects such as "r" in threads.
While this may appear to work, in fact it does not, as nginx provides no
thread safety for these functions and objects.

- You should finalize the request with NGX_DONE only after the request
body handler is called and you are done with the request. By finalizing
the request earlier you basically say that you are done with the request
and nginx should stop processing it - so it is no surprise that your body
handler is never called.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Mon May 20 19:04:05 2019
From: nginx-forum at forum.nginx.org (NginxNewbee)
Date: Mon, 20 May 2019 15:04:05 -0400
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To: <20190520135942.GF1877@mdounin.ru>
References: <20190520135942.GF1877@mdounin.ru>
Message-ID: <11634031b845e9e59485d26ddec460a3.NginxMailingListEnglish@forum.nginx.org>

Thanks for your comments, Maxim. I truly appreciate it.

For the first comment: the reason I chose to do request processing on a
thread is so it wouldn't block nginx. We launch one thread per request
(from the content handler). There will never be multiple threads working
on a request. r is passed onto a thread callback. Inside that callback,
we extract headers, request body etc. from r and store them in our own
objects. Then we do some compute (business logic), generate a response,
and write to the output buffers of r. Then the thread returns and its
completion handler is called by nginx. In the completion handler we
finalize the request (with NGX_DONE).
The assumption here is that nginx wouldn't be messing with the object r as
long as our thread is executing (since we haven't finalized the request
yet). With this context, if you still think this isn't correct (and that,
because of the thread, I am seeing the issue reading the body for large
content sizes), I'll go ahead and change my code to not use nginx
functions on a thread.

For the second comment: I may not have explained it correctly in my first
post, but we do not call finalize request until the thread finishes.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284214,284222#msg-284222

From mdounin at mdounin.ru Mon May 20 19:41:30 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 20 May 2019 22:41:30 +0300
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To: <11634031b845e9e59485d26ddec460a3.NginxMailingListEnglish@forum.nginx.org>
References: <20190520135942.GF1877@mdounin.ru> <11634031b845e9e59485d26ddec460a3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190520194130.GG1877@mdounin.ru>

Hello!

On Mon, May 20, 2019 at 03:04:05PM -0400, NginxNewbee wrote:

> Thanks for your comments, Maxim. I truly appreciate it.
>
> For the first comment: the reason I chose to do request processing on a
> thread is so it wouldn't block nginx. We launch one thread per request
> (from the content handler). There will never be multiple threads working
> on a request. r is passed onto a thread callback. Inside that callback,
> we extract headers, request body etc. from r and store them in our own
> objects. Then we do some compute (business logic), generate a response,
> and write to the output buffers of r. Then the thread returns and its
> completion handler is called by nginx. In the completion handler we
> finalize the request (with NGX_DONE).
> With this context, if you still think this isn't correct (and that,
> because of the thread, I am seeing the issue reading the body for large
> content sizes), I'll go ahead and change my code to not use nginx
> functions on a thread.

Your assumption is not correct. At any time nginx may have reasons to do
something with r - e.g., when the client closes a connection or cancels an
HTTP/2 stream, or when something happens in a subrequest. Moreover,
functions such as ngx_http_read_client_request_body() work with various
global objects, such as the list of all timers, various event-related
things, and so on. It is simply not expected to be called from other
threads.

If you want to do your own processing in a thread, you have to extract
all the needed information (including reading the request body) in the
main thread context, and pass everything to your thread in your own
thread-safe structures.

> For the second comment: I may not have explained it correctly in my
> first post, but we do not call finalize request until the thread
> finishes.

From your explanation I assume that you return from the thread as soon as
ngx_http_read_client_request_body() returns NGX_AGAIN, so basically the
request is finalized as soon as ngx_http_read_client_request_body()
returns NGX_AGAIN. On the other hand, this is not really important, as it
is not going to work anyway - due to the above problem with trying to
call ngx_http_read_client_request_body() in a thread.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Mon May 20 20:35:51 2019
From: nginx-forum at forum.nginx.org (NginxNewbee)
Date: Mon, 20 May 2019 16:35:51 -0400
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To: <20190520194130.GG1877@mdounin.ru>
References: <20190520194130.GG1877@mdounin.ru>
Message-ID:

Thanks. Your understanding is correct. For cases when
ngx_http_read_client_request_body returns NGX_AGAIN, the thread returns
and the request will get finalized.
I'll go ahead and move reading the contents of r and the request body to
the main thread in the content handler, and just keep the execution of
our business logic on the thread.

Just one more question: once I move ngx_http_read_client_request_body
into the content handler on the main thread and it returns NGX_AGAIN, how
should I handle it? The code sample in the nginx developer reference
recommends this:

ngx_int_t
ngx_http_foo_content_handler(ngx_http_request_t *r)
{
    ngx_int_t  rc;

    rc = ngx_http_read_client_request_body(r, ngx_http_foo_init);

    if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
        /* error */
        return rc;
    }

    return NGX_DONE;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284214,284224#msg-284224

From nginx-forum at forum.nginx.org Tue May 21 04:28:18 2019
From: nginx-forum at forum.nginx.org (NginxNewbee)
Date: Tue, 21 May 2019 00:28:18 -0400
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To:
References: <20190520194130.GG1877@mdounin.ru>
Message-ID: <368362aade0f21ca21573b878b760908.NginxMailingListEnglish@forum.nginx.org>

Hey Maxim, can you please find some time to respond to my last question?
We have read the nginx code on GitHub as well, but confusion remains.
> We would really appreciate your help on it.

If you need immediate help, a support contract or an nginx consultant is
probably the way to go.

Patrick

From mdounin at mdounin.ru Tue May 21 12:51:57 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 21 May 2019 15:51:57 +0300
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To:
References: <20190520194130.GG1877@mdounin.ru>
Message-ID: <20190521125157.GK1877@mdounin.ru>

Hello!

On Mon, May 20, 2019 at 04:35:51PM -0400, NginxNewbee wrote:

> Thanks. Your understanding is correct. For cases when
> ngx_http_read_client_request_body returns NGX_AGAIN, the thread returns
> and the request will get finalized. I'll go ahead and move reading the
> contents of r and the request body to the main thread in the content
> handler, and just keep the execution of our business logic on the
> thread.
>
> Just one more question: once I move ngx_http_read_client_request_body
> into the content handler on the main thread and it returns NGX_AGAIN,
> how should I handle it? The code sample in the nginx developer
> reference recommends this:
>
> ngx_int_t
> ngx_http_foo_content_handler(ngx_http_request_t *r)
> {
>     ngx_int_t  rc;
>
>     rc = ngx_http_read_client_request_body(r, ngx_http_foo_init);
>
>     if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
>         /* error */
>         return rc;
>     }
>
>     return NGX_DONE;
> }

As long as ngx_http_read_client_request_body() is used in a content
handler and returns NGX_AGAIN, no special handling is needed. Just
returning NGX_DONE as in the example is enough.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Tue May 21 14:39:30 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 21 May 2019 17:39:30 +0300
Subject: nginx-1.17.0
Message-ID: <20190521143929.GM1877@mdounin.ru>

Changes with nginx 1.17.0                                    21 May 2019

*) Feature: variables support in the "limit_rate" and "limit_rate_after"
   directives.

*) Feature: variables support in the "proxy_upload_rate" and
   "proxy_download_rate" directives in the stream module.
*) Change: minimum supported OpenSSL version is 0.9.8.

*) Change: now the postpone filter is always built.

*) Bugfix: the "include" directive did not work inside the "if" and
   "limit_except" blocks.

*) Bugfix: in byte ranges processing.

-- 
Maxim Dounin
http://nginx.org/

From xeioex at nginx.com Tue May 21 16:18:41 2019
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 21 May 2019 19:18:41 +0300
Subject: njs-0.3.2
Message-ID:

Hello,

I'm glad to announce a new release of the NGINX JavaScript module (njs).

This release mostly focuses on stability issues in the njs core after
regular fuzzing tests were introduced.

Notable new features:

- Added ES6 template literals support:

  : > var a = "Template", b = "literals"
  : undefined
  : > `${a} ${b.toUpperCase()}!`
  : 'Template LITERALS!'

- Added ES9 RegExp "groups" object support:

  : > /(?<r>(?<no>no)?(?<yes>yes)?)/.exec('yes').groups
  { no: undefined, r: 'yes', yes: 'yes' }

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel

Changes with njs 0.3.2                                       21 May 2019

Core:

*) Feature: added support for template literals. Thanks to
   洪志道 (Hong Zhi Dao) and Artem S. Povalyukhin.

*) Feature: executing command from command line arguments.

*) Feature: added support for RegExp "groups" object (ES9).

*) Feature: added block scoped function definitions support.

*) Feature: added support for building with GNU Readline library.

*) Feature: made configurable "length", "name", and most of built-in
   methods.

*) Feature: made all constructor properties configurable.

*) Bugfix: fixed Regexp.prototype.exec() for Unicode-only regexps.

*) Bugfix: fixed njs_vm_value_dump() for empty string values.

*) Bugfix: fixed RegExp constructor for regexp value arguments.
*) Bugfix: fixed walking over prototypes chain during iteration over an
   object.

*) Bugfix: fixed overflow in Array.prototype.concat().

*) Bugfix: fixed length calculation for UTF-8 string with escape
   characters.

*) Bugfix: fixed parsing surrogate pair presents as UTF-16 escape
   sequences.

*) Bugfix: fixed processing asterisk quantifier for
   String.prototype.match().

*) Bugfix: fixed Date() constructor with one argument.

*) Bugfix: fixed arrays expansion.

*) Bugfix: fixed heap-buffer-overflow in String.prototype.replace().

*) Bugfix: fixed heap-buffer-overflow in String.prototype.lastIndexOf().

*) Bugfix: fixed regexp literals parsing with escaped backslash and
   backslash in square brackets.

*) Bugfix: fixed regexp literals with lone closing brackets.

*) Bugfix: fixed uninitialized-memory-access in
   Object.defineProperties().

*) Bugfix: fixed processing "*" quantifier for
   String.prototype.replace().

*) Bugfix: fixed Array.prototype.slice() for UTF8-invalid byte strings.

*) Bugfix: fixed String.prototype.split() for UTF8-invalid byte strings.

*) Bugfix: fixed handling of empty block statements.

From nginx-forum at forum.nginx.org Tue May 21 16:53:28 2019
From: nginx-forum at forum.nginx.org (NginxNewbee)
Date: Tue, 21 May 2019 12:53:28 -0400
Subject: Reading large request body using ngx_http_read_client_request_body
In-Reply-To: <20190521125157.GK1877@mdounin.ru>
References: <20190521125157.GK1877@mdounin.ru>
Message-ID:

Maxim - thank you so much! Much appreciated.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284214,284246#msg-284246

From kworthington at gmail.com Tue May 21 17:23:55 2019
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 21 May 2019 13:23:55 -0400
Subject: [nginx-announce] nginx-1.17.0
In-Reply-To: <20190521143935.GN1877@mdounin.ru>
References: <20190521143935.GN1877@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.17.0 for Windows
https://kevinworthington.com/nginxwin1170 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using
Cygwin-based builds of Nginx. Officially supported native Windows
binaries are at nginx.org.

Announcements are also available here:
Twitter: http://twitter.com/kworthington

Thank you,
Kevin
-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington

On Tue, May 21, 2019 at 10:39 AM Maxim Dounin wrote:

> Changes with nginx 1.17.0                                21 May 2019
>
> *) Feature: variables support in the "limit_rate" and
>    "limit_rate_after" directives.
>
> *) Feature: variables support in the "proxy_upload_rate" and
>    "proxy_download_rate" directives in the stream module.
>
> *) Change: minimum supported OpenSSL version is 0.9.8.
>
> *) Change: now the postpone filter is always built.
>
> *) Bugfix: the "include" directive did not work inside the "if" and
>    "limit_except" blocks.
>
> *) Bugfix: in byte ranges processing.
>
> -- 
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Jennie.Jia at amdocs.com Tue May 21 20:54:24 2019
From: Jennie.Jia at amdocs.com (Jennie Jia)
Date: Tue, 21 May 2019 20:54:24 +0000
Subject: Nginx with Java library
Message-ID:

Hi,

I am new to Nginx... I have a Create-React-App as a front-end, using
Nginx as the serving web server. I want to integrate with Portal. Portal
uses a Java library to encrypt the cookie. After I receive the request
from Portal, I want to decrypt the cookie using the same Java library.

I did some Google searching, and it seems Nginx-Clojure is the choice:
https://www.nginx.com/resources/wiki/modules/java_handler/#
But it mentions this 3rd-party module is typically used for Ring
handlers.

Does someone have a similar use case? Is Nginx-Clojure a good choice for
this? Is there any other way to make Nginx talk to a Java library?

Any help is appreciated!

Jennie

This email and the information contained herein is proprietary and
confidential and subject to the Amdocs Email Terms of Service, which you
may review at https://www.amdocs.com/about/email-terms-of-service
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 201904-nginx at jslf.app Wed May 22 00:27:13 2019
From: 201904-nginx at jslf.app (Patrick)
Date: Wed, 22 May 2019 08:27:13 +0800
Subject: Nginx with Java library
In-Reply-To:
References:
Message-ID: <20190522002713.GA23641@haller.ws>

On 2019-05-21 20:54, Jennie Jia wrote:
> I am new to Nginx... I have a Create-React-App as a front-end, using
> Nginx as the serving web server. I want to integrate with Portal.
> Portal uses a Java library to encrypt the cookie. After I receive the
> request from Portal, I want to decrypt the cookie using the same Java
> library.

Hi!

A) Which Java library? (have a url for it?)

B) Is the setup going to be:

    browser -> nginx -> portal

or

    browser -> portal -> nginx

? When you say you receive requests from Portal, it makes it seem like
the latter.
Patrick

From enderulusoy at gmail.com Wed May 22 11:48:38 2019
From: enderulusoy at gmail.com (ender ulusoy)
Date: Wed, 22 May 2019 14:48:38 +0300
Subject: split traffic
Message-ID:

Hi all,

I want to split 10% of the traffic on an nginx reverse proxy. I have a
setup that runs 2 versions of the website on the same servers:

new.domain.com
domain.com

I want to route the traffic and redirect 10% of it to new.domain.com on
the / location. Currently I only redirect internal requests to
new.domain.com to test the site. But now we want to test the new one with
real visitors and want their feedback. There are not many examples on the
internet, so I've decided to ask after several failed attempts.

How can I achieve this? Any ideas? Thank you.

Here is the basic configuration:

upstream servers {
    server 100.50.10.1:80;
    server 100.50.10.2:80;
}

map $remote_addr $is_web_internal {
    202.212.93.190 1;
    default 0;
}

server {
    server_name domain.com;

    location / {
        if ($is_web_internal) {
            return 301 https://new.domain.com.tr$uri;
        }
        proxy_set_header Host $host;
        proxy_set_header Connection "";
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; always";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        proxy_set_header Accept-Encoding "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://servers;
    }
}

server {
    server_name new.domain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header Connection "";
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; always";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        proxy_set_header Accept-Encoding "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://servers;
    }
}

-- 
um Gottes Willen!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From r at roze.lv Wed May 22 12:31:14 2019
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 22 May 2019 15:31:14 +0300
Subject: split traffic
In-Reply-To:
References:
Message-ID: <000201d5109a$3eea5410$bcbefc30$@roze.lv>

> How can I achieve this? Any ideas? Thank you.
>
> map $remote_addr $is_web_internal {
>     202.212.93.190 1;
>     default 0;
> }
>
> if ($is_web_internal) {
>     return 301 https://new.domain.com.tr$uri;
> }

Basically the same - you can just replace the map directive with
split_clients:

split_clients $remote_addr $is_web_internal {
    10% 1;
    *   0;
}

if ($is_web_internal) {
    return 301 https://new.domain.com.tr$uri;
}

Then roughly ~10% of the clients will get the redirect.

rr

From enderulusoy at gmail.com Wed May 22 12:47:05 2019
From: enderulusoy at gmail.com (ender ulusoy)
Date: Wed, 22 May 2019 15:47:05 +0300
Subject: split traffic
In-Reply-To: <000201d5109a$3eea5410$bcbefc30$@roze.lv>
References: <000201d5109a$3eea5410$bcbefc30$@roze.lv>
Message-ID: <8E28A5FD-DAD2-488D-9084-CEFEF4F22A70@gmail.com>

Thanks, but I got an error. I cannot find the required module.
root at Net:~# nginx -t
nginx: [emerg] unknown directive "split_clients" in /etc/nginx/nginx.conf:598
nginx: configuration file /etc/nginx/nginx.conf test failed

nginx version: nginx/1.14.1
built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
built with LibreSSL 2.7.4
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --user=nginx --group=nginx --with-cc-opt=-Wno-deprecated-declarations --without-http_ssi_module --without-http_scgi_module --without-http_uwsgi_module --without-http_geo_module --without-http_split_clients_module --without-http_memcached_module --without-http_empty_gif_module --without-http_browser_module --with-threads --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_mp4_module --with-http_auth_request_module --with-http_slice_module --with-http_stub_status_module --with-http_realip_module --with-openssl=/usr/local/src/nginx/modules/libressl-2.7.4 --add-module=/usr/local/src/nginx/modules/incubator-pagespeed-ngx-1.13.35.2-stable --add-module=/usr/local/src/nginx/modules/ngx_brotli --add-module=/usr/local/src/nginx/modules/headers-more-nginx-module-0.33 --with-http_geoip_module --add-module=/usr/local/src/nginx/modules/ngx_cache_purge

I think the right module is ngx_http_tnt_module.so, but I cannot find how
to install this module. Is there any other chance without the split
module?

> On 22 May 2019, at 15:31, Reinis Rozitis wrote:
>
>> How can I achieve this? Any ideas? Thank you.
>> map $remote_addr $is_web_internal {
>>     202.212.93.190 1;
>>     default 0;
>> }
>>
>> if ($is_web_internal) {
>>     return 301 https://new.domain.com.tr$uri;
>> }
>
> Basically the same - you can just replace the map directive with split_clients:
>
> split_clients $remote_addr $is_web_internal {
>     10% 1;
>     *   0;
> }
>
> if ($is_web_internal) {
>     return 301 https://new.domain.com.tr$uri;
> }
>
> Then roughly ~10% of the clients will get the redirect.
>
> rr

From r at roze.lv Wed May 22 13:26:41 2019
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 22 May 2019 16:26:41 +0300
Subject: split traffic
In-Reply-To: <8E28A5FD-DAD2-488D-9084-CEFEF4F22A70@gmail.com>
References: <000201d5109a$3eea5410$bcbefc30$@roze.lv> <8E28A5FD-DAD2-488D-9084-CEFEF4F22A70@gmail.com>
Message-ID: <000801d510a1$fe437330$faca5990$@roze.lv>

> Thanks but I got an error. I can not find the required module.
>
> root at Net:~# nginx -t
> nginx: [emerg] unknown directive "split_clients" in /etc/nginx/nginx.conf:598
> nginx: configuration file /etc/nginx/nginx.conf test failed

Well, your nginx is compiled using --without-http_split_clients_module, so the module (which is enabled by default) is not available. I'm not very familiar with Ubuntu packaging and repository content, but you could try to install 'nginx-full'.
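As an aside, whether a module was disabled at build time can be read straight off the `nginx -V` configure arguments. A small sketch (the configure string below is abbreviated from the output posted earlier in the thread):

```shell
# ngx_http_split_clients_module is built in by default, so it is only
# missing when it was explicitly disabled at build time. Grep the
# configure arguments (normally obtained via `nginx -V 2>&1`) for the flag:
config_args='--prefix=/etc/nginx --without-http_split_clients_module --with-http_ssl_module'

if printf '%s\n' "$config_args" | grep -q -- '--without-http_split_clients_module'; then
    echo "split_clients was compiled out"
else
    echo "split_clients is available"
fi
```

On a live system the same check is `nginx -V 2>&1 | grep -o 'without-http_split_clients_module'`.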
rr From francis at daoine.org Wed May 22 13:33:39 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 22 May 2019 14:33:39 +0100 Subject: split traffic In-Reply-To: <8E28A5FD-DAD2-488D-9084-CEFEF4F22A70@gmail.com> References: <000201d5109a$3eea5410$bcbefc30$@roze.lv> <8E28A5FD-DAD2-488D-9084-CEFEF4F22A70@gmail.com> Message-ID: <20190522133339.xctoq4wzzn66qldg@daoine.org> On Wed, May 22, 2019 at 03:47:05PM +0300, ender ulusoy wrote: Hi there, > Thanks but I got an error. I can not find the required module. > > root at Net:~# nginx -t > nginx: [emerg] unknown directive "split_clients" in /etc/nginx/nginx.conf:598 > nginx: configuration file /etc/nginx/nginx.conf test failed http://nginx.org/r/split_clients says it is from ngx_http_split_clients_module > nginx version: nginx/1.14.1 > built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04) > built with LibreSSL 2.7.4 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx ... > --without-http_split_clients_module ... which your nginx builder explicitly removed. It's probably simplest to build the nginx that you want, or to acquire a nginx that includes the things that you want. > Any other chance without split module? Probably not without doing more work than using the module that is designed to do the thing that you want to do. Note, though, that by using an external redirect in 10% of requests to one server_name, you will possibly end up much more than 10% of users on the other server_name, because a user on that one will possibly stay on that one. That may not matter in this case. 
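The split itself is deterministic: split_clients hashes its key (here $remote_addr) with MurmurHash2, so a given client lands in the same bucket on every request. A sketch of the redirect in context (domain names are placeholders; note that $uri, as used earlier in the thread, drops the query string, while $request_uri preserves it):

    split_clients $remote_addr $is_web_internal {
        10% 1;
        *   0;
    }

    server {
        listen      80;
        server_name old.domain.example;

        # Same $remote_addr => same bucket on every request,
        # so a client is consistently redirected or not.
        if ($is_web_internal) {
            return 301 https://new.domain.example$request_uri;
        }
    }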
Good luck with it, f -- Francis Daly francis at daoine.org From enderulusoy at gmail.com Wed May 22 13:36:58 2019 From: enderulusoy at gmail.com (ender ulusoy) Date: Wed, 22 May 2019 16:36:58 +0300 Subject: split traffic In-Reply-To: <20190522133339.xctoq4wzzn66qldg@daoine.org> References: <000201d5109a$3eea5410$bcbefc30$@roze.lv> <8E28A5FD-DAD2-488D-9084-CEFEF4F22A70@gmail.com> <20190522133339.xctoq4wzzn66qldg@daoine.org> Message-ID: I've recompiled the nginx. That was the only way. Now it works fine. Thanks guys. 22 May 2019 ?ar 16:33 tarihinde Francis Daly ?unu yazd?: > On Wed, May 22, 2019 at 03:47:05PM +0300, ender ulusoy wrote: > > Hi there, > > > Thanks but I got an error. I can not find the required module. > > > > root at Net:~# nginx -t > > nginx: [emerg] unknown directive "split_clients" in > /etc/nginx/nginx.conf:598 > > nginx: configuration file /etc/nginx/nginx.conf test failed > > http://nginx.org/r/split_clients says it is from > ngx_http_split_clients_module > > > nginx version: nginx/1.14.1 > > built by gcc 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04) > > built with LibreSSL 2.7.4 > > TLS SNI support enabled > > configure arguments: --prefix=/etc/nginx > ... > > --without-http_split_clients_module > ... > > which your nginx builder explicitly removed. > > It's probably simplest to build the nginx that you want, or to acquire > a nginx that includes the things that you want. > > > Any other chance without split module? > > Probably not without doing more work than using the module that is > designed to do the thing that you want to do. > > > Note, though, that by using an external redirect in 10% of requests to > one server_name, you will possibly end up much more than 10% of users > on the other server_name, because a user on that one will possibly stay > on that one. That may not matter in this case. 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jennie.Jia at amdocs.com Wed May 22 14:02:34 2019 From: Jennie.Jia at amdocs.com (Jennie Jia) Date: Wed, 22 May 2019 14:02:34 +0000 Subject: Nginx with Java library In-Reply-To: <20190522002713.GA23641@haller.ws> References: <20190522002713.GA23641@haller.ws> Message-ID: Hi Patrick, The java library is ONAP Portal/SDK The steps is: I login to Portal first, from the Portal, I click the Icon of my application component.... So for your question B), it is Brower-portal-nginx. I am trying do POC to see if nginx can do the work through the nginx.conf + using Portal SDK decryption method. Thanks Jennie -----Original Message----- From: nginx On Behalf Of Patrick Sent: Tuesday, May 21, 2019 8:27 PM To: nginx at nginx.org Subject: Re: Nginx with Java library On 2019-05-21 20:54, Jennie Jia wrote: > I am new to Nginx...I have a Create-React-App as a Front-end, using Nginx as serving web server. I want integrate with Portal. Portal uses a Java library to encrypt the cookie. After I received the request from Portal, I want decrypt the cookie using same Java library. Hi! A) Which java library? (have a url for it?) B) is the setup going to be: browser -> nginx -> portal or browser -> portal -> nginx ? When you say you receive requests from Portal, it makes it seem like the latter. 
Patrick _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service From gfrankliu at gmail.com Wed May 22 17:30:57 2019 From: gfrankliu at gmail.com (Frank Liu) Date: Wed, 22 May 2019 10:30:57 -0700 Subject: nginx proxy and Date header Message-ID: Is there a reason why by default nginx doesn't pass the "Date" header from upstream? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html seems to indicate Date header shouldn't be altered: The HTTP-date sent in a Date header SHOULD NOT represent a date and time subsequent to the generation of the message. It SHOULD represent the best available approximation of the date and time of message generation, unless the implementation has no means of generating a reasonably accurate date and time. In theory, the date ought to represent the moment just before the entity is generated. In practice, the date can be generated at any time during the message origination without affecting its semantic value. If nginx, as a proxy, changes the Date header, it may mess up the caching model of HTTP in some pretty subtle ways in the downstream. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From 201904-nginx at jslf.app Thu May 23 00:34:55 2019 From: 201904-nginx at jslf.app (Patrick) Date: Thu, 23 May 2019 08:34:55 +0800 Subject: Nginx with Java library In-Reply-To: References: <20190522002713.GA23641@haller.ws> Message-ID: <20190523003455.GA21970@haller.ws> On 2019-05-22 14:02, Jennie Jia wrote: > The java library is ONAP Portal/SDK If you can find the cookie decryption library, then you can call it from clojure -- though spinning up a clojure/java runtime to get access to a cookie seems to be expensive unless you're already writing clojure code. Patrick From mdounin at mdounin.ru Thu May 23 09:54:51 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 May 2019 12:54:51 +0300 Subject: nginx proxy and Date header In-Reply-To: References: Message-ID: <20190523095451.GZ1877@mdounin.ru> Hello! On Wed, May 22, 2019 at 10:30:57AM -0700, Frank Liu wrote: > Is there a reason why by default nginx doesn't pass the "Date" header from > upstream? > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header > > https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html seems to indicate > Date header shouldn't be altered: > > The HTTP-date sent in a Date header SHOULD NOT represent a date and time > subsequent to the generation of the message. It SHOULD represent the best > available approximation of the date and time of message generation, unless > the implementation has no means of generating a reasonably accurate date > and time. In theory, the date ought to represent the moment just before the > entity is generated. In practice, the date can be generated at any time > during the message origination without affecting its semantic value. > > If nginx, as a proxy, changes the Date header, it may mess up the caching > model of HTTP in some pretty subtle ways in the downstream. 
And not changing the Date will mess pretty badly with various things as introduced in nginx, such as by the "expires" directive (not to mention SSI, as well as returning various responses from cache). Since nginx is primarily developed as a frontend server, it is generally considered to be an origin server in terms of RFC 2616. Accordingly, "generation of the message" in the quote above happens in nginx, and it uses the current date by default. If you want to preserve Date as returned by the upstream server, you can use the proxy_pass_header directive to do so. -- Maxim Dounin http://mdounin.ru/ From satcse88 at gmail.com Thu May 23 12:33:51 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Thu, 23 May 2019 20:33:51 +0800 Subject: Proxy Pass Message-ID: Hi Team, Currently, we are using the below config to route the requests from one server to another backend server. Server1 location /abc { proxy_pass https://1.1.1.1/abc; } Server 2 (1.1.1.1) location /abc { proxy_pass http://127.0.0.1:1111/abc; } Instead of IP address, if we use FQDN with https, do we have to validate the SSL certificate on Proxy_Pass?. Due to IP address, multiple sites on the server Nginx access log logging the same requests. Is the above Nginx config, correct way of doing it?. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu May 23 14:58:07 2019 From: r at roze.lv (Reinis Rozitis) Date: Thu, 23 May 2019 17:58:07 +0300 Subject: Proxy Pass In-Reply-To: References: Message-ID: <001f01d51177$ee667490$cb335db0$@roze.lv> > Instead of IP address, if we use FQDN with https, do we have to validate the SSL certificate on Proxy_Pass?. By default the certificate validation is turned off (and nginx just uses the ssl for traffic encryption). 
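For reference, upstream certificate verification can be switched on explicitly with the proxy_ssl_* directives; a minimal sketch (the hostname and CA-bundle path are illustrative placeholders, not taken from this thread):

    location /abc {
        proxy_pass https://backend.example.com/abc;

        # Verify the upstream certificate chain against a trusted CA bundle.
        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;

        # Send SNI and check the certificate against the expected name
        # (by default the name is taken from the proxy_pass host).
        proxy_ssl_server_name         on;
        proxy_ssl_name                backend.example.com;
    }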
If needed you can enable it with proxy_ssl_verify on; ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_verify ) > Due to IP address, multiple sites on the server Nginx access log logging the same requests. If you do the logging on the backend server (and there are multiple virtualhosts) and proxy_pass is via http://ip, then you need/have to pass also the Host header. Either by passing the Host header from original request: proxy_set_header Host $host; or you can specify a custom one: proxy_set_header Host "some.domain"; > Is the above Nginx config, correct way of doing it?. Depends on your setup and what you want to achieve. For example if your location blocks match the request on the backend you can omit the URI in the proxy_pass directive: location /abc { proxy_pass https://1.1.1.1; } rr From gfrankliu at gmail.com Thu May 23 16:41:58 2019 From: gfrankliu at gmail.com (Frank Liu) Date: Thu, 23 May 2019 09:41:58 -0700 Subject: nginx proxy and Date header In-Reply-To: <20190523095451.GZ1877@mdounin.ru> References: <20190523095451.GZ1877@mdounin.ru> Message-ID: I understand your argument when nginx is normally used as a frontend server and "generation of the message" happens in nginx, but in this case, the document is about the nginx proxy module where nginx proxies the "message" originated from upstream. When nginx is used as a reverse proxy, I think nginx proxy module should NOT remove the "Date" header from origin by default. Other reverse proxies/caches, like Varnish, are doing the same: http://ronwilliams.io/blog/post/rfc7231-compliant-http-date-headers On Thu, May 23, 2019 at 2:55 AM Maxim Dounin wrote: > Hello! > > On Wed, May 22, 2019 at 10:30:57AM -0700, Frank Liu wrote: > > > Is there a reason why by default nginx doesn't pass the "Date" header > from > > upstream? 
> > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header > > > > https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html seems to > indicate > > Date header shouldn't be altered: > > > > The HTTP-date sent in a Date header SHOULD NOT represent a date and time > > subsequent to the generation of the message. It SHOULD represent the best > > available approximation of the date and time of message generation, > unless > > the implementation has no means of generating a reasonably accurate date > > and time. In theory, the date ought to represent the moment just before > the > > entity is generated. In practice, the date can be generated at any time > > during the message origination without affecting its semantic value. > > > > If nginx, as a proxy, changes the Date header, it may mess up the caching > > model of HTTP in some pretty subtle ways in the downstream. > > And not changing the Date will mess pretty badly with various > things as introduced in nginx, such as by the "expires" > directive (not to mention SSI, as well as returning various > responses from cache). > > Since nginx is primarily developed as a frontend server, it is > generally considered to be an origin server in terms of RFC 2616. > Accordingly, "generation of the message" in the quote above > happens in nginx, and it uses the current date by default. > > If you want to preserve Date as returned by the upstream server, > you can use the proxy_pass_header directive to do so. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Jennie.Jia at amdocs.com Thu May 23 21:29:32 2019
From: Jennie.Jia at amdocs.com (Jennie Jia)
Date: Thu, 23 May 2019 21:29:32 +0000
Subject: Nginx with Java library
In-Reply-To: <20190523003455.GA21970@haller.ws>
References: <20190522002713.GA23641@haller.ws> <20190523003455.GA21970@haller.ws>
Message-ID:

I am following the example posted at https://github.com/nginx-clojure/nginx-clojure/blob/master/example-projects/c-module-integration-example/src/java/example/MyHandler.java.

Here is my code to do the logic, checking against the java library provided by nginx-clojure: https://github.com/nginx-clojure/nginx-clojure/tree/master/src/java/nginx/clojure/java. I cannot figure out how to get the "cookie" header value from the http request (it uses NginxJavaRequest.java here). Any help is appreciated!

Here is my nginx.conf:

location /java {
    content_handler_type java;
    content_handler_name example.MyHandler;
}

Here is my java code:

package example;

import static nginx.clojure.MiniConstants.*;
import static nginx.clojure.NginxClojureRT.*;

import java.io.IOException;
import java.util.Map;

import org.onap.portalsdk.core.onboarding.util.CipherUtil;

import nginx.clojure.MiniConstants;
import nginx.clojure.java.NginxJavaRequest;
import nginx.clojure.java.NginxJavaRingHandler;

public class MyHandler implements NginxJavaRingHandler {

    public MyHandler() {
    }

    public Object[] invoke(Map request) throws IOException {

        NginxJavaRequest req = ((NginxJavaRequest) request);

        // TODO Need Figure out how to get Cookie from the request
        String encryptedCookie = req.getVariable("cookie"); // not sure if it is correct

        if (encryptedCookie == null)
            return new Object[] {
                NGX_HTTP_UNAUTHORIZED, // http status 401
            };

        String decryptedCookie = "";
        try {
            decryptedCookie = CipherUtil.decryptPKC(encryptedCookie);
        } catch (Exception e) {
        }

        return new Object[] {
            NGX_HTTP_OK, // http status 200
        };
    }
}

-----Original Message-----
From: nginx On Behalf Of Patrick
Sent: Wednesday, May 22, 2019 8:35 PM
To:
nginx at nginx.org Subject: Re: Nginx with Java library On 2019-05-22 14:02, Jennie Jia wrote: > The java library is ONAP Portal/SDK If you can find the cookie decryption library, then you can call it from clojure -- though spinning up a clojure/java runtime to get access to a cookie seems to be expensive unless you're already writing clojure code. Patrick _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service From satcse88 at gmail.com Thu May 23 23:35:30 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 24 May 2019 07:35:30 +0800 Subject: Proxy Pass In-Reply-To: <001f01d51177$ee667490$cb335db0$@roze.lv> References: <001f01d51177$ee667490$cb335db0$@roze.lv> Message-ID: Hi Rozitis, Thanks for your reply. On Thu, May 23, 2019, 10:58 PM Reinis Rozitis wrote: > > Instead of IP address, if we use FQDN with https, do we have to validate > the SSL certificate on Proxy_Pass?. > > By default the certificate validation is turned off (and nginx just uses > the ssl for traffic encryption). > If needed you can enable it with proxy_ssl_verify on; ( > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_verify > ) > > > > Due to IP address, multiple sites on the server Nginx access log logging > the same requests. > > If you do the logging on the backend server (and there are multiple > virtualhosts) and proxy_pass is via http://ip, then you need/have to pass > also the Host header. > > Either by passing the Host header from original request: > > proxy_set_header Host $host; > > or you can specify a custom one: > > proxy_set_header Host "some.domain"; > > > > > Is the above Nginx config, correct way of doing it?. > > Depends on your setup and what you want to achieve. 
> > > For example if your location blocks match the request on the backend you > can omit the URI in the proxy_pass directive: > > location /abc { > proxy_pass https://1.1.1.1; > } > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 24 00:04:31 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 24 May 2019 08:04:31 +0800 Subject: Proxy Pass In-Reply-To: References: <001f01d51177$ee667490$cb335db0$@roze.lv> Message-ID: Hi Team, I am already setting below headers. server 1 server_name abc.com; access_log /var/log/nginx/abc.access.log; error_log /var/log/nginx/abc.error.log warn; location /abc { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass_header X_CUSTOM_HEADER; proxy_pass https://1.1.1.1/abc; } Backend Server server_name def.com; access_log /var/log/nginx/def.access.log; error_log /var/log/nginx/def.error.log warn; location /abc { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://1.1.1.1/abc; } server_name ghi.co.com; access_log /var/log/nginx/ghi.access.log; error_log /var/log/nginx/ghi.error.log warn; location /xyz { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://1.1.1.1/xyz; } If I enable this Nginx Config, the virtual hosts on the Backend server like ghi.co.com are logging with the requests coming to def.com Can you let me know,what I am missing and why I am facing incorrect logging requests to the file /var/log/nginx/ghi.access.log. 
Thanks & Regards Sathish Kumar.V On Fri, May 24, 2019 at 7:35 AM Sathish Kumar wrote: > Hi Rozitis, > > Thanks for your reply. > > > > > On Thu, May 23, 2019, 10:58 PM Reinis Rozitis wrote: > >> > Instead of IP address, if we use FQDN with https, do we have to >> validate the SSL certificate on Proxy_Pass?. >> >> By default the certificate validation is turned off (and nginx just uses >> the ssl for traffic encryption). >> If needed you can enable it with proxy_ssl_verify on; ( >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_verify >> ) >> >> >> > Due to IP address, multiple sites on the server Nginx access log >> logging the same requests. >> >> If you do the logging on the backend server (and there are multiple >> virtualhosts) and proxy_pass is via http://ip, then you need/have to >> pass also the Host header. >> >> Either by passing the Host header from original request: >> >> proxy_set_header Host $host; >> >> or you can specify a custom one: >> >> proxy_set_header Host "some.domain"; >> >> >> >> > Is the above Nginx config, correct way of doing it?. >> >> Depends on your setup and what you want to achieve. >> >> >> For example if your location blocks match the request on the backend you >> can omit the URI in the proxy_pass directive: >> >> location /abc { >> proxy_pass https://1.1.1.1; >> } >> >> rr >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 24 00:07:04 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 24 May 2019 08:07:04 +0800 Subject: Proxy Pass In-Reply-To: References: <001f01d51177$ee667490$cb335db0$@roze.lv> Message-ID: Hi Team, Please ignore my previous email. Kindly check the below config and suggest me a solution. 
server 1 server_name abc.com; access_log /var/log/nginx/abc.access.log; error_log /var/log/nginx/abc.error.log warn; location /abc { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass_header X_CUSTOM_HEADER; proxy_pass https://1.1.1.1/abc; } Backend Server server_name def.com; access_log /var/log/nginx/def.access.log; error_log /var/log/nginx/def.error.log warn; location /abc { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://localhost:1111; } server_name ghi.co.com; access_log /var/log/nginx/ghi.access.log; error_log /var/log/nginx/ghi.error.log warn; location /xyz { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://localhost:2222; } If I enable this Nginx Config, the virtual hosts on the Backend server like ghi.co.com are logging with the requests coming to def.com Can you let me know,what I am missing and why I am facing incorrect logging requests to the file /var/log/nginx/ghi.access.log. Thanks & Regards Sathish Kumar.V On Fri, May 24, 2019 at 8:04 AM Sathish Kumar wrote: > Hi Team, > > I am already setting below headers. 
> server 1 > server_name abc.com; > access_log /var/log/nginx/abc.access.log; > error_log /var/log/nginx/abc.error.log warn; > location /abc { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > proxy_pass_header X_CUSTOM_HEADER; > proxy_pass https://1.1.1.1/abc; > > } > > Backend Server > > server_name def.com; > access_log /var/log/nginx/def.access.log; > error_log /var/log/nginx/def.error.log warn; > location /abc { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $host; > proxy_pass http://1.1.1.1/abc; > > } > > server_name ghi.co.com; > access_log /var/log/nginx/ghi.access.log; > error_log /var/log/nginx/ghi.error.log warn; > location /xyz { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $host; > proxy_pass http://1.1.1.1/xyz; > > } > > > If I enable this Nginx Config, the virtual hosts on the Backend server > like ghi.co.com are logging with the requests coming to def.com > > Can you let me know,what I am missing and why I am facing incorrect > logging requests to the file /var/log/nginx/ghi.access.log. > > Thanks & Regards > Sathish Kumar.V > > > On Fri, May 24, 2019 at 7:35 AM Sathish Kumar wrote: > >> Hi Rozitis, >> >> Thanks for your reply. >> >> >> >> >> On Thu, May 23, 2019, 10:58 PM Reinis Rozitis wrote: >> >>> > Instead of IP address, if we use FQDN with https, do we have to >>> validate the SSL certificate on Proxy_Pass?. >>> >>> By default the certificate validation is turned off (and nginx just uses >>> the ssl for traffic encryption). >>> If needed you can enable it with proxy_ssl_verify on; ( >>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_verify >>> ) >>> >>> >>> > Due to IP address, multiple sites on the server Nginx access log >>> logging the same requests. 
>>> >>> If you do the logging on the backend server (and there are multiple >>> virtualhosts) and proxy_pass is via http://ip, then you need/have to >>> pass also the Host header. >>> >>> Either by passing the Host header from original request: >>> >>> proxy_set_header Host $host; >>> >>> or you can specify a custom one: >>> >>> proxy_set_header Host "some.domain"; >>> >>> >>> >>> > Is the above Nginx config, correct way of doing it?. >>> >>> Depends on your setup and what you want to achieve. >>> >>> >>> For example if your location blocks match the request on the backend you >>> can omit the URI in the proxy_pass directive: >>> >>> location /abc { >>> proxy_pass https://1.1.1.1; >>> } >>> >>> rr >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From satcse88 at gmail.com Fri May 24 01:23:44 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 24 May 2019 09:23:44 +0800 Subject: Proxy Pass In-Reply-To: References: <001f01d51177$ee667490$cb335db0$@roze.lv> Message-ID: Hi All, I have now tried to use FQDN but still same issue. 
server 1 server_name abc.com; access_log /var/log/nginx/abc.access.log; error_log /var/log/nginx/abc.error.log warn; location /abc { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass_header X_CUSTOM_HEADER; proxy_pass https://def.com/abc; } Backend Server server_name def.com; access_log /var/log/nginx/def.access.log; error_log /var/log/nginx/def.error.log warn; location /abc { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://localhost:1111; } server_name ghi.co.com; access_log /var/log/nginx/ghi.access.log; error_log /var/log/nginx/ghi.error.log warn; location /xyz { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://localhost:2222; } Still the /var/log/nginx/ghi.access.log loaded with the requests which comes to def.com. Can you help me fix this issue. Thanks & Regards Sathish Kumar.V On Fri, May 24, 2019 at 8:07 AM Sathish Kumar wrote: > Hi Team, > > Please ignore my previous email. Kindly check the below config and suggest > me a solution. 
> > server 1 > server_name abc.com; > access_log /var/log/nginx/abc.access.log; > error_log /var/log/nginx/abc.error.log warn; > location /abc { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $http_host; > proxy_pass_header X_CUSTOM_HEADER; > proxy_pass https://1.1.1.1/abc; > > } > > Backend Server > > server_name def.com; > access_log /var/log/nginx/def.access.log; > error_log /var/log/nginx/def.error.log warn; > location /abc { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $host; > proxy_pass http://localhost:1111; > > } > > server_name ghi.co.com; > access_log /var/log/nginx/ghi.access.log; > error_log /var/log/nginx/ghi.error.log warn; > location /xyz { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host $host; > proxy_pass http://localhost:2222; > > } > > > If I enable this Nginx Config, the virtual hosts on the Backend server > like ghi.co.com are logging with the requests coming to def.com > > Can you let me know,what I am missing and why I am facing incorrect > logging requests to the file /var/log/nginx/ghi.access.log. > > Thanks & Regards > Sathish Kumar.V > > > On Fri, May 24, 2019 at 8:04 AM Sathish Kumar wrote: > >> Hi Team, >> >> I am already setting below headers. 
>> server 1 >> server_name abc.com; >> access_log /var/log/nginx/abc.access.log; >> error_log /var/log/nginx/abc.error.log warn; >> location /abc { >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header Host $http_host; >> proxy_pass_header X_CUSTOM_HEADER; >> proxy_pass https://1.1.1.1/abc; >> >> } >> >> Backend Server >> >> server_name def.com; >> access_log /var/log/nginx/def.access.log; >> error_log /var/log/nginx/def.error.log warn; >> location /abc { >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header Host $host; >> proxy_pass http://1.1.1.1/abc; >> >> } >> >> server_name ghi.co.com; >> access_log /var/log/nginx/ghi.access.log; >> error_log /var/log/nginx/ghi.error.log warn; >> location /xyz { >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header Host $host; >> proxy_pass http://1.1.1.1/xyz; >> >> } >> >> >> If I enable this Nginx Config, the virtual hosts on the Backend server >> like ghi.co.com are logging with the requests coming to def.com >> >> Can you let me know,what I am missing and why I am facing incorrect >> logging requests to the file /var/log/nginx/ghi.access.log. >> >> Thanks & Regards >> Sathish Kumar.V >> >> >> On Fri, May 24, 2019 at 7:35 AM Sathish Kumar wrote: >> >>> Hi Rozitis, >>> >>> Thanks for your reply. >>> >>> >>> >>> >>> On Thu, May 23, 2019, 10:58 PM Reinis Rozitis wrote: >>> >>>> > Instead of IP address, if we use FQDN with https, do we have to >>>> validate the SSL certificate on Proxy_Pass?. >>>> >>>> By default the certificate validation is turned off (and nginx just >>>> uses the ssl for traffic encryption). 
>>>> If needed you can enable it with proxy_ssl_verify on; ( >>>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_verify >>>> ) >>>> >>>> >>>> > Due to IP address, multiple sites on the server Nginx access log >>>> logging the same requests. >>>> >>>> If you do the logging on the backend server (and there are multiple >>>> virtualhosts) and proxy_pass is via http://ip, then you need/have to >>>> pass also the Host header. >>>> >>>> Either by passing the Host header from original request: >>>> >>>> proxy_set_header Host $host; >>>> >>>> or you can specify a custom one: >>>> >>>> proxy_set_header Host "some.domain"; >>>> >>>> >>>> >>>> > Is the above Nginx config, correct way of doing it?. >>>> >>>> Depends on your setup and what you want to achieve. >>>> >>>> >>>> For example if your location blocks match the request on the backend >>>> you can omit the URI in the proxy_pass directive: >>>> >>>> location /abc { >>>> proxy_pass https://1.1.1.1; >>>> } >>>> >>>> rr >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From 201904-nginx at jslf.app Fri May 24 02:02:34 2019 From: 201904-nginx at jslf.app (Patrick) Date: Fri, 24 May 2019 10:02:34 +0800 Subject: Nginx with Java library In-Reply-To: References: <20190522002713.GA23641@haller.ws> <20190523003455.GA21970@haller.ws> Message-ID: <20190524020234.GA1911@haller.ws> On 2019-05-23 21:29, Jennie Jia wrote: > Here is my code to do the logic, but I checking the java library > provided by this nginx-clojure > https://github.com/nginx-clojure/nginx-clojure/tree/master/src/java/nginx/clojure/java. > I can not figure out how to get "cookie" header value from the http > request ( it use NginxJavaRequest.java here. Any help are > appreciated.! 
> public Object[] invoke(Map request) throws IOException {
>
>     NginxJavaRequest req = ((NginxJavaRequest)request);
>
>     // TODO Need Figure out how to get Cookie from the request
>     //String encryptedCookie = req.getVariable("cookie");

String encryptedCookie = (String) ((Map)request.get(HEADERS)).get("cookie");

If all that is needed is the decrypted cookie contents, and the backend is
written in another language, consider porting the cookie decryption code to
that language, as ONAP uses the SunJCE flavor of AES/CBC/NoPadding.

Note that to decrypt the cookie, the ONAP crypto config needs to be ported
to the nginx-clojure app as well.

Patrick

From user24 at inbox.lv Fri May 24 08:27:23 2019
From: user24 at inbox.lv (User)
Date: Fri, 24 May 2019 08:27:23 +0000
Subject: args and rewrite vars always empty
Message-ID: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv>

Hello,

I'm trying to make a simple rewrite work and found that $args and the other
$1 vars from rewrite & try_files are always empty.

nginx version: nginx/1.10.3

Server config:

  location /product/ {
    rewrite ^/product/(.*)/$ /$1.txt last;
    # try_files $uri/ /test.php?test=$uri; # tried, then the server conf was simplified and all php environment was switched off for testing
  }

Request: domain.com/product/android/

Expected result: the "android.txt" file

Actual result: nginx reads the ".txt" file.

Error log with notice:

2019/05/24 07:51:55 [notice] 24217#24217: *560218 rewritten data: "/.txt", args: "", client: 1.1.1.1, server: domain.com, request: "GET /product/android/ HTTP/1.1", host: "domain.com"
2019/05/24 07:51:55 [error] 24217#24217: *560218 open() "/home/user/domain.com/.txt" failed (2: No such file or directory), client: 1.1.1.1, server: domain.com, request: "GET /product/android/ HTTP/1.1", host: "domain.com"

Thanks!
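The capture itself can be sanity-checked outside nginx: with the same regular expression, the first group should indeed be the directory name, so if $1 comes out empty inside nginx, the value is being lost in the configuration rather than in the match. A minimal sketch in plain Java (java.util.regex is close enough to PCRE for this particular pattern):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RewriteCaptureCheck {
    public static void main(String[] args) {
        // Same pattern as in the rewrite directive: ^/product/(.*)/$
        Pattern pattern = Pattern.compile("^/product/(.*)/$");
        Matcher matcher = pattern.matcher("/product/android/");

        // Group 1 corresponds to $1 in "rewrite ^/product/(.*)/$ /$1.txt last;"
        if (matcher.matches()) {
            System.out.println("$1 = " + matcher.group(1)); // prints "$1 = android"
        }
    }
}
```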
From 201904-nginx at jslf.app Fri May 24 08:41:18 2019
From: 201904-nginx at jslf.app (Patrick)
Date: Fri, 24 May 2019 16:41:18 +0800
Subject: args and rewrite vars always empty
In-Reply-To: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv>
References: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv>
Message-ID: <20190524084118.GB20802@haller.ws>

On 2019-05-24 08:27, User via nginx wrote:
> I'm trying to make simple rewrite to work and found that $args and other
> $1 vars from rewrite&try_files are always empty.
>
>   location /product/ {
>     rewrite ^/product/(.*)/$ /$1.txt last;
>   }

Use 'break' instead of 'last' as per
https://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite

Patrick

From user24 at inbox.lv Fri May 24 08:55:09 2019
From: user24 at inbox.lv (User)
Date: Fri, 24 May 2019 08:55:09 +0000
Subject: args and rewrite vars always empty
In-Reply-To: <20190524084118.GB20802@haller.ws>
References: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv> <20190524084118.GB20802@haller.ws>
Message-ID: 

On 5/24/19 8:41 AM, Patrick wrote:
> Use 'break' instead of 'last' as per
>
> https://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite

Unfortunately, it did not help.

A small addition: only try_files loses $args; rewrite shows the args in the error log for the following request: domain.com/product/android/?arg=test

But the problem is that it does not fill the $1 variable from (.*) ($1 must be "android"), even though, as seen in the logs, (.*) matches the URL.

This rewrite now:

  rewrite ^/pics/(.*)/$ /$1 break;

always redirects to the index page, because it never tries to load the "android" file :(

Thanks
On Fri, May 24, 2019 at 08:27:23AM +0000, User via nginx wrote:

> Hello,
>
> I'm trying to make simple rewrite to work and found that $args and other
> $1 vars from rewrite&try_files are always empty.
>
> nginx version: nginx/1.10.3
>
> Server config:
>
>   location /product/ {
>     rewrite ^/product/(.*)/$ /$1.txt last;
>     # try_files $uri/ /test.php?test=$uri; # tries, then server conf
>     # was simplifies and all php environment was switched off for testing
>   }
>
> Request: domain.com/product/android/
>
> Expected result: "android.txt" file
>
> Real result: read ".txt" file.
>
> Error log with notice:
>
> 2019/05/24 07:51:55 [notice] 24217#24217: *560218 rewritten data: "/.txt", args: "", client: 1.1.1.1, server: domain.com, request: "GET /product/android/ HTTP/1.1", host: "domain.com"
> 2019/05/24 07:51:55 [error] 24217#24217: *560218 open() "/home/user/domain.com/.txt" failed (2: No such file or directory), client: 1.1.1.1, server: domain.com, request: "GET /product/android/ HTTP/1.1", host: "domain.com"

The "rewrite" directive operates on the current - possibly modified - URI, and the most likely reason is that something went wrong elsewhere in your config, so the rewrite in question tests the wrong URI.

With "rewrite_log on;" you should get something like this in the log:

2019/05/24 12:43:20 [notice] 31939#100103: *1 "^/product/(.*)/$" matches "/product/android/", client: 127.0.0.1, server: , request: "GET /product/android/ HTTP/1.0"
2019/05/24 12:43:20 [notice] 31939#100103: *1 rewritten data: "/android.txt", args: "", client: 127.0.0.1, server: , request: "GET /product/android/ HTTP/1.0"

The first line shows the actual matching - the regular expression itself and the string it matches - and the second one shows the result. The above two lines were obtained with the following trivial configuration:

server {
    listen 8080;
    rewrite_log on;

    location /product/ {
        rewrite ^/product/(.*)/$ /$1.txt last;
    }
}

And it seems to work fine without any problems.
If it doesn't work for you, please show the exact configuration and both log lines produced.

-- 
Maxim Dounin
http://mdounin.ru/

From user24 at inbox.lv Fri May 24 10:43:35 2019
From: user24 at inbox.lv (User)
Date: Fri, 24 May 2019 10:43:35 +0000
Subject: args and rewrite vars always empty
In-Reply-To: <20190524095011.GC1877@mdounin.ru>
References: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv> <20190524095011.GC1877@mdounin.ru>
Message-ID: 

On 5/24/19 9:50 AM, Maxim Dounin wrote:
> The first line shows actual matching - regular expression itself
> and the string it matches, and the second one shows the result.
> The above two lines were obtained with the following trivial
> configuration:

Yes, thanks. I clearly understand it all. I know how regex works, and the expected result for $1 must be the directory name. My "rewrite_log on" logs are attached in the first message. It's almost the same server {} configuration. Currently:

server {
  listen SERVERIP:80;
  server_name domain.com;
  error_log /var/log/nginx/domain.com.nginx notice;
  access_log off;
  rewrite_log on;
  index index.php;
  root /home/user/domain.com;

  location /product/ {
    rewrite ^/product/(.*)/$ /$1/$2/$3/$0/end.txt last;
  }
}

Look at this! My little investigation: I tried to find where $1 is lost, and for this purpose added the other possible variables $0 $2 $3 from the regex. And what do I see? $0 is not empty, and guess what? It seems to contain some of my root user commands from the shell. All other variables are empty, as I wrote in the first post.
Logs (for request url domain.com/product/moon/?xxxx=yes):

2019/05/24 10:16:23 [notice] 7030#7030: *39639 "^/product/(.*)/$" matches "/product/moon/", client: 1.1.1.1, server: domain.com, request: "GET /product/moon/?xxxx=yes HTTP/1.1", host: "domain.com"
2019/05/24 10:16:23 [notice] 7030#7030: *39639 rewritten data: "////./script.sh/end.txt", args: "xxxx=yes", client: 1.1.1.1, server: domain.com, request: "GET /product/moon/?xxxx=yes HTTP/1.1", host: "domain.com"

Looks like nginx executes preg_match (regex) at the bash shell(?) and gets results from it, and they are wrong for an unknown reason..... "./script.sh" is one of my scripts that I run from the root user manually. I don't even have any idea why nginx adds it to the $0 variable from the regex.

P.S. Sorry, previously I sent this message to your email directly, not to the list.

And I found that yes, nginx is not built with the pcre library, but it seems to execute it at the shell each time:

https://github.com/nginx/nginx/blob/4bf4650f2f10f7bbacfe7a33da744f18951d416d/src/core/ngx_regex.h

And it returns unexpected results for some reason.

From mdounin at mdounin.ru Fri May 24 11:38:46 2019
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 24 May 2019 14:38:46 +0300
Subject: args and rewrite vars always empty
In-Reply-To: 
References: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv> <20190524095011.GC1877@mdounin.ru>
Message-ID: <20190524113846.GE1877@mdounin.ru>

Hello!

On Fri, May 24, 2019 at 10:43:35AM +0000, User via nginx wrote:

> On 5/24/19 9:50 AM, Maxim Dounin wrote:
> > The first line shows actual matching - regular expression itself
> > and the string it matches, and the second one shows the result.
> > The above two lines were obtained with the following trivial
> > configuration:
>
> Yes, thanks. I clearly understand it all. I know how regex work and $1
> expected result must be directory name. My "rewrite_log on" logs
> attached at the first message. It's almost the same server {}
> configuration. Currently:

There was no first line in your original message, hence the question.

> server {
>   listen SERVERIP:80;
>   server_name domain.com;
>   error_log /var/log/nginx/domain.com.nginx notice;
>   access_log off;
>   rewrite_log on;
>   index index.php;
>   root /home/user/domain.com;
>
>   location /product/ {
>     rewrite ^/product/(.*)/$ /$1/$2/$3/$0/end.txt last;
>   }
> }
>
> Look at this! My little investigation. I try to find where $1 is lost
> and for this purposes added other possible variables $0 $2 $3 from
> regex. And what I see? $0 is not empty and guess what? Seems that it
> contain some of my root user command from shell. All other variables are
> empty as I wrote at the first post.

There is no special $0 variable in nginx, and the above configuration is expected to produce:

nginx: [emerg] unknown "0" variable

error on start (just checked with 1.10.3 to be sure). If it doesn't for some reason, and nginx starts - this means that either the $0 variable is defined elsewhere, or there is something wrong with the nginx you are using (weird 3rd party modules/patches?).

Please show "nginx -V" to find out how nginx was compiled, and "nginx -T" for the full configuration.

[...]

> And I found that yes, nginx not build with pcre library, but seems it
> execute it at the shell each time:
>
> https://github.com/nginx/nginx/blob/4bf4650f2f10f7bbacfe7a33da744f18951d416d/src/core/ngx_regex.h
>
> And it return not expected results for some reasons.

No, your understanding is wrong. If nginx is compiled without PCRE, the rewrite directive will not be available at all. And nginx never tries to execute anything at the shell.
-- Maxim Dounin http://mdounin.ru/ From user24 at inbox.lv Fri May 24 11:50:31 2019 From: user24 at inbox.lv (User) Date: Fri, 24 May 2019 11:50:31 +0000 Subject: args and rewrite vars always empty In-Reply-To: <20190524113846.GE1877@mdounin.ru> References: <7866e28f-bf40-5982-7eb3-8095a9d297e1@inbox.lv> <20190524095011.GC1877@mdounin.ru> <20190524113846.GE1877@mdounin.ru> Message-ID: On 5/24/19 11:38 AM, Maxim Dounin wrote: > There is no special $0 variable in nginx, and the above > configuration is expected to produce: > > nginx: [emerg] unknown "0" variable > > error on start (just checked with 1.10.3 to be sure). Thanks! I found my mistake. I'm so stupid :) I generate my nginx hosts file with bash script and it execute this vars $1 at the generation time. Thank you for your time and help again. Maybe you can also suggest what is better for performance purposes: rewrite or try_files ? In my case I want only to make friendly SEO urls, there will be no files at /products/ directory. It's full virtual dir. From francis at daoine.org Fri May 24 12:43:46 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 24 May 2019 13:43:46 +0100 Subject: Proxy Pass In-Reply-To: References: <001f01d51177$ee667490$cb335db0$@roze.lv> Message-ID: <20190524124346.3x2j7zvn3n53kuav@daoine.org> On Fri, May 24, 2019 at 09:23:44AM +0800, Sathish Kumar wrote: Hi there, I am not certain what server_name values correspond to what IP addresses or ports used; and I am not certain what nginx servers use ssl and what ones don't. If you don't get an answer to your question, perhaps it will be worth following up with that information; and possibly also including answers to: * what request do you make? * what response do you get? * what response do you want to get? That said... > proxy_pass https://def.com/abc; http://nginx.org/r/proxy_ssl_server_name might be useful, if the IP address associated with def.com runs https services that require SNI. 
f -- Francis Daly francis at daoine.org From satcse88 at gmail.com Fri May 24 12:54:23 2019 From: satcse88 at gmail.com (Sathish Kumar) Date: Fri, 24 May 2019 20:54:23 +0800 Subject: Proxy Pass In-Reply-To: <20190524124346.3x2j7zvn3n53kuav@daoine.org> References: <001f01d51177$ee667490$cb335db0$@roze.lv> <20190524124346.3x2j7zvn3n53kuav@daoine.org> Message-ID: Hi Francis, All the requests are processing successfully but its logging to incorrect access log. Server1: 2.2.2.2 - abc.domain.com, Port 443 Server2: 1.1.1.1 - def.domain.com and def.abc.com - Port 443 On Fri, May 24, 2019, 8:44 PM Francis Daly wrote: > On Fri, May 24, 2019 at 09:23:44AM +0800, Sathish Kumar wrote: > > Hi there, > > I am not certain what server_name values correspond to what IP addresses > or ports used; and I am not certain what nginx servers use ssl and what > ones don't. > > If you don't get an answer to your question, perhaps it will be worth > following up with that information; and possibly also including answers > to: > > * what request do you make? > * what response do you get? > * what response do you want to get? > > That said... > > > proxy_pass https://def.com/abc; > > http://nginx.org/r/proxy_ssl_server_name > > might be useful, if the IP address associated with def.com runs https > services that require SNI. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From pluknet at nginx.com Fri May 24 15:31:13 2019
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Fri, 24 May 2019 18:31:13 +0300
Subject: Max_fails for proxy_pass without an upstream block
In-Reply-To: <01f10e410294bf9bcc1e756153217596.NginxMailingListEnglish@forum.nginx.org>
References: <01f10e410294bf9bcc1e756153217596.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <35407D61-6F5A-4B58-AF77-E3FA3C8757FA@nginx.com>

> On 3 May 2019, at 02:12, jarstewa wrote:
>
> Is there an equivalent of max_fails
> (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails) if
> I'm using proxy_pass without an upstream block?

Not that I'm aware of.

-- 
Sergey Kandaurov

From Jennie.Jia at amdocs.com Fri May 24 19:32:29 2019
From: Jennie.Jia at amdocs.com (Jennie Jia)
Date: Fri, 24 May 2019 19:32:29 +0000
Subject: Nginx with Java library
In-Reply-To: <20190524020234.GA1911@haller.ws>
References: <20190522002713.GA23641@haller.ws> <20190523003455.GA21970@haller.ws> <20190524020234.GA1911@haller.ws>
Message-ID: 

Thank you, Patrick!

I am trying to figure out how to port the ONAP crypto config to the nginx-clojure app, but no luck so far... Here is what I did:

1) I added the dependency in the pom.xml here https://github.com/nginx-clojure/nginx-clojure/blob/master/example-projects/c-module-integration-example/pom.xml

   <dependency>
     <groupId>org.onap.portal.sdk</groupId>
     <artifactId>epsdk-fw</artifactId>
     <version>2.5.0</version>
   </dependency>

2) Build the jar and put it under the jar/lib folder of nginx-clojure-0.4.5.
3) Start nginx
4) Send an HTTP request (with cookie) to nginx. My java code is triggered, but got NoClassDefFoundError:

2019-05-24 13:41:56[error][16860][main]server unhandled exception!
java.lang.NoClassDefFoundError: org/onap/portalsdk/core/onboarding/util/CipherUtil
        at example.MyHandler.invoke(MyHandler.java:37)
        at nginx.clojure.java.NginxJavaHandler.process(NginxJavaHandler.java:125)
        at nginx.clojure.NginxSimpleHandler.handleRequest(NginxSimpleHandler.java:187)
        at nginx.clojure.NginxSimpleHandler.execute(NginxSimpleHandler.java:105)
        at nginx.clojure.NginxClojureRT.eval(NginxClojureRT.java:1133)
Caused by: java.lang.ClassNotFoundException: org.onap.portalsdk.core.onboarding.util.CipherUtil
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

-----Original Message-----
From: nginx On Behalf Of Patrick
Sent: Thursday, May 23, 2019 10:03 PM
To: nginx at nginx.org
Subject: Re: Nginx with Java library

On 2019-05-23 21:29, Jennie Jia wrote:
> Here is my code to do the logic, but I checking the java library
> provided by this nginx-clojure
> https://github.com/nginx-clojure/nginx-clojure/tree/master/src/java/nginx/clojure/java.
> I can not figure out how to get "cookie" header value from the http
> request ( it use NginxJavaRequest.java here. Any help are
> appreciated.!
>
> public Object[] invoke(Map request) throws
> IOException {
>
>     NginxJavaRequest req = ((NginxJavaRequest)request);
>
>     // TODO Need Figure out how to get Cookie from the request
>     //String encryptedCookie = req.getVariable("cookie");

String encryptedCookie = (String) ((Map)request.get(HEADERS)).get("cookie");

If all that is needed is the decrypted cookie contents, and the backend is written in another language, consider porting the cookie decryption code to that language as ONAP uses the SunJCE flavor of AES/CBC/NoPadding

Note that to decrypt the cookie, the ONAP crypto config needs to be ported to the nginx-clojure app as well.
Patrick _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service From nginx-forum at forum.nginx.org Fri May 24 21:15:40 2019 From: nginx-forum at forum.nginx.org (NginxNewbee) Date: Fri, 24 May 2019 17:15:40 -0400 Subject: Reading large request body using ngx_http_read_client_request_body In-Reply-To: References: <20190521125157.GK1877@mdounin.ru> Message-ID: Hey Maxim, Should other nginx methods like ngx_http_output_filter and ngx_http_finalize_request be used on main thread as well ? I am using ngx_http_finalize_request in the thread's completion handler already. However I was wondering if our business logic (running on thread) needs to send response body multiple times as soon as it is generates it, in that case ngx_http_output_filter will need be called on thread. Will it will be safe or should I wait until all of the response is generated and do one call to ngx_http_output_filter on main thread ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284214,284308#msg-284308 From francis at daoine.org Fri May 24 22:11:38 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 24 May 2019 23:11:38 +0100 Subject: Proxy Pass In-Reply-To: References: <001f01d51177$ee667490$cb335db0$@roze.lv> <20190524124346.3x2j7zvn3n53kuav@daoine.org> Message-ID: <20190524221138.iygkmvgnsuzerdm3@daoine.org> On Fri, May 24, 2019 at 08:54:23PM +0800, Sathish Kumar wrote: Hi there, > Server1: 2.2.2.2 - abc.domain.com, Port 443 > > Server2: 1.1.1.1 - def.domain.com and def.abc.com - Port 443 That suggests that your back-end server is running more than one https server on the same IP:port. Does it use a single cert with multiple Subject Alternate Names; or does it use individual certs and SNI? 
Do you see what you expect when you run from Server1: curl -v -k -H Host:def.domain.com https://1.1.1.1/abc and curl -v -k -H Host:def.abc.com https://1.1.1.1/abc ? f -- Francis Daly francis at daoine.org From Jennie.Jia at amdocs.com Fri May 24 22:28:54 2019 From: Jennie.Jia at amdocs.com (Jennie Jia) Date: Fri, 24 May 2019 22:28:54 +0000 Subject: Nginx with Java library In-Reply-To: References: <20190522002713.GA23641@haller.ws> <20190523003455.GA21970@haller.ws> <20190524020234.GA1911@haller.ws> Message-ID: Actually by dumping a few dependency jar inside of the jar folder of the ngnix-clojure-0.4.5. (jvm_classpath "jars/*";). It seems working.. but I am not sure if it is the correct or best way to do it -----Original Message----- From: nginx On Behalf Of Jennie Jia Sent: Friday, May 24, 2019 3:32 PM To: nginx at nginx.org Subject: RE: Nginx with Java library Thank you, Patrick! I am trying figure out how to port the ONAP crypto config to the nginx-clojure app. But no luck so far... Here is what did: 1) I added the dependency in the pom.xml here https://github.com/nginx-clojure/nginx-clojure/blob/master/example-projects/c-module-integration-example/pom.xml org.onap.portal.sdk epsdk-fw 2.5.0 2) Build jar and put it under the jar/lib folder of nginx-clojure-0.4.5. 3) Start nginx 4) Send HTTP request (with cookie) to nginx, My java code is triggered, but got NoClassDefFoundError 2019-05-24 13:41:56[error][16860][main]server unhandled exception! 
java.lang.NoClassDefFoundError: org/onap/portalsdk/core/onboarding/util/CipherUtil at example.MyHandler.invoke(MyHandler.java:37) at nginx.clojure.java.NginxJavaHandler.process(NginxJavaHandler.java:125) at nginx.clojure.NginxSimpleHandler.handleRequest(NginxSimpleHandler.java:187) at nginx.clojure.NginxSimpleHandler.execute(NginxSimpleHandler.java:105) at nginx.clojure.NginxClojureRT.eval(NginxClojureRT.java:1133) Caused by: java.lang.ClassNotFoundException: org.onap.portalsdk.core.onboarding.util.CipherUtil at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) -----Original Message----- From: nginx On Behalf Of Patrick Sent: Thursday, May 23, 2019 10:03 PM To: nginx at nginx.org Subject: Re: Nginx with Java library On 2019-05-23 21:29, Jennie Jia wrote: > Here is my code to do the logic, but I checking the java library > provided by this nginx-clojure > https://github.com/nginx-clojure/nginx-clojure/tree/master/src/java/nginx/clojure/java. > I can not figure out how to get "cookie" header value from the http > request ( it use NginxJavaRequest.java here. Any help are > appreciated.! > public Object[] invoke(Map request) throws > IOException { > > NginxJavaRequest req = ((NginxJavaRequest)request); > > // TODO Need Figure out how to get Cookie from the request //String encryptedCookie = req.getVariable("cookie"); String encryptedCookie = (String) ((Map)request.get(HEADERS)).get("cookie"); If all that is needed is the decrypted cookie contents, and the backend is written in another language, consider porting the cookie decryption code to that language as ONAP uses the SunJCE flavor of AES/CBC/NoPadding Note that to decrypt the cookie, the ONAP crypto config needs to be ported to the nginx-clojure app as well. 
Patrick _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service From mdounin at mdounin.ru Mon May 27 13:29:29 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 May 2019 16:29:29 +0300 Subject: Reading large request body using ngx_http_read_client_request_body In-Reply-To: References: <20190521125157.GK1877@mdounin.ru> Message-ID: <20190527132929.GF1877@mdounin.ru> Hello! On Fri, May 24, 2019 at 05:15:40PM -0400, NginxNewbee wrote: > Should other nginx methods like ngx_http_output_filter and > ngx_http_finalize_request be used on main thread as well ? I am using > ngx_http_finalize_request in the thread's completion handler already. > However I was wondering if our business logic (running on thread) needs to > send response body multiple times as soon as it is generates it, in that > case ngx_http_output_filter will need be called on thread. Will it will be > safe or should I wait until all of the response is generated and do one call > to ngx_http_output_filter on main thread ? No request-related operations can be used in threads, including the functions mentioned. 
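The rule Maxim states above can be illustrated without any nginx code, using plain Java threads as a stand-in for nginx's thread-pool mechanism (all names here are invented for the sketch; none of this is nginx API): the worker only computes the response body and hands it back, and everything that touches the request - output, finalization - happens on the event-loop thread.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OffloadSketch {
    public static void main(String[] args) throws InterruptedException {
        // Completed work is handed back to the "event loop" via a queue,
        // mimicking a completion handler that runs on the main thread.
        BlockingQueue<String> completions = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            // Heavy business logic runs here, but it must NOT call anything
            // request-related (no output-filter or finalize equivalents).
            completions.add("generated response body");
        });
        worker.start();

        // Back on the "main" thread: only here is it safe to emit the
        // response and finalize the request.
        String body = completions.take();
        System.out.println("emit on main thread: " + body);
        worker.join();
    }
}
```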
-- 
Maxim Dounin
http://mdounin.ru/

From cotjoey at gmail.com Mon May 27 17:51:06 2019
From: cotjoey at gmail.com (Joey Côté)
Date: Mon, 27 May 2019 14:51:06 -0300
Subject: reverse proxy - http (80) to https (back-end) in Docker container
Message-ID: 

Hello all,

I am attempting to use nginx as a reverse proxy to funnel HTTP traffic to a HTTPS back-end (both in a Docker container). I cannot enable HTTPS on my front-end yet, so this would be a temporary solution to my issue.

Only one of my back-end applications is giving me an issue right now, and that issue seems to be characterized by the JSESSIONID cookie coming from the back-end being lost at the nginx level. If I test outside Docker with SSL, I get a JSESSIONID cookie in my request back.

My two URLs are [example]:
http://myurl.example.com/AppA
http://myurl.example.com/AppB

I believe that I may need to enable SSL certificates between nginx and my back-ends, but I couldn't find clear indications on how to do this online.

My nginx config file looks like this:

worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {
    access_log /dev/stdout;
    error_log /dev/stderr warn;

    server {
        listen 80;
        server_name myurl.example.com;

        location /AppA {
            proxy_pass https://localhost:9443;
        }

        location /AppB {
            proxy_pass https://localhost:9043;
        }
    }
}

Thank you for any assistance you can provide.

JC

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From al-nginx at none.at Mon May 27 21:14:15 2019
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 27 May 2019 23:14:15 +0200
Subject: reverse proxy - http (80) to https (back-end) in Docker container
In-Reply-To: 
References: 
Message-ID: <33a3ed4b-7337-4393-3cb6-fd92e579691e@none.at>

Hi.

On 27.05.2019 at 19:51, Joey Côté wrote:
> Hello all,
>
> I am attempting to use nginx as a reverse proxy to funnel HTTP traffic to a
> HTTPS back-end (both in a Docker container).
> I cannot enable HTTPS on my
> front-end yet, so this would be a temporary solution to my issue.
>
> Only one of my back-end applications is giving me an issue right now, and that
> issue seems to be characterized by the JSESSIONID cookie coming from the
> back-end being lost at the nginx level. If I test outside Docker with SSL, I
> get a JSESSIONID cookie in my request back.

To verify your assumption, please run nginx with debug logging and take a look at what the backend sends you:

https://nginx.org/en/docs/debugging_log.html

> My two URLs are [example]:
> http://myurl.example.com/AppA
> http://myurl.example.com/AppB
>
> I believe that I may need to enable SSL certificates between nginx and my
> back-ends, but I couldn't find clear indications on how to do this online.
>
> My nginx config file looks like this:
>
> worker_processes 1;
> daemon off;
>
> events {
>     worker_connections 1024;
> }
>
> http {
>     access_log /dev/stdout;
>     error_log /dev/stderr warn;
>
>     server {
>         listen 80;
>         server_name myurl.example.com;
>
>         location /AppA {
>             proxy_pass https://localhost:9443;
>         }
>
>         location /AppB {
>             proxy_pass https://localhost:9043;
>         }
>     }
> }
>
> Thank you for any assistance you can provide.
>
> JC
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Tue May 28 18:48:34 2019
From: nginx-forum at forum.nginx.org (avinashjammula)
Date: Tue, 28 May 2019 14:48:34 -0400
Subject: Nginx Ingress Annotations failing
Message-ID: <54453e40932b2734d91b7df088362bca.NginxMailingListEnglish@forum.nginx.org>

I am trying to use nginx ingress controller annotations for an app deployment. The annotations are not being created inside the nginx ingress controller pod. Can you please direct me on how to use them?
Thanks Avinash Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284339,284339#msg-284339 From nginx-forum at forum.nginx.org Tue May 28 22:19:07 2019 From: nginx-forum at forum.nginx.org (NginxNewbee) Date: Tue, 28 May 2019 18:19:07 -0400 Subject: Reading large request body using ngx_http_read_client_request_body In-Reply-To: <20190527132929.GF1877@mdounin.ru> References: <20190527132929.GF1877@mdounin.ru> Message-ID: <8904f79e8fe255bfbf53a65d1e5457c6.NginxMailingListEnglish@forum.nginx.org> Cool. Thanks a lot, Maxim. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284214,284340#msg-284340 From vincent.mc.li at gmail.com Tue May 28 22:40:57 2019 From: vincent.mc.li at gmail.com (Vincent Li) Date: Tue, 28 May 2019 15:40:57 -0700 Subject: systemd nginx service unable to start nginx 1.11.10 Message-ID: Hi, I am running ubuntu 16.04, the nginx package from ubuntu can be started from systemd ok as below shown. ======COMMAND OUTPUT===== #/usr/sbin/nginx-from-ubuntu -v nginx version: nginx/1.10.3 (Ubuntu) # ls -l /usr/sbin/nginx lrwxrwxrwx 1 root root 27 May 28 15:29 /usr/sbin/nginx -> /usr/sbin/nginx-from-ubuntu # cat /lib/systemd/system/nginx.service [Unit] Description=The NGINX HTTP and reverse proxy server After=syslog.target network.target remote-fs.target nss-lookup.target [Service] Type=forking PIDFile=/run/nginx.pid ExecStartPre=/usr/sbin/nginx -t ExecStart=/usr/sbin/nginx ExecReload=/usr/sbin/nginx -s reload ExecStop=/bin/kill -s QUIT $MAINPID PrivateTmp=true [Install] WantedBy=multi-user.target # systemctl start nginx # systemctl status nginx ? 
nginx.service - The NGINX HTTP and reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-05-28 15:29:25 PDT; 5s ago
==========OUTPUT END=====

but if I download the nginx 1.11.10 source tarball from the nginx download site, compile it, and run the nginx 1.11.10 binary, systemd is unable to start it; systemctl hangs and eventually fails.

=========COMMAND OUTPUT======
# /usr/local/nginx/sbin/nginx -v
nginx version: nginx/1.11.10
# rm -rf /usr/sbin/nginx
# ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx
# systemctl start nginx
Job for nginx.service failed because a timeout was exceeded.
See "systemctl status nginx.service" and "journalctl -xe" for details.
-- Unit nginx.service has begun starting up.
May 28 15:36:47 controller-dell710 nginx[12643]: nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
May 28 15:36:47 controller-dell710 nginx[12643]: nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
May 28 15:36:47 controller-dell710 systemd[1]: nginx.service: PID file /run/nginx.pid not readable (yet?) after start: No such file or directory
May 28 15:38:17 controller-dell710 systemd[1]: nginx.service: Start operation timed out. Terminating.
May 28 15:38:17 controller-dell710 systemd[1]: Failed to start The NGINX HTTP and reverse proxy server.
-- Subject: Unit nginx.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit nginx.service has failed.
--
-- The result is failed.
May 28 15:38:17 controller-dell710 systemd[1]: nginx.service: Unit entered failed state.
May 28 15:38:17 controller-dell710 systemd[1]: nginx.service: Failed with result 'timeout'.
=======OUTPUT END====== From mdounin at mdounin.ru Tue May 28 23:45:32 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 May 2019 02:45:32 +0300 Subject: systemd nginx service unable to start nginx 1.11.10 In-Reply-To: References: Message-ID: <20190528234531.GM1877@mdounin.ru> Hello! On Tue, May 28, 2019 at 03:40:57PM -0700, Vincent Li wrote: > I am running ubuntu 16.04, the nginx package from ubuntu can be > started from systemd ok as below shown. [...] > PIDFile=/run/nginx.pid [...] > but if I download the nginx 1.11.10 source tar ball from nginx > download site and compile and run the nginx 1.11.10 binary, systemd > unable to start it, systemctl hanging there and eventually failed By default, nginx uses /usr/local/nginx/logs/nginx.pid as a PID file path, see "--pid-path=" here: http://nginx.org/en/docs/configure.html Unless you've redefined it during compilation, or set in nginx configuration, or changed systemd service file appropriately, the result: > May 28 15:36:47 controller-dell710 systemd[1]: nginx.service: PID file > /run/nginx.pid not readable (yet?) after start: No such file or > directory is expected, as your nginx configuration does not match your systemd service file. 
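To make the fix concrete, here is a minimal sketch of the two usual ways to bring the two in line (the paths shown are the stock defaults discussed above and may differ for a custom build):

```
# Option 1 -- nginx.conf, top-level (main) context:
# make nginx write its pid file where the systemd unit expects it
pid /run/nginx.pid;

# Option 2 -- nginx.service, [Service] section:
# point systemd at the pid path a default source build actually uses
PIDFile=/usr/local/nginx/logs/nginx.pid
```

Either change alone is enough; after editing the unit file, run `systemctl daemon-reload` before restarting the service.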
-- Maxim Dounin http://mdounin.ru/

From vincent.mc.li at gmail.com Wed May 29 00:01:02 2019 From: vincent.mc.li at gmail.com (Vincent Li) Date: Tue, 28 May 2019 17:01:02 -0700 Subject: systemd nginx service unable to start nginx 1.11.10 In-Reply-To: <20190528234531.GM1877@mdounin.ru> References: <20190528234531.GM1877@mdounin.ru> Message-ID: On Tue, May 28, 2019 at 4:45 PM Maxim Dounin wrote:
> By default, nginx uses /usr/local/nginx/logs/nginx.pid as a PID
> file path, see "--pid-path=" here:
>
> http://nginx.org/en/docs/configure.html
>
> Unless you've redefined it during compilation, or set in nginx
> configuration, or changed systemd service file appropriately,
> the result:
>
> > May 28 15:36:47 controller-dell710 systemd[1]: nginx.service: PID file
> > /run/nginx.pid not readable (yet?) after start: No such file or
> > directory
>
> is expected, as your nginx configuration does not match your
> systemd service file.

Thank you! Changing PIDFile fixed it:

[Service]
PIDFile=/usr/local/nginx/logs/nginx.pid

From nginx-forum at forum.nginx.org Wed May 29 02:41:15 2019 From: nginx-forum at forum.nginx.org (isuru) Date: Tue, 28 May 2019 22:41:15 -0400 Subject: GRPC reverse proxy using same port for multiple grpc host/ports Message-ID: Hi All, I am trying to reverse proxy HTTP/2-based gRPC using nginx. I attempted to use nginx port 9092 to proxy to a single gRPC host/port using a conf file inside conf.d.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream sendssmgrpcservers {
    # The docker endpoint of your grpc servers, you can have multiple here
    server 10.2.4.25:6568;
    #server 10.2.4.200:6566;
}

#upstream sendSSMgrpcservers {
#    server 10.2.4.25:6568;
#}

server {
    listen 9092 http2;

    location / {
        # The 'grpc://' prefix is optional; unencrypted gRPC is the default
        grpc_pass grpc://sendssmgrpcservers;
        #grpc_pass grpc://10.2.4.226:6566;
    }
}

When I add another conf file for the same port, with a different domain name in the conf and a different gRPC host/port as the upstream, it does not forward to the right gRPC host/port. Does the HTTP/2 gRPC reverse proxy not support forwarding to multiple gRPC hosts/ports on the same port? If yes, how do I configure that?

Thanks, Isuru

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284344,284344#msg-284344

From smntov at gmail.com Wed May 29 11:53:57 2019 From: smntov at gmail.com (Sim Tov) Date: Wed, 29 May 2019 14:53:57 +0300 Subject: nginx inside docker: mapping localhost:port in one container to another_container:port Message-ID: Hello, I have a custom network and two containers A and B connected to it. Container A (with Debian inside) runs nginx and an application that listens to localhost:8083 and expects a certain service there. Once deployed, this service runs in container B and not locally. For certain reasons I can't change localhost to something else inside the application.

1. How can I map localhost:8083 from A to name_of_container_B:8083 on the Docker/nginx/infrastructure/OS level? name_of_container_B is not known during creation of the A image, but I can pass it as a docker entrypoint parameter during launch of container A.

2. I prefer not to add additional packages in order not to increase overall complexity. So is it possible, for example, to configure nginx to listen on localhost:8083 and forward it to name_of_container_B:8083?

3.
I actually use Ansible to start the containers, so the solution needs to be applicable using Ansible.

Thank you!

Link to this question: https://stackoverflow.com/q/56356422/1876484

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mdounin at mdounin.ru Wed May 29 16:18:24 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 May 2019 19:18:24 +0300 Subject: GRPC reverse proxy using same port for multiple grpc host/ports In-Reply-To: References: Message-ID: <20190529161824.GN1877@mdounin.ru> Hello! On Tue, May 28, 2019 at 10:41:15PM -0400, isuru wrote:

> Hi All,
> I am trying to reverse proxy HTTP/2-based gRPC using nginx. I attempted to use
> nginx port 9092 to proxy to a single gRPC host/port using a conf file inside
> conf.d.
>
> map $http_upgrade $connection_upgrade {
>     default upgrade;
>     '' close;
> }
>
> upstream sendssmgrpcservers {
>     # The docker endpoint of your grpc servers, you can have multiple here
>     server 10.2.4.25:6568;
>     #server 10.2.4.200:6566;
> }
>
> #upstream sendSSMgrpcservers {
> #    server 10.2.4.25:6568;
> #}
>
> server {
>     listen 9092 http2;
>     location / {
>         # The 'grpc://' prefix is optional; unencrypted gRPC is the default
>         grpc_pass grpc://sendssmgrpcservers;
>         #grpc_pass grpc://10.2.4.226:6566;
>     }
> }
>
> When I add another conf file for the same port with a different domain name
> in the conf and a different gRPC host/port as the upstream, it does not
> forward to the right gRPC host/port.
>
> Does the HTTP/2 gRPC reverse proxy not support forwarding to multiple gRPC
> hosts/ports on the same port? If yes, how do I configure that?

nginx itself can handle name-based virtual servers for gRPC much like for any other HTTP requests; just configuring appropriate names via the server_name directive should be enough. But this may not work if your gRPC client does not provide appropriate Host and/or :authority headers in requests. To find out what exactly happens with requests, and whether the appropriate headers are present, consider using the debugging log.
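As an illustration of such a name-based setup (the hostnames and backend addresses below are hypothetical, not taken from the original post), two gRPC virtual servers could share one listen port like this:

```nginx
# Requests are dispatched by the hostname the client connects with
server {
    listen 9092 http2;
    server_name grpc-a.example.com;

    location / {
        grpc_pass grpc://10.2.4.25:6568;
    }
}

server {
    listen 9092 http2;
    server_name grpc-b.example.com;

    location / {
        grpc_pass grpc://10.2.4.200:6566;
    }
}
```

nginx selects the server block from the Host/:authority header, so each gRPC client must dial the matching hostname rather than a bare IP, or it will always land in the default server for that port.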
See here for details: http://nginx.org/en/docs/debugging_log.html

-- Maxim Dounin http://mdounin.ru/

From nginx-forum at forum.nginx.org Wed May 29 17:02:08 2019 From: nginx-forum at forum.nginx.org (guy1976) Date: Wed, 29 May 2019 13:02:08 -0400 Subject: njs: how to define a global variable Message-ID: <654f3d8357ff11150eae90c432622598.NginxMailingListEnglish@forum.nginx.org> Hi, is it possible to define a global variable that will persist across different requests? Thank you, Guy. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284350,284350#msg-284350

From tjlp at sina.com Thu May 30 00:16:20 2019 From: tjlp at sina.com (tjlp at sina.com) Date: Thu, 30 May 2019 08:16:20 +0800 Subject: How to config Nginx for gRPC session sticky? Message-ID: <20190530001620.4723718C008E@webmail.sinamail.sina.com.cn> I want to use nginx for gRPC load balancing, and nginx needs to forward all the gRPC requests of a session to the same backend server. What is the recommended implementation? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:

From vbart at nginx.com Thu May 30 16:32:07 2019 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 30 May 2019 19:32:07 +0300 Subject: Unit 1.9.0 release Message-ID: <2269725.yZtGl3x9Jz@vbart-workstation> Hi, I'm glad to announce a new release of NGINX Unit. In this release, we continue improving routing capabilities for more advanced and precise request matching. Besides that, the control API was extended with POST operations to simplify array manipulation in configuration.
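As a hedged illustration of the new matching rules (the hostname, argument, cookie, and application name below are invented, not taken from the announcement), a route entry in the Unit configuration might look like:

```json
{
    "match": {
        "host": "example.com",
        "arguments": { "mode": "beta" },
        "cookies": { "session": "*" }
    },
    "action": {
        "pass": "applications/myapp"
    }
}
```

With the new POST operation, an object like this can be appended to an existing routes array along the lines of `curl -X POST --data-binary @route.json --unix-socket /path/to/control.unit.sock http://localhost/config/routes` (the control socket path varies by installation).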
Please check the documentation about the new features:

- Matching rules: https://unit.nginx.org/configuration/#condition-matching
- API operations: https://unit.nginx.org/configuration/#configuration-management

If you prefer to perceive information visually, here's a recording of an NGINX meetup that gives a good overview of dynamic application routing, although it doesn't discuss the new features from this release:

- https://www.youtube.com/watch?v=5O4TjbbxTxw

Also, a number of annoying bugs were fixed; thanks to your feedback, the Node.js module now works fine with more applications.

Changes with Unit 1.9.0                                        30 May 2019

*) Feature: request routing by arguments, headers, and cookies.
*) Feature: route matching patterns allow a wildcard in the middle.
*) Feature: POST operation for appending elements to arrays in configuration.
*) Feature: support for changing credentials using CAP_SETUID and CAP_SETGID capabilities on Linux without running main process as privileged user.
*) Bugfix: memory leak in the router process might have happened when a client prematurely closed the connection.
*) Bugfix: applying a large configuration might have failed.
*) Bugfix: PUT and DELETE operations on array elements in configuration did not work.
*) Bugfix: request schema in applications did not reflect TLS connections.
*) Bugfix: restored compatibility with Node.js applications that use the ServerResponse._implicitHeader() function; the bug had appeared in 1.7.
*) Bugfix: various compatibility issues with Node.js applications.

With this release, packages for Ubuntu 19.04 "disco" are also available. See the website for a full list of available repositories:

- https://unit.nginx.org/installation/

Meanwhile, we continue working on WebSocket support. It's almost ready and has a good chance of being included in the next release for the Node.js and Java modules. Work on proxying and static file serving is also in progress; this will take a bit more time.

wbr, Valentin V.
Bartenev

From mat999 at gmail.com Fri May 31 05:25:42 2019 From: mat999 at gmail.com (Mathew Heard) Date: Fri, 31 May 2019 15:25:42 +1000 Subject: Google QUIC support in nginx In-Reply-To: References: <829d371d1e545cf278fcd05c96c63a7f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey nginx team, As I understand it, QUIC support is roadmapped for this year? Any chance of some confirmation, or any information that can be made available? Regards, Mathew

On Fri, Jan 30, 2015 at 6:28 PM jtan wrote:
> This would be interesting. But I guess we would need to wait.
>
> On Fri, Jan 30, 2015 at 2:35 PM, justink101 wrote:
>
>> Any plans to support Google QUIC[1] in nginx?
>>
>> [1] http://en.wikipedia.org/wiki/QUIC
>>
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,256352,256352#msg-256352
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
>
>
> --
> Freelance Grails and Java developer
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at forum.nginx.org Fri May 31 06:54:05 2019 From: nginx-forum at forum.nginx.org (George) Date: Fri, 31 May 2019 02:54:05 -0400 Subject: Google QUIC support in nginx In-Reply-To: References: Message-ID: <423d86fdb50880a10d4a8312ce7072c0.NginxMailingListEnglish@forum.nginx.org> The roadmap suggests it is in the nginx 1.17 mainline; QUIC = HTTP/3. https://trac.nginx.org/nginx/roadmap :) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,256352,284367#msg-284367

From nginx-forum at forum.nginx.org Fri May 31 07:15:18 2019 From: nginx-forum at forum.nginx.org (George) Date: Fri, 31 May 2019 03:15:18 -0400 Subject: duplicate listen options for backlog directive for ip:80 and ip:443 pairs ?
Message-ID: <7fe678b6b9d47ed9f25f6eed9969c44a.NginxMailingListEnglish@forum.nginx.org> I am trying to troubleshoot a "duplicate listen options" error that only happens on one server and not the other.

From the docs at http://nginx.org/en/docs/http/ngx_http_core_module.html, the backlog listen option applies to each ip:port pair, so I should be able to set the backlog option on a listen directive once for port 80 and once for port 443. But on one server I am not able to, and I can't see where the problem is coming from. How should I debug this?

--- working ---

On the working nginx 1.17.0 server I have 2 nginx vhosts that set backlog properly and have no problems:

vhost 1
listen 80 default_server backlog=2048 reuseport fastopen=256;

vhost 2
listen 443 ssl http2 reuseport backlog=2048;

--- not working ---

Now on another nginx 1.17.0 server I have 3 nginx vhosts, but nginx restarts complain of duplicate listen options once I add vhost 3, and the error relates to vhost 2's listen directive:

nginx: [emerg] duplicate listen options for 0.0.0.0:443 in /path/to/vhost2/vhost

vhost 1
listen 80 default_server backlog=4095 reuseport fastopen=256;

vhost 2
listen 443 ssl http2 reuseport;

vhost 3
listen 443 ssl http2 backlog=4095;

If I remove vhost 3's backlog=4095 option, there's no error though?

--- working ---

Now if I reverse it, so backlog=4095 is set in vhost 2 and not vhost 3, then it works and nginx doesn't complain of errors. No idea why that is the case, or whether it's a bug?
vhost 2
listen 443 ssl http2 reuseport backlog=4095;

vhost 3
listen 443 ssl http2;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,284368,284368#msg-284368

From brentgclarklist at gmail.com Fri May 31 09:33:08 2019 From: brentgclarklist at gmail.com (Brent Clark) Date: Fri, 31 May 2019 11:33:08 +0200 Subject: Proxy to server based on domain name in the auth details Message-ID: <03c2aa01-708e-e7f3-0e97-7fc43a7f44da@gmail.com> Good day Guys

Is it possible to have nginx issue an auth prompt, and then proxy to a backend server based on the domain in the username?

Regards
Brent