From nginx-forum at forum.nginx.org  Sat Jan  1 16:00:33 2022
From: nginx-forum at forum.nginx.org (BonVonKnobloch)
Date: Sat, 01 Jan 2022 11:00:33 -0500
Subject: Setting up a webDAV server
Message-ID: <7ae8070922b2eaa7449d3e78fc8c4b92.NginxMailingListEnglish@forum.nginx.org>

Thanks for your help Maxim, but I am afraid that I still can't get PUT to work.
File permissions are fine, so I assume that my nginx.conf is still wrong:

    location calendar {
        root html/calendar;
        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_access group:rw all:r;
        limit_except GET {
            allow 172.21.42.0/24;
            deny all;
        }

This still produces '405 Not Allowed' with PUT (reading is fine).

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293215,293220#msg-293220

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From lists at lazygranch.com  Sat Jan  1 19:33:36 2022
From: lists at lazygranch.com (lists)
Date: Sat, 01 Jan 2022 11:33:36 -0800
Subject: Setting up a webDAV server
In-Reply-To: <7ae8070922b2eaa7449d3e78fc8c4b92.NginxMailingListEnglish@forum.nginx.org>

Since you are using openSUSE I think the answer is no, but do you have SELinux enabled? Usually, when fixing a file permission doesn't solve the problem for me, it turns out to be some "policy" feature I don't know about. I do set 777 permissions while debugging something, but I can't think of a case where that was ever the right solution.

  Original Message
From: nginx-forum at forum.nginx.org
Sent: January 1, 2022 8:00 AM
To: nginx at nginx.org
Reply-to: nginx at nginx.org
Subject: Re: Setting up a webDAV server

Thanks for your help Maxim, but I am afraid that I still can't get PUT to work.
File permissions are fine, so I assume that my nginx.conf is still wrong:

    location calendar {
        root html/calendar;
        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_access group:rw all:r;
        limit_except GET {
            allow 172.21.42.0/24;
            deny all;
        }

Which still produces '405 Not Allowed' with PUT (reading is fine)

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293215,293220#msg-293220

From nginx-forum at forum.nginx.org  Sun Jan  2 10:00:12 2022
From: nginx-forum at forum.nginx.org (BonVonKnobloch)
Date: Sun, 02 Jan 2022 05:00:12 -0500
Subject: Setting up a webDAV server
Message-ID: <80bc2e4e382bd8e20ec2fe3780fdcf0c.NginxMailingListEnglish@forum.nginx.org>

Thanks for the thought Gariac, but no, I don't have SELinux enabled; I have also disabled AppArmor (it gave me some trouble in the past). I also set 'open' permissions until the thing runs OK, then tighten them. Still getting '405' on PUT.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293215,293223#msg-293223

From francis at daoine.org  Sun Jan  2 12:26:33 2022
From: francis at daoine.org (Francis Daly)
Date: Sun, 2 Jan 2022 12:26:33 +0000
Subject: Setting up a webDAV server
In-Reply-To: <80bc2e4e382bd8e20ec2fe3780fdcf0c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20220102122633.GF12557@daoine.org>

On Sun, Jan 02, 2022 at 05:00:12AM -0500, BonVonKnobloch wrote:

Hi there,

> I also set 'open' permissions until the thing runs OK, then tighten them.
> Still getting '405' on PUT.
The dav module documentation and example config is at
https://nginx.org/en/docs/http/ngx_http_dav_module.html

Can you show the config that you use, plus the filesystem permissions, plus the details of a sample request that works and a sample request that fails? The nginx error log (possibly at a more-detailed level than the default) might also show why nginx is responding the way that it is. Maybe having all of that information in one place will make it clear why your system is not responding the way that you want it to.

For what it's worth: my suspicion is that the "location" blocks that you have shown are not being used with the sample requests that you make; GET works because you have another location (or the default) that is being used to read from this part of the filesystem, and PUT fails with 405 because "dav_methods off;" is the effective configuration that applies.

If you change your most recent "location calendar" to be "location /calendar" then GET will possibly start to fail with 404 until a PUT is done -- unless you also adjust the "root" so that your url-names and filesystem-names are compatible. (Or maybe it will all Just Work -- it is not immediately clear to me which file on your filesystem you want nginx-dav to create-or-overwrite when you do "PUT /calendar/Geburtstage.ics" -- is it /usr/local/nginx/html/calendar/calendar/Geburtstage.ics, or something else?)

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org  Sun Jan  2 12:53:13 2022
From: francis at daoine.org (Francis Daly)
Date: Sun, 2 Jan 2022 12:53:13 +0000
Subject: django app static files on the different server not get loaded
Message-ID: <20220102125313.GG12557@daoine.org>

On Fri, Dec 31, 2021 at 05:45:07PM -0500, ningja wrote:

Hi there,

> I have two server test1.com and test2.com.
> test1 is internet public face
> server. Test2 is intranet only server. Both servers have nginx docker
> running.

I think you are saying that the world can access https://test1.com/app1/, and can access the Django that is on test1 -- but the static files are below the url /static/, not the url /app1/static/. And the world can access https://test1.com/app2/, and can access the Django that is on test2, but cannot access the static files there, because they are also below the url /static/ and not the url /app2/static/.

In that case, the simplest thing from the nginx perspective will probably be for you to change the config of both Django apps, so that the one on test1.com believes that it is installed below /app1/ with all of its static files below the url /app1/static/, and so that the one on test2.com believes that it is installed below /app2/ with all of its static files below the url /app2/static/.

There are other things you could do instead -- but fundamentally, if "app2" does not want to be reverse-proxied (at a different part of the local url hierarchy than it thinks it is installed at), then "app2" can make itself difficult to reverse-proxy; and it is often simpler to configure "app2" to let itself be reverse-proxied than to fight it.

> How can config nginx1 to looking for app2 static file under test2.com
> https://test2.com/app2/?

nginx gets a request for /static/img/logo-2.jpg. If you know that that request should be reverse-proxied (with proxy_pass) to the test2.com server, then you must tell nginx to proxy_pass it to test2.com. And (presumably) you must tell nginx not to proxy_pass the request for /static/img/logo-1.jpg.

You can do it; but it will probably be easier to do if all of app2's static image requests are below /app2/static/img/, not /static/img/.
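A sketch of what that split could look like on test1's nginx once both apps live below their own url prefixes; the upstream address and ports here are illustrative assumptions, not taken from the thread:

```nginx
# Illustrative only: upstream addresses and ports are assumptions.
location /app1/ {
    # app1 runs on test1 itself; its static files are below /app1/static/
    proxy_pass http://127.0.0.1:8000;
}

location /app2/ {
    # everything below /app2/, including /app2/static/..., goes to test2
    proxy_pass https://test2.com:444;
}
```

With that layout, no per-file locations are needed: the single prefix match routes each app's pages and static files together.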
> #test2 server
> server {
>     listen 444;
>     server_name https://test2.com;

I don't think it's directly relevant here, but that would possibly be better as

    listen 444 ssl;
    server_name test2.com;

(although I'm not quite clear which parts of your config are in the test1 server's nginx.conf and which are in the test2 server's nginx.conf).

Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Sun Jan  2 13:51:40 2022
From: nginx-forum at forum.nginx.org (BonVonKnobloch)
Date: Sun, 02 Jan 2022 08:51:40 -0500
Subject: Setting up a webDAV server
In-Reply-To: <20220102122633.GF12557@daoine.org>
Message-ID: <5e23dd12942c17be969cfeb2ec453f99.NginxMailingListEnglish@forum.nginx.org>

Thank you Francis,
I had one part of the pathname doubled and could not see it. Seemingly I have PUT & GET functionality -- now for further testing and tightening permissions.
I confess to never being sure whether a path relative to the nginx directory or a full path is required (reading the docs did not help me).

Bob

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293215,293227#msg-293227

From mdounin at mdounin.ru  Sun Jan  2 15:29:28 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 2 Jan 2022 18:29:28 +0300
Subject: Setting up a webDAV server
In-Reply-To: <7ae8070922b2eaa7449d3e78fc8c4b92.NginxMailingListEnglish@forum.nginx.org>

Hello!

On Sat, Jan 01, 2022 at 11:00:33AM -0500, BonVonKnobloch wrote:

> Thanks for your help Maxim, but I am afraid that I still can't get PUT to work.
> File permissions are fine, so I assume that my nginx.conf is still wrong:
>
> location calendar {

It is. The location prefix doesn't start with "/", so it won't match any valid URI. You have to use "location /calendar/" instead to match requests to "/calendar/...".

> root html/calendar;

And you probably want to use "root html;" instead (or just comment it out to use the server's one), unless your files are in the "html/calendar/calendar" directory.

See http://nginx.org/en/docs/http/request_processing.html for some basic explanation of how nginx processes requests, and the "location" and "root" directives documentation for additional details:

http://nginx.org/r/location
http://nginx.org/r/root

The Beginner's Guide (http://nginx.org/en/docs/beginners_guide.html) might also be helpful.

Hope this helps.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Sun Jan  2 21:26:40 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Sun, 02 Jan 2022 16:26:40 -0500
Subject: django app static files on the different server not get loaded
In-Reply-To: <20220102125313.GG12557@daoine.org>

Hi Francis,

Thank you for spending time answering my question. I am sorry for some of the confusion here. App1 can load the static files and run correctly from URL https://test1.com/app1. Test2 has a Django app2 which has static files under /app/public/static on server test2. I can access it from URL https://test2.com/app2. Everything works, including static files.

The issue is I need to configure nginx1 to allow people to access app2 from the public internet. The config file I posted here is from the test1 server.
With this config I can access app2's html pages from the internet (just what I wanted), but the page did NOT try to load the static files from https://test2.com/app2/; instead it tried to load them from https://test1.com/app2/. How can I have nginx look for app2's static files under https://test2.com?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293229#msg-293229

From paul at stormy.ca  Sun Jan  2 23:32:10 2022
From: paul at stormy.ca (Paul)
Date: Sun, 2 Jan 2022 18:32:10 -0500
Subject: django app static files on the different server not get loaded
References: <20220102125313.GG12557@daoine.org>
Message-ID: <005b8e1b-5014-9753-485c-863f6c78541c@stormy.ca>

A happy new 2022 to all. A few thoughts, Sunday afternoon, watching the snow fall...

On 2022-01-02 4:26 p.m., ningja wrote:
[snip]
> App1 can load the static files and run correctly from URL
> https://test1.com/app1. Test2 has a Django app2 which has static files under
> /app/public/static on server test2. I can access it from URL
> https://test2.com/app2. Everything works including static files.

What happens if you curl each, with no trailing slash and app number? Assuming that they have unique IP addresses:

-- you write that your "index.html equivalent page" that you call app1 for IP test1 serves static content and runs correctly
-- you also say that for your "index.html equivalent page" that you call app2 for IP test2 "Everything works including static files."

> The issue is I need to configure nginx1

Assuming this is test1.com (or do you physically have two separate instances of nginx on two separate servers?):

> to allow people to access app2 from the public internet.

[you maybe mentioned it earlier] so the "index.html equivalent page" that you call app2 is LAN only? Conceptually, you suggest that you want app1 and app2 available on the WAN.
Why not write a simplistic entry page with two links to the two pages? You could possibly also use a 301 or a meta "refresh" to simplify your users' experience.

> The config file I post here is from test1 server.
> With this config I can access app2 html pages from the internet (just what I
> wanted) but the page did NOT try load the static files from
> https://test2.com/app2/ instead it try to load the static from
> https://test1.com/app2/. How can I have the nginx to look app2's static
> files under https://test2.com?

I didn't see the "config file I [you] post here is from test1 server", but maybe you are asking nginx to do something that could trivially be done with symbolic links? They work well, fast, and with suitable permissions pose few security risks.

Reminiscing while watching the snow fall: my first computers (Elea 9000, IBM 7090) glowed in the dark, and were marvelously intriguing until you had to read a two-foot pile of "fan-fold" to find which 80-column card had a glitch. You're talking Django/Python; I'm remembering machine language, UNIVAC and COBOL, FORTRAN -- but the world has not changed much, you're still talking to a tube or transistor.

Please don't think that nginx is your initial "I can't get it to work." A tad of curiosity, creativity, imagination and (heaven forfend) thinking will always prevail and prove rewarding.

Again, happy new year to all; and with my deepest appreciation of all the participants on this list.
Yours aye,

Paul

  \\\||//
  (@ @)
ooO_(_)_Ooo__________________________________
|______|_____|_____|_____|_____|_____|_____|_____|
|___|____|_____|_____|_____|_____|_____|_____|____|
|_____|_____| mailto:paul at stormy.ca _|____|____|

From nginx-forum at forum.nginx.org  Mon Jan  3 01:56:27 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Sun, 02 Jan 2022 20:56:27 -0500
Subject: django app static files on the different server not get loaded
In-Reply-To: <005b8e1b-5014-9753-485c-863f6c78541c@stormy.ca>

Hi Paul,

Thank you for replying to my question. We are supposed to get snow tomorrow too. Looking forward to it...

On the test1 box I did curl https://test1.com and got the landing page of app1. On the test1 box, curl -k https://test2.com got me the landing page of app2. Please see the nginx config file on test1 from my original post. Both servers have their own IP. I have two separate instances of nginx on separate servers.

I can access https://test1.com/app1 from the WAN. I can access https://test2.com/app2 from the LAN. I can not load static files from https://test2.com/app2 from the WAN.

What do you mean by "with symbolic links"? My django app2 will need to access bundle files under the static dir on app2.
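For what Paul's symlink idea could mean in practice: if app2's static tree is reachable from test1's filesystem (a shared mount or a synced copy -- an extra assumption, since the thread's two servers are separate machines), a symlink inside test1's webroot lets its nginx serve those files directly instead of proxying. All paths below are placeholders:

```shell
# Placeholder paths; substitute your real webroot and app2 static dir.
WEBROOT=$(mktemp -d)        # stands in for test1's document root
APP2_STATIC=$(mktemp -d)    # stands in for app2's static tree on a shared mount
echo "logo" > "$APP2_STATIC/logo-2.jpg"

# Expose app2's static files below the webroot without copying them:
ln -s "$APP2_STATIC" "$WEBROOT/static_app2"

# nginx serving $WEBROOT would now resolve /static_app2/logo-2.jpg
# through the link (the disable_symlinks directive must not forbid it):
cat "$WEBROOT/static_app2/logo-2.jpg"    # prints: logo
```

The trade-off versus proxy_pass: the files are served locally (fast, no second TLS hop), but the copy or mount must be kept in sync with app2's deployments.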
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293231#msg-293231

From francis at daoine.org  Mon Jan  3 10:52:48 2022
From: francis at daoine.org (Francis Daly)
Date: Mon, 3 Jan 2022 10:52:48 +0000
Subject: django app static files on the different server not get loaded
References: <20220102125313.GG12557@daoine.org>
Message-ID: <20220103105248.GH12557@daoine.org>

On Sun, Jan 02, 2022 at 04:26:40PM -0500, ningja wrote:

Hi there,

> The issue is I need to configure nginx1 to allow people to access app2 from
> the public internet. The config file I post here is from test1 server.
> With this config I can access app2 html pages from the internet (just what I
> wanted) but the page did NOT try load the static files from
> https://test2.com/app2/ instead it try to load the static from
> https://test1.com/app2/. How can I have the nginx to look app2's static
> files under https://test2.com?

You need your nginx to know that a request for /static/img/for-app1.jpg should be handled by reading from the local filesystem, and that a request for /static/img/for-app2.jpg should be handled by doing a proxy_pass to https://test2.com.

There is no (reliable) part of the request that nginx gets that will let nginx know that this request "came from" app1 while that request "came from" app2. The only thing you have is the url.

If you want to keep both sets of images under the url prefix /static/, and you are unable to write a "location" pattern that will match all-and-only the images from one of the apps, then you probably need to enumerate the urls that correspond to one of the apps.
One possible way to do this could be to keep your "location /static/" reading from the local filesystem (so: defaults to app1), and to add a set of other locations of the form

    location = /static/img/logo-2.jpg {
        proxy_pass https://test2.com:444;
    }

I suspect it would be simpler if the app2-static files were below "/static2/", or below "/app2/static/"; but it's your system and you get to choose how to set it up.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org  Mon Jan  3 11:53:07 2022
From: francis at daoine.org (Francis Daly)
Date: Mon, 3 Jan 2022 11:53:07 +0000
Subject: Setting up a webDAV server
References: <20220102122633.GF12557@daoine.org> <5e23dd12942c17be969cfeb2ec453f99.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20220103115307.GI12557@daoine.org>

On Sun, Jan 02, 2022 at 08:51:40AM -0500, BonVonKnobloch wrote:

Hi there,

> I had one part of the pathname double and could not see it.
> Seemingly I have PUT & GET functionality - now for further testing and
> tightening permissions.

Good stuff -- it sounds like you have a thing that works well enough for testing, at least :-)

> I confess to never being sure where a path relative to the nginx directory
> or a full path is required (reading the docs did not help me).

I'm not certain it always applies, but in general, for a "file" or "path" directive that relates to the filesystem: if the value starts with a / it is a full filesystem path; otherwise it is relative to the nginx default directory. There are things like "try_files" which describe the exception. For a url, it will usually be "full" (starts with /) unless you know why it should not.
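That rule of thumb, as a config fragment; the directory names are invented for illustration, and the prefix directory /usr/local/nginx is the source-build default (a packaged nginx may use a different prefix):

```nginx
# Hypothetical paths, illustrating relative vs. absolute values.
server {
    # no leading "/": relative to the nginx prefix directory,
    # e.g. /usr/local/nginx/html for a default source build
    root html;

    location /calendar/ {
        # leading "/": a full filesystem path, used as-is;
        # GET /calendar/a.ics is then served from /srv/dav/calendar/a.ics
        root /srv/dav;
    }
}
```

Note that with "root", the full location prefix is appended to the path, which is why a "root" ending in /calendar combined with "location /calendar/" yields the doubled /calendar/calendar path discussed earlier in the thread.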
If there are specific places where the documentation is unclear, or was unclear when you read it for the first time, then I suspect that people will be happy to adjust things -- but it is hard for someone who already knows what is intended to know how some text will be read by someone unfamiliar with it. So any feedback that leads to an improvement for the next reader will be good.

Thanks,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Mon Jan  3 13:49:05 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Mon, 03 Jan 2022 08:49:05 -0500
Subject: django app static files on the different server not get loaded
In-Reply-To: <20220103105248.GH12557@daoine.org>
Message-ID: <51aea218e73c086df4db9e091f6cc751.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

Thank you. Yes, both apps have a static directory. From the error message, it seems nginx IS looking under a different path: "app1/static" for app1 and "app2/static" for app2. Does that mean "static" did not confuse nginx? (I am not 100% sure.)

Another thing I should mention: I added a URL for app2 on the app1 landing page, which is a django path. I think nginx "thinks" app2 is part of app1, so it looks for the static files under test1.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293234#msg-293234

From nginx-forum at forum.nginx.org  Mon Jan  3 13:52:14 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Mon, 03 Jan 2022 08:52:14 -0500
Subject: django app static files on the different server not get loaded
Message-ID: <7769946e2ec537cfa567698b7e9fbcf8.NginxMailingListEnglish@forum.nginx.org>

p.s. If I do curl from my local machine without connecting to the vpn: curl https://test1.com gets me the landing page of app1; curl https://test2.com gets a 301.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293235#msg-293235

From nginx-forum at forum.nginx.org  Mon Jan  3 14:59:10 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Mon, 03 Jan 2022 09:59:10 -0500
Subject: django app static files on the different server not get loaded
In-Reply-To: <20220103105248.GH12557@daoine.org>
Message-ID: <2f1fc69c153e9ece8b194480588df4e9.NginxMailingListEnglish@forum.nginx.org>

Hi Francis,

I tried your suggestion:

    location = /static2/img/logo-2.jpg {
        proxy_pass https://test2.com:444;
    }

and I can access the img from http://test1/static2/img/logo-2.jpg. I'll try to edit my app2 django to use static2 instead of static.
Thank you for your suggestion,

Sue

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293236#msg-293236

From nginx-forum at forum.nginx.org  Mon Jan  3 22:09:09 2022
From: nginx-forum at forum.nginx.org (graphite_123)
Date: Mon, 03 Jan 2022 17:09:09 -0500
Subject: How to add lua support into latest nginx version
Message-ID: <92e3f91ebfd2f22e839f28f779dd3c9a.NginxMailingListEnglish@forum.nginx.org>

OS: Linux, distribution: Debian (buster)

Currently we are using nginx:1.14.2, and to support Lua we are using the libnginx-mod-http-lua:1.14.2 package.

We are working on upgrading nginx to version 1.21.0, but I'm unable to find a compatible libnginx-mod-http-lua package. Can someone please help me find a compatible version?

The only alternative I'm able to find is using lua-nginx-module (https://github.com/openresty/lua-nginx-module) (it recommends using OpenResty directly). Is there no other way to resolve this?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293239,293239#msg-293239

From teward at thomas-ward.net  Mon Jan  3 22:29:29 2022
From: teward at thomas-ward.net (Thomas Ward)
Date: Mon, 3 Jan 2022 22:29:29 +0000
Subject: How to add lua support into latest nginx version
In-Reply-To: <92e3f91ebfd2f22e839f28f779dd3c9a.NginxMailingListEnglish@forum.nginx.org>

The Lua module is a third-party module, part of the OpenResty variant of nginx. It will not be in the nginx.org repos, and is only in the OpenResty nginx repos, unless you compile it and its dependencies alongside nginx directly.
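The compile-it-yourself route usually has roughly the following shape. This is a sketch, not verified against 1.21.0: the LuaJIT install paths and the side-by-side module checkouts are assumptions; the LUAJIT_LIB/LUAJIT_INC variables and the ngx_devel_kit dependency are documented in the lua-nginx-module README:

```shell
# Sketch only: paths and checkout locations are illustrative.
# lua-nginx-module needs LuaJIT (OpenResty's branch is recommended)
# plus ngx_devel_kit, checked out next to the nginx source tree.
export LUAJIT_LIB=/usr/local/lib
export LUAJIT_INC=/usr/local/include/luajit-2.1

./configure --with-compat \
    --add-dynamic-module=../ngx_devel_kit \
    --add-dynamic-module=../lua-nginx-module
make modules    # builds the .so files without rebuilding all of nginx

# then load them in nginx.conf, ndk before lua:
#   load_module modules/ndk_http_module.so;
#   load_module modules/ngx_http_lua_module.so;
```

The lua-resty-core and lua-resty-lrucache Lua libraries are also required at runtime by recent lua-nginx-module releases, which is part of why the project points people at OpenResty, where all of these pieces ship together with matching versions.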
(Not including libs that get installed via apt or packages, I mean -- but actual other module dependencies to be compiled in to make things work properly.)

Sent from my Galaxy

-------- Original message --------
From: graphite_123
Date: 1/3/22 17:09 (GMT-05:00)
To: nginx at nginx.org
Subject: How to add lua support into latest nginx version

OS: Linux, distribution: Debian (buster)

Currently, we using nginx:1.14.2, and to support Lua we are using libnginx-mod-http-lua:1.14.2 package.

We are working on upgrading Nginx to nginx:1.21.0 version, but I'm unable to find the compatible libnginx-mod-http-lua package. Can someone please help me to find compatible version.

The only alternate way I'm able to find is using lua-nginx-module (https://github.com/openresty/lua-nginx-module) (it recommend to directly use openresty). Is there no other way to resolve this?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293239,293239#msg-293239

From osa at freebsd.org.ru  Mon Jan  3 22:43:08 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Tue, 4 Jan 2022 01:43:08 +0300
Subject: How to add lua support into latest nginx version
In-Reply-To: <92e3f91ebfd2f22e839f28f779dd3c9a.NginxMailingListEnglish@forum.nginx.org>

Hi there,

On Mon, Jan 03, 2022 at 05:09:09PM -0500, graphite_123 wrote:

> OS: Linux, distribution: Debian (buster)
>
> Currently, we using nginx:1.14.2, and to support Lua we are using
> libnginx-mod-http-lua:1.14.2 package.
> We are working on upgrading Nginx to nginx:1.21.0 version, but I'm unable to
> find the compatible libnginx-mod-http-lua package. Can someone please help
> me to find compatible version.
>
> The only alternate way I'm able to find is using lua-nginx-module
> (https://github.com/openresty/lua-nginx-module) (it recommend to directly
> use openresty). Is there no other way to resolve this?

It's possible to build a lua dynamic module for NGINX OSS and NGINX Plus:
https://www.nginx.com/blog/compiling-dynamic-modules-nginx-plus/

--
Sergey Osokin

From nginx-forum at forum.nginx.org  Mon Jan  3 22:58:32 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Mon, 03 Jan 2022 17:58:32 -0500
Subject: django app static files on the different server not get loaded
In-Reply-To: <2f1fc69c153e9ece8b194480588df4e9.NginxMailingListEnglish@forum.nginx.org>

After I modified my app2 to put all my static files under static_app2, and with Francis' suggestion

    location = /static_app2/img/logo-2.jpg {
        proxy_pass https://test2.com:444;
    }

I was able to solve my problem. Thank you Francis and Paul.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293242#msg-293242

From Ueli.Marti at ch.glory-global.com  Tue Jan  4 11:10:33 2022
From: Ueli.Marti at ch.glory-global.com (Marti, Ueli (Marin))
Date: Tue, 4 Jan 2022 11:10:33 +0000
Subject: OCSP, client certificate verification with chained CA

Hi,

I am trying to set up nginx for OCSP client certificate verification and am having trouble getting it to work with chained CAs.
My setup is as follows; all referenced files are in the attached archive.

- RootCa (pki/root/RootCa*.*): self-signed root CA certificate
- IntermediateCa (pki/intermediate/IntermediateCa*.*): intermediate CA certificate, signed by RootCa
- ServerCertificate (pki/intermediate/ServerCertificate*.*): server certificate, signed by the intermediate CA
- IntermediateClientA (pki/intermediate/IntermediateClientA*.*): intermediate client certificate A, signed by the intermediate CA (password for p12: umtest)
- IntermediateClientB (pki/intermediate/IntermediateClientB*.*): intermediate client certificate B, signed by the intermediate CA, REVOKED (password for p12: umtest)
- IntermediateOcspResponder (pki/intermediate/IntermediateOcspResponder*.*): intermediate OCSP responder certificate, extendedKeyUsage=OCSPSigning, signed by the intermediate CA

- nginx 1.20.2 runs on a Manjaro virtual machine
- the openssl ocsp responder runs on the same Manjaro box, port 8080 (started with pki/startOcspResponder.sh):

    openssl ocsp -index intermediate/index.txt -port 8080 \
        -rsigner intermediate/IntermediateOcspResponderCert.pem \
        -rkey intermediate/IntermediateOcspResponderKey.pem \
        -CA intermediate/IntermediateChainCaCert.pem -text &

The nginx mTLS configuration is as follows (full nginx.conf attached):

    ssl_ocsp on;
    ssl_verify_client on;
    ssl_client_certificate /etc/nginx/pki/intermediate/IntermediateChainCaCert.pem;
    ssl_ocsp_responder http://127.0.0.1:8080;
    ssl_verify_depth 2;

I am trying to connect from Chrome, running on the Windows host, using the Client A and Client B certificates alternately.

With the above configuration, the connection with Client A fails, which is NOT expected; Client A should be able to connect.
nginx error.log indicates:

    2022/01/04 10:06:29 [error] 2920#2920: *4 OCSP_basic_verify() failed (SSL: error:27069070:OCSP routines:OCSP_basic_verify:root ca not trusted) while requesting certificate status, responder: 127.0.0.1, peer: 127.0.0.1:8080

The connection with Client B fails too; this is expected, as the Client B certificate is revoked. nginx error.log indicates:

    2022/01/04 10:06:42 [info] 2920#2920: *14 client SSL certificate verify error: certificate revoked while reading client request headers, client: 192.168.1.115, server: localhost, request: "GET / HTTP/1.1", host: "192.168.1.110"

When changing the nginx configuration to:

    ssl_ocsp leaf;

everything works as expected: Client A can connect, Client B cannot.

Trying OCSP verification directly using the openssl CLI works as expected as well:

    openssl ocsp -issuer intermediate/IntermediateCaCert.pem \
        -CAfile intermediate/IntermediateChainCaCert.pem \
        -cert intermediate/IntermediateClientACert.pem -url http://127.0.0.1:8080
    Response verify OK
    intermediate/IntermediateClientACert.pem: good
            This Update: Jan  4 09:20:56 2022 GMT

    openssl ocsp -issuer intermediate/IntermediateCaCert.pem \
        -CAfile intermediate/IntermediateChainCaCert.pem \
        -cert intermediate/IntermediateClientBCert.pem -url http://127.0.0.1:8080
    Response verify OK
    intermediate/IntermediateClientBCert.pem: revoked
            This Update: Jan  4 09:21:37 2022 GMT
            Revocation Time: Dec 23 09:33:07 2021 GMT

How do I need to configure nginx to make OCSP validation work for the certificate chain, not only the leaf?

Thanks
The views and or opinions expressed in this e-mail are not necessarily the views of Glory Ltd, Glory Global Solutions Limited or any of their subsidiaries or affiliates and the GLORY Group of companies, their directors, officers and employees make no representation about and accept no liability for its accuracy or completeness. You should ensure that you have adequate virus protection as the GLORY Group of companies do not accept liability for any viruses. Glory Global Solutions Limited Registered No. 07945417 and Glory Global Solutions (International) Limited Registered No 6569621 are both registered in England and Wales with their registered office at: Infinity View, 1 Hazelwood, Lime Tree Way, Chineham, Basingstoke, Hampshire RG24 8WZ, United Kingdom
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx_pki.tar.gz
Type: application/x-gzip
Size: 37207 bytes
Desc: nginx_pki.tar.gz
URL:
-------------- next part --------------
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From vahan at helix.am Tue Jan 4 13:20:06 2022
From: vahan at helix.am (Vahan Yerkanian)
Date: Tue, 4 Jan 2022 17:20:06 +0400
Subject: OCSP, client certificate verification with chained CA
In-Reply-To: References: Message-ID: 

Have you tried increasing the depth?

ssl_verify_depth 3;

> On 4 Jan 2022, at 15:10, Marti, Ueli (Marin) wrote:
>
> Hi,
> i am trying to setup nginx for OCSP client certificate verification and have troubles getting it to work with chained CA's.
> My setup is as follows, all referenced files are in the attached archive.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From Ueli.Marti at ch.glory-global.com Tue Jan 4 14:44:06 2022
From: Ueli.Marti at ch.glory-global.com (Marti, Ueli (Marin))
Date: Tue, 4 Jan 2022 14:44:06 +0000
Subject: OCSP, client certificate verification with chained CA
In-Reply-To: References: Message-ID: 

Good idea, i have tried it now but doesn't change anything

From: nginx On Behalf Of Vahan Yerkanian
Sent: Tuesday, January 4, 2022 2:20 PM
To: nginx at nginx.org
Subject: Re: OCSP, client certificate verification with chained CA

Have you tried increasing the depth?

ssl_verify_depth 3;

On 4 Jan 2022, at 15:10, Marti, Ueli (Marin) wrote: Hi, i am trying to setup nginx for OCSP client certificate verification and have troubles getting it to work with chained CA's. My setup is as follows, all referenced files are in the attached archive.
[...]

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From jamesread5737 at gmail.com Wed Jan 5 00:18:31 2022
From: jamesread5737 at gmail.com (James Read)
Date: Wed, 5 Jan 2022 00:18:31 +0000
Subject: Nginx performance data
Message-ID: 

Hi,

I have some questions about Nginx performance. How many concurrent connections can Nginx handle? What throughput can Nginx achieve when serving a large number of small pages to a large number of clients (the maximum number supported)? How does Nginx achieve its performance? Is the epoll event loop all done in a single thread or are multiple threads used to split the work of serving so many different clients?

thanks in advance
James Read
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Wed Jan 5 14:22:47 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 5 Jan 2022 17:22:47 +0300
Subject: OCSP, client certificate verification with chained CA
In-Reply-To: References: Message-ID: 

Hello!

On Tue, Jan 04, 2022 at 11:10:33AM +0000, Marti, Ueli (Marin) wrote:

> [...]
>
> when changing nginx configuration to:
> ssl_ocsp leaf;
> everything works as expected, Client A can connect, Client B not.

So the OCSP check of the intermediate CA certificate is not working. Given you only have one OCSP responder running, which is only capable of signing responses for the intermediate CA, this looks like an expected result.

Have you tried to also run OCSP responder for the root CA, so the intermediate CA certificate can be checked?

[...]

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From Ueli.Marti at ch.glory-global.com Wed Jan 5 15:33:29 2022
From: Ueli.Marti at ch.glory-global.com (Marti, Ueli (Marin))
Date: Wed, 5 Jan 2022 15:33:29 +0000
Subject: OCSP, client certificate verification with chained CA
In-Reply-To: References: Message-ID: 

Ok, good point thanks. However, it seems nginx accepts only one ssl_ocsp_responder instance.
Or is there a syntax to specify multiple instances? So this would need to be solved on the responder side, which would need to be able to handle multiple CAs; openssl ocsp doesn't seem to support that.

Any chance for nginx to support multiple ssl_ocsp_responder instances in the future?
Thanks

-----Original Message-----
From: nginx On Behalf Of Maxim Dounin
Sent: Wednesday, January 5, 2022 3:23 PM
To: nginx at nginx.org
Subject: Re: OCSP, client certificate verification with chained CA

Hello!

On Tue, Jan 04, 2022 at 11:10:33AM +0000, Marti, Ueli (Marin) wrote:

[...]

So the OCSP check of the intermediate CA certificate is not working. Given you only have one OCSP responder running, which is only capable of signing responses for the intermediate CA, this looks like an expected result.

Have you tried to also run OCSP responder for the root CA, so the intermediate CA certificate can be checked?

[...]

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From francis at daoine.org Wed Jan 5 15:50:39 2022
From: francis at daoine.org (Francis Daly)
Date: Wed, 5 Jan 2022 15:50:39 +0000
Subject: django app static files on the different server not get loaded
In-Reply-To: References: <20220103105248.GH12557@daoine.org> <2f1fc69c153e9ece8b194480588df4e9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220105155039.GJ12557@daoine.org>

On Mon, Jan 03, 2022 at 05:58:32PM -0500, ningja wrote:

Hi there,

> After I modified my app2 and put all my static files to static_app2 and with
> Francis' suggestion location = /static_app2/img/logo-2.jpg { proxy_pass
> https://test2.com:444; } . I was able to solve my problem.

Good stuff; thanks for sharing the result.

You probably already have this in place, but just in case not: if all of your app2-static files are below the url prefix "/static_app2/", you can add one location to proxy_pass them all to the test2 server, like

location /static_app2/ { proxy_pass https://test2.com:444; }

and remove the "location =" piece that was there previously.

And if an "app3" in another container is added later, the central nginx.conf should only need the two new locations (/app3, and /static-app3/) for things to work as wanted too.
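Putting that suggestion together, the central nginx.conf ends up with one pair of prefix locations per containerised app. A sketch of what that could look like — only the /static_app2/ location and the test2.com:444 upstream come from the thread; the app2/app3 application locations and the test3.com:445 upstream are hypothetical placeholders following the same pattern:

```nginx
# Central reverse proxy: one pair of prefix locations per app container.
# /static_app2/ and test2.com:444 are from the thread; the /app2/, /app3/
# and /static_app3/ entries (and test3.com:445) are illustrative only.
location /app2/        { proxy_pass https://test2.com:444; }
location /static_app2/ { proxy_pass https://test2.com:444; }

location /app3/        { proxy_pass https://test3.com:445; }
location /static_app3/ { proxy_pass https://test3.com:445; }
```

Prefix locations like these avoid needing one exact-match ("location =") block per static file.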
Cheers,

f
--
Francis Daly        francis at daoine.org
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Wed Jan 5 16:36:40 2022
From: nginx-forum at forum.nginx.org (ningja)
Date: Wed, 05 Jan 2022 11:36:40 -0500
Subject: django app static files on the different server not get loaded
In-Reply-To: <20220105155039.GJ12557@daoine.org> References: <20220105155039.GJ12557@daoine.org> Message-ID: <2e110f7c48ed4257860b9e47e671f17d.NginxMailingListEnglish@forum.nginx.org>

Thank you, Francis. I did just as you said:

location /static_app2/ { proxy_pass https://test2.com:444; }

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293219,293265#msg-293265
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Thu Jan 6 03:03:52 2022
From: nginx-forum at forum.nginx.org (chakras)
Date: Wed, 05 Jan 2022 22:03:52 -0500
Subject: ipV6 failing first time, working everytime subsequently
Message-ID: <3632288951fa463a718a996b3eded039.NginxMailingListEnglish@forum.nginx.org>

Hello,

I am using nginx on a VPS to proxy requests to a GCP server. When working with IPv4, everything works as desired. However, if we enable IPv6 on the client machine, we see that the first call (/auth) fails with 401, and all subsequent calls are successful. We cannot see the first call on GCP. I have made sure that AAAA records do exist and that the GCP endpoint supports IPv6. This is my configuration for Nginx (some values redacted).
server {
    listen 80;
    listen [::]:80;
    server_name xxxxx.us www.xxxxx.us;

    location / {
        return 301 https://xxxxx.us$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name xxxxx.us www.xxxxx.us;

    root /var/www/xxxxx.us/html;
    index index.html;

    # SSL
    ssl_certificate fullchain.pem;
    ssl_certificate_key privkey.pem;
    ssl_trusted_certificate chain.pem;

    # Security
    include options-ssl-nginx.conf;
    ssl_dhparam ssl-dhparams.pem;

    client_max_body_size 10M;

    location / {
        proxy_pass https://xxxxx.appspot.com/;
    }

    location /gcp/ {
        proxy_pass https://xxxxx.appspot.com/;
    }
}

The /gcp/ proxy_pass is the one we are using for redirecting to GCP. I am always getting the first call as 401, but the second call (same) succeeds. Please let me know if you have seen this before and what I can change to fix this. Also, just to reiterate, if we disable IPv6 on the calling machine, there are no issues.

Logs (two subsequent calls):

"POST /gcp/users/auth HTTP/2.0" 401 0 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"
"POST /gcp/users/auth HTTP/2.0" 200 606 "http://localhost:3000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"

Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293266,293266#msg-293266
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Thu Jan 6 13:37:46 2022
From: nginx-forum at forum.nginx.org (David27)
Date: Thu, 06 Jan 2022 08:37:46 -0500
Subject: curious .conf problem
Message-ID: 

"https://www.thomas-walker-lynch.com" is picked up by a server block with a different virtual host name. A bit of a head scratcher, so perhaps the experts here can tell me what rookie mistake I have made? Detailed information and .txt file versions of the conf files at this link.
thomaswlynch.com/conf/info.txt

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293269,293269#msg-293269
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Thu Jan 6 16:48:02 2022
From: nginx-forum at forum.nginx.org (catadetest)
Date: Thu, 06 Jan 2022 11:48:02 -0500
Subject: Alert: ignore long locked inactive cache entry
In-Reply-To: <3c00b8441d6495fed6aa6d7eefa99e82.NginxMailingListEnglish@forum.nginx.org> References: <3c00b8441d6495fed6aa6d7eefa99e82.NginxMailingListEnglish@forum.nginx.org> Message-ID: 

Hey, did you manage to solve it somehow? Today I had the same issue:

First, an error about "ignore long locked inactive cache":

2022/01/06 16:25:57 [alert] 1851#1851: ignore long locked inactive cache entry 296f4c8a7a86be9ab8b0b1f7a36a24c9, count:1
2022/01/06 16:25:57 [alert] 1851#1851: ignore long locked inactive cache entry b7955e11728bc786ee1f3dfe4431ed46, count:1
2022/01/06 16:26:00 [alert] 1851#1851: ignore long locked inactive cache entry 994c71d9331e2ce359343c5a0585cc12, count:1
2022/01/06 16:26:45 [alert] 1851#1851: ignore long locked inactive cache entry 257227d10b2e8313fe28ae26e54c26bf, count:1
2022/01/06 16:26:45 [alert] 1851#1851: ignore long locked inactive cache entry 464329e85103586eb975860f942d2e01, count:1
2022/01/06 16:26:46 [alert] 1851#1851: ignore long locked inactive cache entry 77b4d4013a0462ba32073c9ae8b17b3f, count:2
...
2022/01/06 16:32:21 [alert] 1851#1851: ignore long locked inactive cache entry c423a4bc3ff5e1938e6e3feaaa97bd26, count:6
2022/01/06 16:32:21 [alert] 1851#1851: ignore long locked inactive cache entry fa16439b534bc481f0c165d98650e306, count:6
2022/01/06 16:32:29 [alert] 1851#1851: ignore long locked inactive cache entry 61faee7679ad426fdc296b6f14915ef4, count:2
2022/01/06 16:32:29 [alert] 1851#1851: ignore long locked inactive cache entry ff1bd077430d4b87c1c1424c3746d850, count:2

Then, an error about "too many open files":

2022/01/06 16:32:30 [crit] 1816#1816: accept4() failed (24: Too many open files)
2022/01/06 16:32:31 [crit] 1816#1816: accept4() failed (24: Too many open files)
2022/01/06 16:32:31 [crit] 1816#1816: accept4() failed (24: Too many open files)
2022/01/06 16:32:32 [crit] 1816#1816: accept4() failed (24: Too many open files)
2022/01/06 16:32:32 [crit] 1816#1816: accept4() failed (24: Too many open files)
2022/01/06 16:32:33 [crit] 1816#1816: accept4() failed (24: Too many open files)

Here's my nginx -V output:

root at my-precious-server# nginx -V
nginx version: nginx/1.14.0 (Ubuntu)
built with OpenSSL 1.1.1 11 Sep 2018
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-H4cN7P/nginx-1.14.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module

What I did was to restart the php-fpm service, and then it worked... but this is just a work-around :(

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291199,293272#msg-293272
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Thu Jan 6 17:00:52 2022
From: nginx-forum at forum.nginx.org (BonVonKnobloch)
Date: Thu, 06 Jan 2022 12:00:52 -0500
Subject: Setting up a webDAV server
In-Reply-To: <20220103115307.GI12557@daoine.org> References: <20220103115307.GI12557@daoine.org> Message-ID: <5cd2533cd658a23eab6dc0fc794a63e8.NginxMailingListEnglish@forum.nginx.org>

Thanks Francis, Maxim, I think that the docs are fine.
Coming from an OS/network world and not steeped in HTTP, I had mentally reversed the concepts of location and root. All is (almost) clear now.
Bob

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293215,293273#msg-293273
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Thu Jan 6 20:36:03 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 6 Jan 2022 23:36:03 +0300
Subject: OCSP, client certificate verification with chained CA
In-Reply-To: References: Message-ID: 

Hello!

On Wed, Jan 05, 2022 at 03:33:29PM +0000, Marti, Ueli (Marin) wrote:

> Ok, good point thanks.
> However, it seems nginx accepts only one ssl_ocsp_responder
> instance. Or is there a syntax to specify multiple instances ?
> So this would need to be solved on the responder side which
> would need to be able to handle multiple CAs. Openssl ocsp
> doesn't seem to support that.
>
> Any chance for nginx to support multiple ssl_ocsp_responder
> instances in the future ?

Normally you shouldn't use the ssl_ocsp_responder directive at all: instead, the certificate's Authority Information Access (AIA) extension is used to obtain the appropriate OCSP responder address. The ssl_ocsp_responder directive is something to be used to manually override the information from the AIA extension, either for testing or for complex configurations where you want to redefine the OCSP server address for some reason.

If you do this, you can distinguish OCSP requests to different certificates based on the information in the requests, such as the issuer name and issuer key hashes. If the OCSP responder you use is not capable of doing this, consider removing the ssl_ocsp_responder directive, so nginx will use the AIA extension instead.

(Note well that using OpenSSL's builtin OCSP responder for anything but tests might not be a good idea.)
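Concretely, that suggestion amounts to dropping the override so nginx follows each certificate's AIA extension. A minimal sketch, reusing the paths from the original test setup — it assumes the issued certificates actually carry an OCSP URL in their AIA extension, which the responder override had been papering over:

```nginx
# mTLS with full-chain OCSP: no ssl_ocsp_responder override, so nginx
# queries the responder URL found in each certificate's AIA extension.
ssl_verify_client      on;
ssl_client_certificate /etc/nginx/pki/intermediate/IntermediateChainCaCert.pem;
ssl_verify_depth       2;
ssl_ocsp               on;
```

With per-certificate AIA URLs, the leaf and the intermediate can point at different responders, which sidesteps the one-responder limitation entirely.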
-- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Thu Jan 6 23:01:22 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Jan 2022 23:01:22 +0000 Subject: curious .conf problem In-Reply-To: References: Message-ID: <20220106230122.GK12557@daoine.org> On Thu, Jan 06, 2022 at 08:37:46AM -0500, David27 wrote: Hi there, It's better for future readers if you include the full details in the email; but in this case it seems straightforward based on the url content I see. > "https://www.thomas-walker-lynch.com" is picked up by a server block with a > different virtual host name. A bit of a head scratcher so perhaps the > experts here can tell me what rookie mistake I have made? http://nginx.org/en/docs/http/request_processing.html If you have no "server_name www.thomas-walker-lynch.com;" (or equivalent patterns, per https://nginx.org/r/server_name) in a suitable server{}, then the request will have to be handled in a server block with a different virtual host name. Cheers, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Thu Jan 6 23:13:28 2022 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Jan 2022 23:13:28 +0000 Subject: ipV6 failing first time, working everytime subsequently In-Reply-To: <3632288951fa463a718a996b3eded039.NginxMailingListEnglish@forum.nginx.org> References: <3632288951fa463a718a996b3eded039.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220106231328.GL12557@daoine.org> On Wed, Jan 05, 2022 at 10:03:52PM -0500, chakras wrote: Hi there, > I am using nginx on a VPS to proxy requests to a GCP server. When working > with IPv4, everything works as desired. 
However, if we enable IPv6 on client > machine, we see that the first call (/auth) fails with 401, and all > subsequent calls are successful. That does seem odd. Does the nginx error log or debug log give any indication of why nginx sends a 401? Nothing obvious in the config would generate that within nginx. The log snippet does not include the usual first parts that would indicate the authentication username sent with the request. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Jan 7 02:19:44 2022 From: nginx-forum at forum.nginx.org (chakras) Date: Thu, 06 Jan 2022 21:19:44 -0500 Subject: ipV6 failing first time, working everytime subsequently In-Reply-To: <20220106231328.GL12557@daoine.org> References: <20220106231328.GL12557@daoine.org> Message-ID: <804189ccf0fbc51df044ff02ef0044f4.NginxMailingListEnglish@forum.nginx.org> Hello Francis, Thanks for your response. I have verified from Network logs on the browser that POST request is sending the username and password in both cases. So, I wasn't sure if additional headers need to be sent for switching to happen seamlessly. The issue is after that first failure there is no subsequent failure for entire session of the application. I am still trying to debug what can cause this issue. Thanks, Suvendra Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293266,293277#msg-293277 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From Ueli.Marti at ch.glory-global.com Fri Jan 7 07:24:53 2022 From: Ueli.Marti at ch.glory-global.com (Marti, Ueli (Marin)) Date: Fri, 7 Jan 2022 07:24:53 +0000 Subject: OCSP, client certificate verification with chained CA In-Reply-To: References: Message-ID: Hi, ok that makes sense. Thank you for the feedback. 
-----Original Message----- From: nginx On Behalf Of Maxim Dounin Sent: Thursday, January 6, 2022 9:36 PM To: nginx at nginx.org Subject: Re: OCSP, client certificate verification with chained CA CAUTION: This email originated outside the company. Do not click links or open attachments unless you are expecting them from the sender. Hello! On Wed, Jan 05, 2022 at 03:33:29PM +0000, Marti, Ueli (Marin) wrote: > Ok, good point thanks. > However, it seems nginx accepts only one ssl_ocsp_responder instance. > Or is there a syntax to specify multiple instances ? > So this would need to be solved on the responder side which would need > to be able to handle multiple CAs. Openssl ocsp doesn't seem to > support that. > > Any chance for nginx to support multiple ssl_ocsp_responder instances > in the future ? Normally you shouldn't use ssl_ocsp_responder responder at all: instead, certificate's Authority Information Access (AIA) extension is used to obtain appropriate OCSP responder address. The ssl_ocsp_responder directive is something to be used to manually override information from AIA extension, either for testing or for complex configurations when you want to redefine OCSP server address for some reason. If you do this, you can distinguish OCSP requests to different certificates based on the information in the requests, such as issuer name and issuer key hashes. If the OCSP responder you use is not capable of doing this, consider removing the ssl_ocsp_responder directive, so nginx will use the AIA extension instead. (Note well that using OpenSSL's builtin OCSP responder for anything but tests might not be a good idea.) 
-- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx This e-mail and any files attached are strictly confidential, may be legally privileged and are intended solely for the addressee. If you are not the intended recipient please notify the sender immediately by return email and then delete the e-mail and any attachments immediately. The views and or opinions expressed in this e-mail are not necessarily the views of Glory Ltd, Glory Global Solutions Limited or any of their subsidiaries or affiliates and the GLORY Group of companies, their directors, officers and employees make no representation about and accept no liability for its accuracy or completeness. You should ensure that you have adequate virus protection as the GLORY Group of companies do not accept liability for any viruses. Glory Global Solutions Limited Registered No. 
07945417 and Glory Global Solutions (International) Limited Registered No 6569621 are both registered in England and Wales with their registered office at: Infinity View, 1 Hazelwood, Lime Tree Way, Chineham, Basingstoke, Hampshire RG24 8WZ, United Kingdom _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Fri Jan 7 08:14:42 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Jan 2022 08:14:42 +0000 Subject: ipV6 failing first time, working everytime subsequently In-Reply-To: <804189ccf0fbc51df044ff02ef0044f4.NginxMailingListEnglish@forum.nginx.org> References: <20220106231328.GL12557@daoine.org> <804189ccf0fbc51df044ff02ef0044f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220107081442.GM12557@daoine.org> On Thu, Jan 06, 2022 at 09:19:44PM -0500, chakras wrote: Hi there, > Thanks for your response. I have verified from Network logs on the browser > that POST request is sending the username and password in both cases. Where in the data flow is the username/password validated, that might generate the 401 response that you see? Should it happen in part of the nginx config that you have not shown; or should it happen in the proxy_pass'ed request that you do not see happen? > So, I > wasn't sure if additional headers need to be sent for switching to happen > seamlessly. The issue is after that first failure there is no subsequent > failure for entire session of the application. In general, for a browser doing normal web browsing, 401 followed by 200 is normal and expected. In your specific case, it may not be; and the fact that it only-and-reliably happens when the client uses IPv6 not IPv4 does seem odd. I think you indicated that the only mentions of a 401 response in your nginx logs are when the client has connected from an IPv6 address. > I am still trying to debug what can cause this issue. 
Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Fri Jan 7 11:37:03 2022 From: jamesread5737 at gmail.com (James Read) Date: Fri, 7 Jan 2022 11:37:03 +0000 Subject: Nginx performance data In-Reply-To: References: Message-ID: On Wed, Jan 5, 2022 at 12:18 AM James Read wrote: > Hi, > > I have some questions about Nginx performance. How many concurrent > connections can Nginx handle? What throughput can Nginx achieve when > serving a large number of small pages to a large number of clients (the > maximum number supported)? How does Nginx achieve its performance? Is the > epoll event loop all done in a single thread or are multiple threads used > to split the work of serving so many different clients? > > Anyone? > thanks in advance > James Read > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From anoopalias01 at gmail.com Fri Jan 7 11:55:44 2022 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 7 Jan 2022 17:25:44 +0530 Subject: Nginx performance data In-Reply-To: References: Message-ID: This basically depends on your hardware and network speed etc Nginx is event-driven and does not fork a separate process for handling new connections which basically makes it different from Apache httpd On Wed, Jan 5, 2022 at 5:48 AM James Read wrote: > Hi, > > I have some questions about Nginx performance. How many concurrent > connections can Nginx handle? What throughput can Nginx achieve when > serving a large number of small pages to a large number of clients (the > maximum number supported)? How does Nginx achieve its performance? 
Is the > epoll event loop all done in a single thread or are multiple threads used > to split the work of serving so many different clients? > > thanks in advance > James Read > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Fri Jan 7 12:17:59 2022 From: r at roze.lv (Reinis Rozitis) Date: Fri, 7 Jan 2022 14:17:59 +0200 Subject: Nginx performance data In-Reply-To: References: Message-ID: <000c01d803c0$9c652c70$d52f8550$@roze.lv> > Anyone? Since the questions are quite general (like the upper limits are usually hardware bound so the performance numbers vary based on that) maybe reading these blog posts can give some insight: https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/ and other articles https://www.nginx.com/blog/tag/performance-testing/ rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Fri Jan 7 12:45:45 2022 From: jamesread5737 at gmail.com (James Read) Date: Fri, 7 Jan 2022 12:45:45 +0000 Subject: Nginx performance data In-Reply-To: <000c01d803c0$9c652c70$d52f8550$@roze.lv> References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> Message-ID: On Fri, Jan 7, 2022 at 12:18 PM Reinis Rozitis wrote: > > Anyone? 
> > Since the questions are quite general (like the upper limits are usually > hardware bound so the performance numbers vary based on that) maybe reading > these blog posts can give some insight: > > > https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/ I don't view the test described as valid because the test is between one client and one server. I'm interested in testing with one server and many clients. The reason I'm asking is because I developed an epoll-based client that connects to many servers and the performance was not impressive once over 1024 concurrent connections were reached. See this question on stackoverflow for more details: https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number In contrast, when I test with wrk, which is also epoll based and creates many concurrent connections from one client to one server, the performance is markedly improved. This is why I'm interested in Nginx performance with many clients. I want to see if there is a bug in my epoll-based client that is limiting performance. If Nginx performs well with more than 1024 clients then that would seem to indicate there is a bug in my epoll-based client. James Read > > and other articles https://www.nginx.com/blog/tag/performance-testing/ > > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Fri Jan 7 13:03:05 2022 From: jamesread5737 at gmail.com (James Read) Date: Fri, 7 Jan 2022 13:03:05 +0000 Subject: Nginx performance data In-Reply-To: References: Message-ID: On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias wrote: > This basically depends on your hardware and network speed etc > > Nginx is event-driven and does not fork a separate process for handling > new connections which basically makes it different from Apache httpd > Just to be clear Nginx is entirely single threaded? James Read > > On Wed, Jan 5, 2022 at 5:48 AM James Read wrote: > >> Hi, >> >> I have some questions about Nginx performance. How many concurrent >> connections can Nginx handle? What throughput can Nginx achieve when >> serving a large number of small pages to a large number of clients (the >> maximum number supported)? How does Nginx achieve its performance? Is the >> epoll event loop all done in a single thread or are multiple threads used >> to split the work of serving so many different clients? >> >> thanks in advance >> James Read >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > *Anoop P Alias* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From anoopalias01 at gmail.com Fri Jan 7 13:13:27 2022 From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 7 Jan 2022 18:43:27 +0530 Subject: Nginx performance data In-Reply-To: References: Message-ID: https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ On Fri, Jan 7, 2022 at 6:33 PM James Read wrote: > > > On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias > wrote: > >> This basically depends on your hardware and network speed etc >> >> Nginx is event-driven and does not fork a separate process for handling >> new connections which basically makes it different from Apache httpd >> > > Just to be clear Nginx is entirely single threaded? > > James Read > > >> >> On Wed, Jan 5, 2022 at 5:48 AM James Read >> wrote: >> >>> Hi, >>> >>> I have some questions about Nginx performance. How many concurrent >>> connections can Nginx handle? What throughput can Nginx achieve when >>> serving a large number of small pages to a large number of clients (the >>> maximum number supported)? How does Nginx achieve its performance? Is the >>> epoll event loop all done in a single thread or are multiple threads used >>> to split the work of serving so many different clients? >>> >>> thanks in advance >>> James Read >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> -- >> *Anoop P Alias* >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Fri Jan 7 13:36:04 2022 From: jamesread5737 at gmail.com (James Read) Date: Fri, 7 Jan 2022 13:36:04 +0000 Subject: Nginx performance data In-Reply-To: References: Message-ID: On Fri, Jan 7, 2022 at 1:13 PM Anoop Alias wrote: > > https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ > Nice article. So the short answer is that Nginx does in fact use multiple processes. Does anybody know what timeout is used in Nginx call to epoll_wait()? Is there some heuristic for calculating the optimal timeout on each iteration? James Read > > > On Fri, Jan 7, 2022 at 6:33 PM James Read wrote: > >> >> >> On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias >> wrote: >> >>> This basically depends on your hardware and network speed etc >>> >>> Nginx is event-driven and does not fork a separate process for handling >>> new connections which basically makes it different from Apache httpd >>> >> >> Just to be clear Nginx is entirely single threaded? >> >> James Read >> >> >>> >>> On Wed, Jan 5, 2022 at 5:48 AM James Read >>> wrote: >>> >>>> Hi, >>>> >>>> I have some questions about Nginx performance. How many concurrent >>>> connections can Nginx handle? What throughput can Nginx achieve when >>>> serving a large number of small pages to a large number of clients (the >>>> maximum number supported)? How does Nginx achieve its performance? Is the >>>> epoll event loop all done in a single thread or are multiple threads used >>>> to split the work of serving so many different clients? 
>>>> >>>> thanks in advance >>>> James Read >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> -- >>> *Anoop P Alias* >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > *Anoop P Alias* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Jan 7 17:02:03 2022 From: nginx-forum at forum.nginx.org (chakras) Date: Fri, 07 Jan 2022 12:02:03 -0500 Subject: ipV6 failing first time, working everytime subsequently In-Reply-To: <20220107081442.GM12557@daoine.org> References: <20220107081442.GM12557@daoine.org> Message-ID: Hello Francis, To answer your question partially, username/ password validation happens on GCP server. We do a POST request and send a JSON object with those values filled in. If the login succeeds, we send back a token. Nginx is really working just as a proxy here. Something like this on browser Network debug log, Request URL: https://xxxxx.us/gcp/users/auth Request Method: POST Status Code: 401 Remote Address: xxx.xxx.xxx.xxx Referrer Policy: strict-origin-when-cross-origin In this case 401 is valid (I am on IPv4) as the user/ pass was wrong. 
Payload: {username: "johnie", password: "yespapa", remember: null} Successful login will return 200 and an object that looks something like this, access_token: "a.b.c.d" attributes: { "username": "string", "roles": [] etc... } Thanks, Suvendra Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293266,293287#msg-293287 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Jan 7 18:46:06 2022 From: nginx-forum at forum.nginx.org (David27) Date: Fri, 07 Jan 2022 13:46:06 -0500 Subject: curious .conf problem In-Reply-To: <20220106230122.GK12557@daoine.org> References: <20220106230122.GK12557@daoine.org> Message-ID: Hello Francis, > If you have no "server_name www.thomas-walker-lynch.com;" ( There is indeed a server block already defined with a server name www.thomas-walker-lynch.com. Nginx just chooses to use a different block that has a different server name. (Or perhaps I messed up the syntax for it lol) Note that everything else seems to be working. As for copying *everything* into the email ... 1) I thought folks would rather see the entire files rather than quotes from them, so I provided txt links. Did the link not work for you? 2) Gosh I am also hesitant to put the entire sites .conf file in a public email list where for evermore they are viewable by people who might notice exploits, even those unrelated to the issue at hand. Well in any case, find the entire .conf file for the second virtual host below. Also Here is a detailed description of the problem: thomaswlynch.com/conf/info.txt This is the .conf file used for the main website (which is working). 
It is the only other .conf file currently in sites-enabled thomaswlynch.com/conf/customer_gateway.txt This is the .conf file for the second host, which has the problem: thomaswlynch.com/conf/thomas-walker-lynch.txt The following is the .conf file found at the above link: server { server_name .thomaswlynch.com; return 301 https://thomas-walker-lynch.com$request_uri; } server { server_name .thomaswalkerlynch.com; return 301 https://thomas-walker-lynch.com$request_uri; } server { server_name www.thomas-walker-lynch.com; return 301 https://thomas-walker-lynch.com$request_uri; } server { server_name thomas-walker-lynch.com; listen 80; listen [::]:80; return 301 https://thomas-walker-lynch.com$request_uri; } server { server_name thomas-walker-lynch.com charset utf-8; listen [::]:443 ssl; listen 443 ssl; access_log /var/log/nginx/thomas-walker-lynch.com_access.log; error_log /var/log/nginx/thomas-walker-lynch.com_error.log; root /var/www/html/thomas-walker-lynch.com; index index.php; location ~ \.php$ { include snippets/fastcgi-php.conf; fastcgi_pass unix:/run/php/php7.3-fpm.sock; } ssl_certificate /etc/letsencrypt/live/reasoningtechnology.com-0001/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/reasoningtechnology.com-0001/privkey.pem; # managed by Certbot } server { server_name _; return 404; } upstream wsgi_server_location{ server unix://home/nginx_customer_gateway_mediary/socket; } server { server_name reasoningtechnology.com; charset utf-8; listen [::]:443 ssl; listen 443 ssl; access_log /var/log/nginx/cg.reasoningtechnology.com_access.log; error_log /var/log/nginx/cg.reasoningtechnology.com_error.log; client_max_body_size 75M; location / { uwsgi_pass wsgi_server_location; include uwsgi_params; } location /static/{ root /var/www/html/customer_gateway; } ssl_certificate /etc/letsencrypt/live/reasoningtechnology.com-0001/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/reasoningtechnology.com-0001/privkey.pem; 
# managed by Certbot } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293269,293288#msg-293288 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Fri Jan 7 21:29:39 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Jan 2022 21:29:39 +0000 Subject: curious .conf problem In-Reply-To: References: <20220106230122.GK12557@daoine.org> Message-ID: <20220107212939.GN12557@daoine.org> On Fri, Jan 07, 2022 at 01:46:06PM -0500, David27 wrote: Hi there, > > If you have no "server_name www.thomas-walker-lynch.com;" ( > > There is indeed a server block already defined with a server name > www.thomas-walker-lynch.com. "in a suitable server{}". For a connection to port 80, there are 12 server{} blocks in your config. They are irrelevant for a connection to port 443. For a https connection to port 443, there are 2 server{} blocks. One has one server_name: reasoningtechnology.com; and the other has three server_names: thomas-walker-lynch.com, charset, and utf-8. (The missing semi-colon is not a syntax error when the directive takes any number of arguments; it's also not a problem that affects this specific issue.) None of those four are names or patterns that match "www.thomas-walker-lynch.com". > Nginx just chooses to use a different block > that has a different server name. (Or perhaps I messed up the syntax for it > lol) Note that everything else seems to be working. Yes. http://most-things should probably be working, because they redirect to the expected https names; but I would expect that https://anything-except-thomas-walker-lynch.com would give you the content from the reasoningtechnology.com server. 
(Because if you do not declare which server{} should be the "default_server" for a particular IP:port when the request does not match any server_name values, nginx knows that you intended its default choice to be the one: http://nginx.org/r/listen) > As for copying *everything* into the email ... 1) I thought folks would > rather see the entire files rather than quotes from them, so I provided txt > links. Did the link not work for you? The entire files are good, yes. And the links worked for me yesterday. They may or may not work for someone reading the mail archives in three years, when they find they have a similar observed problem and they'd love to know whether the specific problem-and-solution here applies to them. > 2) Gosh I am also hesitant to put > the entire sites .conf file in a public email list where for evermore they > are viewable by people who might notice exploits, even those unrelated to > the issue at hand. That is a valid concern; best is if you can include a complete-but-minimal config that shows the problem but shows no secrets. All of the unrelated parts can be omitted, or replaced consistently with a different value. The difficulty is that if you knew for certain which were the unrelated parts, you probably wouldn't be asking the question. I don't think there is a great resolution to that conflict, unfortunately. > Well in any case, find the entire .conf file for the > second virtual host below. Thanks for making the config available. In this case: for a https connection, only the server{} blocks with a "listen 443"-type directive matter. > server { > server_name thomas-walker-lynch.com > charset utf-8; > listen [::]:443 ssl; > listen 443 ssl; > server { > server_name reasoningtechnology.com; > charset utf-8; > listen [::]:443 ssl; > listen 443 ssl; There is no explicit "default_server", so "the first" is the default. 
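An explicit default removes the dependence on ordering. For example (a sketch; the certificate paths are placeholders, since the TLS handshake still needs some certificate even in a catch-all block):

```nginx
# Hypothetical sketch: mark one server{} as the explicit default for
# HTTPS connections whose name matches no server_name.
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/placeholder.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/placeholder.key;
    return 444;    # close the connection for unmatched names
}
```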
If they are part of nginx.conf because of an "include sites-enabled/*.conf" directive, then the files are loaded in alphabetical filename order -- "c" before "t". The fix is (probably) to change the server_name directive to include all of the names that you want (https://nginx.org/r/server_name); being aware that the client will likely be unhappy if it is presented with a certificate that does not cover the name they used to access the service -- so you might also want to change the certificate, if relevant. Cheers, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Fri Jan 7 21:37:02 2022 From: francis at daoine.org (Francis Daly) Date: Fri, 7 Jan 2022 21:37:02 +0000 Subject: ipV6 failing first time, working everytime subsequently In-Reply-To: References: <20220107081442.GM12557@daoine.org> Message-ID: <20220107213702.GO12557@daoine.org> On Fri, Jan 07, 2022 at 12:02:03PM -0500, chakras wrote: Hi there, > To answer your question partially, username/ password validation happens on > GCP server. We do a POST request and send a JSON object with those values > filled in. If the login succeeds, we send back a token. Nginx is really > working just as a proxy here. Something like this on browser Network debug > log, > > Request URL: https://xxxxx.us/gcp/users/auth > Request Method: POST > Status Code: 401 > Remote Address: xxx.xxx.xxx.xxx > Referrer Policy: strict-origin-when-cross-origin > > In this case 401 is valid (I am on IPv4) as the user/ pass was wrong. I think you have said: * the initial 401 response comes from GCP * this initial request does not go to GCP I don't think that both of those can be correct simultaneously. Possibly the nginx logs, the GCP logs, or a tcpdump in the middle, can show what actually happens in this initial request? That might help you see where things go wrong. 
Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Fri Jan 7 23:56:01 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 8 Jan 2022 02:56:01 +0300 Subject: Nginx performance data In-Reply-To: References: Message-ID: Hello! On Fri, Jan 07, 2022 at 01:36:04PM +0000, James Read wrote: > Nice article. So the short answer is that Nginx does in fact use multiple > processes. Does anybody know what timeout is used in Nginx call to > epoll_wait()? Is there some heuristic for calculating the optimal timeout > on each iteration? I think you might have some better responses to your questions if you'll follow the "How To Ask Questions The Smart Way" guide. In particular, the "Before You Ask" section: http://www.catb.org/~esr/faqs/smart-questions.html#before Hope this helps. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Sat Jan 8 10:40:18 2022 From: r at roze.lv (Reinis Rozitis) Date: Sat, 8 Jan 2022 12:40:18 +0200 Subject: Nginx performance data In-Reply-To: References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> Message-ID: <004301d8047c$21402f60$63c08e20$@roze.lv> >> https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/ > > I don't view the test described as valid because the test is between one client and one server. I'm interested in testing with one server and many clients. wrk (used in the tests) [1] is a multithread/connection benchmark tool. You can simulate how many clients you want (basically you are just limited by ephemeral port tuple count ~65k .. but then again can work around it by assigning multiple IPs to the server/client). 
[1] https://github.com/wg/wrk rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Sat Jan 8 20:50:41 2022 From: jamesread5737 at gmail.com (James Read) Date: Sat, 8 Jan 2022 20:50:41 +0000 Subject: Nginx performance data In-Reply-To: <004301d8047c$21402f60$63c08e20$@roze.lv> References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> <004301d8047c$21402f60$63c08e20$@roze.lv> Message-ID: On Sat, Jan 8, 2022 at 10:40 AM Reinis Rozitis wrote: > >> > https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/ > > > > I don't view the test described as valid because the test is between one > client and one server. I'm interested in testing with one server and many > clients. > > wrk (used in the tests) [1] is a multithread/connection benchmark tool. > You can simulate how many clients you want (basically you are just limited > by ephemeral port tuple count ~65k .. but then again can work around it by > assigning multiple IPs to the server/client). > > > Yes. I've used wrk in my own tests. It's a very good tool. The problem is I don't think wrk really can simulate communication from many clients. There is something different between simulating many clients with one real client and actually testing with many real different clients. I can't put my finger on what the difference is. But there must be a difference. Otherwise why is my application running into such performance limits as mentioned in this question on stackoverflow https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number ? > [1] https://github.com/wg/wrk > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Sat Jan 8 20:53:44 2022 From: jamesread5737 at gmail.com (James Read) Date: Sat, 8 Jan 2022 20:53:44 +0000 Subject: Nginx performance data In-Reply-To: References: Message-ID: On Fri, Jan 7, 2022 at 11:56 PM Maxim Dounin wrote: > Hello! > > On Fri, Jan 07, 2022 at 01:36:04PM +0000, James Read wrote: > > > Nice article. So the short answer is that Nginx does in fact use multiple > > processes. Does anybody know what timeout is used in Nginx call to > > epoll_wait()? Is there some heuristic for calculating the optimal timeout > > on each iteration? > > I think you might have some better responses to your questions if > you'll follow the "How To Ask Questions The Smart Way" guide. In > particular, the "Before You Ask" section: > Of course, reading the source code would answer the question if my code reading and comprehension skills were that good. Just to get me started which of the many source files should I look in to get the answer? thanks, James Read > > http://www.catb.org/~esr/faqs/smart-questions.html#before > > Hope this helps. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Jan 8 21:20:57 2022 From: francis at daoine.org (Francis Daly) Date: Sat, 8 Jan 2022 21:20:57 +0000 Subject: Nginx performance data In-Reply-To: References: Message-ID: <20220108212057.GP12557@daoine.org> On Sat, Jan 08, 2022 at 08:53:44PM +0000, James Read wrote: > > On Fri, Jan 07, 2022 at 01:36:04PM +0000, James Read wrote: Hi there, > > > Does anybody know what timeout is used in Nginx call to > > > epoll_wait()? > which of the many source files should I look in to get the answer? http://nginx.org/en/download.html $ grep -rl epoll_wait nginx-1.21.5 nginx-1.21.5/src/http/ngx_http_upstream.c nginx-1.21.5/src/event/modules/ngx_epoll_module.c I'd probably start there. Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Sat Jan 8 22:02:00 2022 From: jamesread5737 at gmail.com (James Read) Date: Sat, 8 Jan 2022 22:02:00 +0000 Subject: Nginx performance data In-Reply-To: <20220108212057.GP12557@daoine.org> References: <20220108212057.GP12557@daoine.org> Message-ID: On Sat, Jan 8, 2022 at 9:21 PM Francis Daly wrote: > On Sat, Jan 08, 2022 at 08:53:44PM +0000, James Read wrote: > > > On Fri, Jan 07, 2022 at 01:36:04PM +0000, James Read wrote: > > Hi there, > > > > > Does anybody know what timeout is used in Nginx call to > > > > epoll_wait()? > > > which of the many source files should I look in to get the answer? > > http://nginx.org/en/download.html > > $ grep -rl epoll_wait nginx-1.21.5 > nginx-1.21.5/src/http/ngx_http_upstream.c > nginx-1.21.5/src/event/modules/ngx_epoll_module.c > > I'd probably start there. > Yeah. Thanks. 
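What that timeout argument does can be seen in a few lines of Python, since Python's select.epoll is a thin wrapper over the same epoll_wait(2) call (a standalone sketch, not nginx code; the numbers are illustrative):

```python
import select
import socket
import time

# A connected pair gives us an fd that stays idle until we write to it.
a, b = socket.socketpair()
ep = select.epoll()
ep.register(a.fileno(), select.EPOLLIN)

start = time.monotonic()
timed_out = ep.poll(0.2)     # like epoll_wait(ep, ..., timer) with timer=200ms
waited = time.monotonic() - start
print(timed_out)             # [] -- nothing was ready, the timeout expired

b.send(b"x")                 # now make the fd readable
ready = ep.poll(0.2)         # returns immediately, well before the timeout
print(len(ready))            # 1

ep.close()
a.close()
b.close()
```

As for where nginx gets the value: from a read of the 1.21.x source (worth verifying), ngx_process_events_and_timers() in src/event/ngx_event.c computes it per iteration via ngx_event_find_timer() -- the nearest expiry in the event-timer red-black tree -- rather than from a fixed heuristic.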
In line 800 of nginx-1.21.5/src/event/modules/ngx_epoll_module.c it says events = epoll_wait(ep, event_list, (int) nevents, timer); Thus the variable timer is used. Line 123 and 124 seem to give the definition of the function call which supplies the timer variable static ngx_int_t ngx_epoll_process_events(ngx_cycle_t *cycle, ngx_msec_t timer, ngx_uint_t flags); So then I did: grep -rl ngx_epoll_process_events nginx-1.21.5 and got: nginx-1.21.5/src/event/modules/ngx_epoll_module.c I can't find any calls to that function in that file. Only the definition of the function itself. So I still have no idea how the timer variable is calculated. thanks, James Read > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sat Jan 8 23:49:39 2022 From: nginx-forum at forum.nginx.org (David27) Date: Sat, 08 Jan 2022 18:49:39 -0500 Subject: curious .conf problem In-Reply-To: <20220107212939.GN12557@daoine.org> References: <20220107212939.GN12557@daoine.org> Message-ID: <2141a0063b5556fb98989b51b1f166f9.NginxMailingListEnglish@forum.nginx.org> Hello Francis, thanks for the hints. It is working now. >> 2) Gosh I am also hesitant to put >> the entire sites .conf file in a public email list where for evermore they >> are viewable by people who might notice exploits, even those unrelated to >> the issue at hand. > >That is a valid concern; best is if you can include a complete-but-minimal > config I will endeavor to provide a description that a reader can follow here. First, these things made it more difficult to test/debug: 1. 
I copied over another configuration file that was in use and "working", thinking that it must be correct, but it wasn't. 2. The browser navigation bar was prefixing www automatically the second time a host was typed when the first time it had the www and second time it didn't. 3. The nginx access log was not complaining, but the site was not coming up. This is because by the rules it picked another server block with a different name, and by some nginx logic that was OK. (Francis explained it picked the first server block when there is no server name match for the given port. However, there was a catch all at the bottom of the conf file. That too was copied ... hmm 4. when sites were accessed via the browser without stating an explicit protocol, and no https protocol was specified in a server block, they defaulted to http rather than defaulting to https, then redirected by the server block for port 80. Hence some of the redirections on the original config that I copied were actually untested. My mistakes: 1. I assumed that the server block would apply independent of the port if there was no listen port attributed to it. That seemed to be implied by the config file I had copied.. Francis points out here that is not the case, rather if there is no listen attribute, the server block will apply only to port 80. 2. The proximate cause for all these problems: Certbot did not generate an SSL certificate for a server block with a 'dot' prefix name even when it was listening to 443. It didn't complain, it just didn't expand the certificate. 
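A note on why certbot stayed silent: nginx's ".thomaswlynch.com" shorthand stands for both "thomaswlynch.com" and "*.thomaswlynch.com", and Let's Encrypt will only issue a wildcard name through the DNS-01 challenge -- the HTTP-based challenge that the nginx integration uses cannot prove control of every subdomain. A sketch of the manual DNS-01 route, should a genuine wildcard ever be needed (flags from the certbot documentation; the domain is illustrative):

```sh
# Wildcard issuance works only via the dns-01 challenge; certbot will
# prompt for a TXT record to publish under _acme-challenge.example.com
certbot certonly --manual --preferred-challenges dns \
    -d example.com -d '*.example.com'
```

The alternative -- listing each concrete name -- is what resolves it here.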
Solution: This original redirect server blocks of this form: server { server_name .thomaswlynch.com; # listen 80; # listen [::]:80; # listen [::]:443 ssl; # listen 443 ssl; return 301 https://thomas-walker-lynch.com$request_uri; } were modified to the form that follows, then certbot was run again: server { server_name thomaswlynch.com www.thomaswlynch.com; listen 80; listen [::]:80; listen [::]:443 ssl; listen 443 ssl; return 301 https://thomas-walker-lynch.com$request_uri; ssl_certificate /etc/letsencrypt/live/reasoningtechnology.com-0001/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/reasoningtechnology.com-0001/privkey.pem; # managed by Certbot } With the server names listed with explicit subdomains, instead of using a dot prefix, certbot expanded the certificate to contain them. It only makes sense that certbot can not issue a certificate against a wildcard subdomain, though it would have been nice had it issued a warning. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293269,293298#msg-293298 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From r at roze.lv Sun Jan 9 00:47:06 2022 From: r at roze.lv (Reinis Rozitis) Date: Sun, 9 Jan 2022 02:47:06 +0200 Subject: Nginx performance data In-Reply-To: References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> <004301d8047c$21402f60$63c08e20$@roze.lv> Message-ID: <000001d804f2$6d799170$486cb450$@roze.lv> > Otherwise why is my application running into such performance limits as mentioned in this question on stackoverflow https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number ? You are testing something (public third party dns servers) you have no control over (how can you be sure there are no rate limits?) with third party libraries without actually measuring/showing what takes up your time. 
That's not the best way to test low level things. I would suggest to at minimum do at least 'strace -c' to see what syscalls takes most of the time. But that's something out of scope of this mailing list. rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Sun Jan 9 01:36:36 2022 From: jamesread5737 at gmail.com (James Read) Date: Sun, 9 Jan 2022 01:36:36 +0000 Subject: Nginx performance data In-Reply-To: <000001d804f2$6d799170$486cb450$@roze.lv> References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> <004301d8047c$21402f60$63c08e20$@roze.lv> <000001d804f2$6d799170$486cb450$@roze.lv> Message-ID: On Sun, Jan 9, 2022 at 12:47 AM Reinis Rozitis wrote: > > Otherwise why is my application running into such performance limits as > mentioned in this question on stackoverflow > https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number > ? > > You are testing something (public third party dns servers) you have no > control over (how can you be sure there are no rate limits?) with third > party libraries without actually measuring/showing what takes up your time. > That's not the best way to test low level things. > I am satisfied with the speed of the DNS resolution. It's the http communication that I want to speed up. > > I would suggest to at minimum do at least 'strace -c' to see what syscalls > takes most of the time. > Nice tool. Never used it before. Thanks. James Read > > But that's something out of scope of this mailing list. > > rr > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From al-nginx at none.at Mon Jan 10 11:41:57 2022 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 10 Jan 2022 12:41:57 +0100 Subject: Nginx performance data In-Reply-To: References: Message-ID: On 07.01.22 14:13, Anoop Alias wrote: > https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ In addition please also take a look into this post. https://www.nginx.com/blog/thread-pools-boost-performance-9x/ Regards Alex > On Fri, Jan 7, 2022 at 6:33 PM James Read wrote: > > > > On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias wrote: > > This basically depends on your hardware and network speed etc > > Nginx is event-driven and does not fork a separate process for handling new connections which basically makes it different from Apache httpd > > > Just to be clear Nginx is entirely single threaded? > > James Read > > > On Wed, Jan 5, 2022 at 5:48 AM James Read wrote: > > Hi, > > I have some questions about Nginx performance. How many concurrent connections can Nginx handle? What throughput can Nginx achieve when serving a large number of small pages to a large number of clients (the maximum number supported)? How does Nginx achieve its performance? Is the epoll event loop all done in a single thread or are multiple threads used to split the work of serving so many different clients? 
> > thanks in advance > James Read _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From jamesread5737 at gmail.com Mon Jan 10 17:47:26 2022 From: jamesread5737 at gmail.com (James Read) Date: Mon, 10 Jan 2022 17:47:26 +0000 Subject: Nginx performance data In-Reply-To: References: Message-ID: On Mon, Jan 10, 2022 at 11:42 AM Aleksandar Lazic wrote: > On 07.01.22 14:13, Anoop Alias wrote: > > > https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ > > In addition please also take a look into this post. > https://www.nginx.com/blog/thread-pools-boost-performance-9x/ Thanks. I've been doing some preliminary experiments with PACKET_MMAP style communication. I'm able to max out the available bandwidth using this technique. Could Nginx be improved in a similar way? James Read > > > Regards > Alex > > > On Fri, Jan 7, 2022 at 6:33 PM James Read > wrote: > > > > > > > > On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias > wrote: > > > > This basically depends on your hardware and network speed etc > > > > Nginx is event-driven and does not fork a separate process for > handling new connections which basically makes it different from Apache > httpd > > > > > > Just to be clear Nginx is entirely single threaded? > > > > James Read > > > > > > On Wed, Jan 5, 2022 at 5:48 AM James Read < > jamesread5737 at gmail.com> wrote: > > > > Hi, > > > > I have some questions about Nginx performance. How many > concurrent connections can Nginx handle? What throughput can Nginx achieve > when serving a large number of small pages to a large number of clients > (the maximum number supported)? How does Nginx achieve its performance? Is > the epoll event loop all done in a single thread or are multiple threads > used to split the work of serving so many different clients? 
> > > > thanks in advance > > James Read > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From paul at stormy.ca Mon Jan 10 23:40:29 2022 From: paul at stormy.ca (Paul) Date: Mon, 10 Jan 2022 18:40:29 -0500 Subject: Nginx performance data In-Reply-To: References: Message-ID: <538411f0-fb29-b6b8-843a-3d5a6e99923a@stormy.ca> On 2022-01-10 12:47 p.m., James Read wrote: > I've been doing some preliminary experiments with PACKET_MMAP style > communication. With apologies for "snipping", and disclaimer that I am not an nginx developer, only a long term user. So, MMAP has given you "preliminary" analysis of what your kernel can do with your hardware. Would you care to share, in a meaningful manner, any results that you feel are relevant to any tcp processes - perhaps nginx in particular? > I'm able to max out the available bandwidth using this > technique. Available bandwidth? Please define. Is this local, or WAN? Are you on a 56k dial-up modem? or do you have multiple fail-over, load-balanced fibre connectivity? MMAP to the best of my knowledge, never claimed to be able to simulate live (live in the sense 'externally processed IP') tcp/http connections, so what "recognized benchmark" did you max out? Could Nginx be improved in a similar way? "improved"? From what and to what? Starting point? End-point? Similar to what "way"? You write (below) "a large number of small pages to a large number of clients..." Large number? 10 to what exponential? 
I've just looked at an nginx server that has dealt with ~88.3 GB/sec over the last few minutes, and cpu usage across 32 cores is bumbling along at less that 3%, temperatures barely 3 degrees above ambient, memcached transferring nothing to swap. Either you have badly explained what you are looking for, or, heaven forfend, you're trolling. Paul. Tired old sys-admin. > James Read > > > > Regards > Alex > > > On Fri, Jan 7, 2022 at 6:33 PM James Read > > wrote: > > > > > > > >     On Fri, Jan 7, 2022 at 11:56 AM Anoop Alias > > wrote: > > > >         This basically depends on your hardware and network speed etc > > > >         Nginx is event-driven and does not fork a > separate process for handling new connections which basically makes > it different from Apache httpd > > > > > >     Just to be clear Nginx is entirely single threaded? > > > >     James Read > > > > > >         On Wed, Jan 5, 2022 at 5:48 AM James Read > > wrote: > > > >             Hi, > > > >             I have some questions about Nginx performance. How > many concurrent connections can Nginx handle? What throughput can > Nginx achieve when serving a large number of small pages to a large > number of clients (the maximum number supported)? How does Nginx > achieve its performance? Is the epoll event loop all done in a > single thread or are multiple threads used to split the work of > serving so many different clients? 
> > > >             thanks in advance > >             James Read > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > \\\||// (@ @) ooO_(_)_Ooo__________________________________ |______|_____|_____|_____|_____|_____|_____|_____| |___|____|_____|_____|_____|_____|_____|_____|____| |_____|_____| mailto:paul at stormy.ca _|____|____| _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Tue Jan 11 00:28:44 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 11 Jan 2022 00:28:44 +0000 Subject: curious .conf problem In-Reply-To: <2141a0063b5556fb98989b51b1f166f9.NginxMailingListEnglish@forum.nginx.org> References: <20220107212939.GN12557@daoine.org> <2141a0063b5556fb98989b51b1f166f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20220111002844.GQ12557@daoine.org> On Sat, Jan 08, 2022 at 06:49:39PM -0500, David27 wrote: Hi there, > Hello Francis, thanks for the hints. It is working now. Good to hear you have something that works for you :-) > I will endeavor to provide a description that a reader can follow here. And thank you for this; it is very hard for someone who is *not* new to the application and the documentation, to know what a new person would do and where they would look. So you experience may help others, and may help the documentation improve. > 2. The browser navigation bar was prefixing www automatically the second > time a host was typed when the first time it had the www and second time it > didn't. Yes, common browsers tend to be "friendly", and hide lots of things for the typical reading session. 
For testing and checking what the server is actually returning, a client like "curl" is often more helpful. > 3. The nginx access log was not complaining, but the site was not coming up. For future testing -- it might be useful to use a different access_log in each server{} block; or maybe to add $server_name to the log_format, to make it clearer which server{} (and location{}) the request was finally handled in. > This is because by the rules it picked another server block with a > different name, and by some nginx logic that was OK. (Francis explained it > picked the first server block when there is no server name match for the > given port. nginx, for better or worse, tends to believe that the configuration it was given is the configuration that it was intended to be given. It's one of those "user-friendly; but picky about its friends" balances. > However, there was a catch all at the bottom of the conf file. > That too was copied ... hmm I suspect you're referring to server { server_name _; return 404; } That's not a catch-all. If you can point at any documentation that caused you to think that "server_name _;" means "catch-all server", that documentation should be corrected. A possible catch-all could be "server_name ~.;"; but the "proper" one is of the form "listen [ip:]port default_server;" > 4. when sites were accessed via the browser without stating an explicit > protocol, and no https protocol was specified in a server block, they > defaulted to http rather than defaulting to https, then redirected by the > server block for port 80. Hence some of the redirections on the original > config that I copied were actually untested. That's possibly "browser" vs "curl" again. Although I think both will assume a protocol; curl will only follow redirects if you tell it to, so it is easier to see what exact steps are taken. > 1. I assumed that the server block would apply independent of the port if > there was no listen port attributed to it.
That seemed to be implied by the > config file I had copied.. Francis points out here that is not the case, > rather if there is no listen attribute, the server block will apply only to > port 80. Strictly: 80 is started as root, and 8000 otherwise. But usually you will be starting as root. https://nginx.org/r/listen: Default: listen *:80 | *:8000; > 2. The proximate cause for all these problems: Certbot did not generate an > SSL certificate for a server block with a 'dot' prefix name even when it was > listening to 443. It didn't complain, it just didn't expand the > certificate. That seems like a curious thing to do. Maybe certbot "knows" that wildcard certificates are somehow special, and chooses not to try to create them automatically? It looks like you would have preferred for certbot to say which server_name values it was requesting certificate for, and which it was not? Or maybe the distinction between "wildcard" names and fixed names could have been clearer? > With the server names listed with explicit subdomains, instead of using a > dot prefix, certbot expanded the certificate to contain them. It only makes > sense that certbot can not issue a certificate against a wildcard subdomain, > though it would have been nice had it issued a warning. That's a possible useful feature for the certbot people to consider. Cheers, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Tue Jan 11 03:40:09 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 10 Jan 2022 22:40:09 -0500 Subject: Use prebuilt Bzip, zLib and OpenSSL? Message-ID: Hi Everyone, I need to build a modern Nginx from sources on an older platform. The need arises because the organization can't upgrade a particular set of machines. We want to set up a Nginx proxy to handle the front-end work. 
I've got Bzip, zLib and OpenSSL built and installed in /opt, but I am having trouble getting Nginx to compile and [presumably] link against them. In this case we don't want Nginx building them from sources. Rather, we want Nginx to use the includes in /opt/include, and the libraries in /opt/lib. Typically GNU software using Autotools use an option like --with-openssl-prefix. Nginx does not provide the option. Nginx also does not provide the customary --includedir and --libdir. (And I understand Nginx is not GNU). How do we tell Nginx the prefix path for the libraries? Thanks in advance. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Jan 11 05:15:51 2022 From: nginx-forum at forum.nginx.org (remdex) Date: Tue, 11 Jan 2022 00:15:51 -0500 Subject: unknown directive "js_include" after update to Message-ID: <0ccaa675d8d90bc84391486148905419.NginxMailingListEnglish@forum.nginx.org> Before updating RPM packages ``` nginx-module-perl-1.20.2-1.el7.ngx.x86_64 nginx-1.20.2-1.el7.ngx.x86_64 nginx-module-xslt-1.20.2-1.el7.ngx.x86_64 nginx-filesystem-1.20.1-9.el7.noarch nginx-module-njs-1.20.2+0.7.0-1.el7.ngx.x86_64 nginx-module-geoip-1.20.2-1.el7.ngx.x86_64 nginx-module-image-filter-1.20.2-1.el7.ngx.x86_64 ``` After updating RPM packages ``` nginx-module-perl-1.20.2-1.el7.ngx.x86_64 nginx-1.20.2-1.el7.ngx.x86_64 nginx-module-xslt-1.20.2-1.el7.ngx.x86_64 nginx-filesystem-1.20.1-9.el7.noarch nginx-module-geoip-1.20.2-1.el7.ngx.x86_64 nginx-module-njs-1.20.2+0.7.1-1.el7.ngx.x86_64 nginx-module-image-filter-1.20.2-1.el7.ngx.x86_64 ``` After yum update is executed `nginx-module-njs` package was updated and now nginx throws `unknown directive "js_include"` although with the previous version (nginx-module-njs-1.20.2+0.7.0-1.el7.ngx.x86_64) it worked Anyone knows why updating to `nginx-module-njs-1.20.2+0.7.1-1.el7.ngx.x86_64` from 
`nginx-module-njs-1.20.2+0.7.0-1.el7.ngx.x86_64` starts throwing that error. Is this a bug in official repo (http://nginx.org/packages/centos/$releasever/$basearch/) ? CentOS Linux release 7.9.2009 (Core) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293320,293320#msg-293320 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Jan 11 05:22:07 2022 From: nginx-forum at forum.nginx.org (remdex) Date: Tue, 11 Jan 2022 00:22:07 -0500 Subject: unknown directive "js_include" after update to In-Reply-To: <0ccaa675d8d90bc84391486148905419.NginxMailingListEnglish@forum.nginx.org> References: <0ccaa675d8d90bc84391486148905419.NginxMailingListEnglish@forum.nginx.org> Message-ID: <801e02f10d9d15dda3a845b2c6ddb4f3.NginxMailingListEnglish@forum.nginx.org> I know what happened. js_include was deprecated and now is removed... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293320,293321#msg-293321 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Tue Jan 11 13:27:29 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Jan 2022 16:27:29 +0300 Subject: Use prebuilt Bzip, zLib and OpenSSL? In-Reply-To: References: Message-ID: Hello! On Mon, Jan 10, 2022 at 10:40:09PM -0500, Jeffrey Walton wrote: > I need to build a modern Nginx from sources on an older platform. The > need arises because the organization can't upgrade a particular set of > machines. We want to set up a Nginx proxy to handle the front-end > work. > > I've got Bzip, zLib and OpenSSL built and installed in /opt, but I am > having trouble getting Nginx to compile and [presumably] link against > them. In this case we don't want Nginx building them from sources. > Rather, we want Nginx to use the includes in /opt/include, and the > libraries in /opt/lib. 
> > Typically GNU software using Autotools use an option like > --with-openssl-prefix. Nginx does not provide the option. Nginx also > does not provide the customary --includedir and --libdir. (And I > understand Nginx is not GNU). > > How do we tell Nginx the prefix path for the libraries? The --with-cc-opt and --with-ld-opt should work: ./configure --with-cc-opt="-I /opt/include" --with-ld-opt="-L /opt/lib" ... See docs for details: http://nginx.org/en/docs/configure.html -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Tue Jan 11 20:45:06 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 11 Jan 2022 15:45:06 -0500 Subject: Use prebuilt Bzip, zLib and OpenSSL? In-Reply-To: References: Message-ID: On Tue, Jan 11, 2022 at 8:27 AM Maxim Dounin wrote: > > Hello! > > On Mon, Jan 10, 2022 at 10:40:09PM -0500, Jeffrey Walton wrote: > > > I need to build a modern Nginx from sources on an older platform. The > > need arises because the organization can't upgrade a particular set of > > machines. We want to set up a Nginx proxy to handle the front-end > > work. > > > > I've got Bzip, zLib and OpenSSL built and installed in /opt, but I am > > having trouble getting Nginx to compile and [presumably] link against > > them. In this case we don't want Nginx building them from sources. > > Rather, we want Nginx to use the includes in /opt/include, and the > > libraries in /opt/lib. > > > > Typically GNU software using Autotools use an option like > > --with-openssl-prefix. Nginx does not provide the option. Nginx also > > does not provide the customary --includedir and --libdir. (And I > > understand Nginx is not GNU). > > > > How do we tell Nginx the prefix path for the libraries? > > The --with-cc-opt and --with-ld-opt should work: > > ./configure --with-cc-opt="-I /opt/include" --with-ld-opt="-L /opt/lib" ... 
> > See docs for details: > > http://nginx.org/en/docs/configure.html Ok, I think I see what the problem is... I used --with-cc-opt and --with-ld-opt, but configure still failed: ... checking for openat(), fstatat() ... found checking for getaddrinfo() ... found checking for PCRE library ... not found checking for PCRE library in /usr/local/ ... not found checking for PCRE library in /usr/include/pcre/ ... not found checking for PCRE library in /usr/pkg/ ... not found checking for PCRE library in /opt/local/ ... not found I'm using PCRE2, not PCRE: $ ls $HOME/tmp/include autosprintf.h idn2.h pcre2.h unigbrk.h uniwbrk.h bzlib.h libcharset.h pcre2posix.h unilbrk.h uniwidth.h db_cxx.h libxml2 readline uniname.h zconf.h db.h localcharset.h textstyle uninorm.h zlib.h dbm.h ncurses textstyle.h unistdio.h gdbm.h ncursesw unicase.h unistr.h gettext-po.h ndbm.h uniconv.h unistring iconv.h openssl unictype.h unitypes.h PCRE was end-of-life last year. PCRE also has several open bugs against it. I tried to get the maintainer to take patches for PCRE and release one more version but he did not. There's no reason to use PCRE nowadays since PCRE is unmaintained and PCRE2 is available. So I guess the question now is, can I tell Nginx to use PCRE2? Jeff _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From teward at thomas-ward.net Tue Jan 11 21:02:26 2022 From: teward at thomas-ward.net (Thomas Ward) Date: Tue, 11 Jan 2022 16:02:26 -0500 Subject: Use prebuilt Bzip, zLib and OpenSSL? In-Reply-To: References: Message-ID: <3cd83751-a49b-2346-e3da-712e8513a7e0@thomas-ward.net> Which NGINX are you attempting to compile? Last I checked only the 1.21.x branch (Mainline) has support for PCRE2, unless I missed a stable branch release note... Thomas On 1/11/22 15:45, Jeffrey Walton wrote: > On Tue, Jan 11, 2022 at 8:27 AM Maxim Dounin wrote: >> Hello!
>> >> On Mon, Jan 10, 2022 at 10:40:09PM -0500, Jeffrey Walton wrote: >> >>> I need to build a modern Nginx from sources on an older platform. The >>> need arises because the organization can't upgrade a particular set of >>> machines. We want to set up a Nginx proxy to handle the front-end >>> work. >>> >>> I've got Bzip, zLib and OpenSSL built and installed in /opt, but I am >>> having trouble getting Nginx to compile and [presumably] link against >>> them. In this case we don't want Nginx building them from sources. >>> Rather, we want Nginx to use the includes in /opt/include, and the >>> libraries in /opt/lib. >>> >>> Typically GNU software using Autotools use an option like >>> --with-openssl-prefix. Nginx does not provide the option. Nginx also >>> does not provide the customary --includedir and --libdir. (And I >>> understand Nginx is not GNU). >>> >>> How do we tell Nginx the prefix path for the libraries? >> The --with-cc-opt and --with-ld-opt should work: >> >> ./configure --with-cc-opt="-I /opt/include" --with-ld-opt="-L /opt/lib" ... >> >> See docs for details: >> >> http://nginx.org/en/docs/configure.html > Ok, I think I see what the problem is... I used --with-cc-opt and > --with-ld-opt, but configure still failed: > > ... > checking for openat(), fstatat() ... found > checking for getaddrinfo() ... found > checking for PCRE library ... not found > checking for PCRE library in /usr/local/ ... not found > checking for PCRE library in /usr/include/pcre/ ... not found > checking for PCRE library in /usr/pkg/ ... not found > checking for PCRE library in /opt/local/ ... 
not found > > I'm using PCRE2, not PCRE: > > $ ls $HOME/tmp/include > autosprintf.h idn2.h pcre2.h unigbrk.h uniwbrk.h > bzlib.h libcharset.h pcre2posix.h unilbrk.h uniwidth.h > db_cxx.h libxml2 readline uniname.h zconf.h > db.h localcharset.h textstyle uninorm.h zlib.h > dbm.h ncurses textstyle.h unistdio.h > gdbm.h ncursesw unicase.h unistr.h > gettext-po.h ndbm.h uniconv.h unistring > iconv.h openssl unictype.h unitypes.h > > PCRE was end-of-life last year. PCRE also has several open bugs > against it. I tried to get the maintainer to take patches for PCRE and > release one more version but he did not. There's no reason to use PCRE > nowadays since PCRE is unmaintained and PCRE2 is available. > > So I guess the question now is, can I tell Nginx to use PCRE2? > > Jeff > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Tue Jan 11 21:11:18 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 11 Jan 2022 16:11:18 -0500 Subject: Use prebuilt Bzip, zLib and OpenSSL? In-Reply-To: <3cd83751-a49b-2346-e3da-712e8513a7e0@thomas-ward.net> References: <3cd83751-a49b-2346-e3da-712e8513a7e0@thomas-ward.net> Message-ID: On Tue, Jan 11, 2022 at 4:02 PM Thomas Ward wrote: > > Which NGINX are you attempting to compile? Last I checked only the 1.21.x branch (Mainline) has support for PRCE2, unless I missed a stable branch release note... > Thanks Thomas. Nginx version is 1.20.2. I guess I'll wait for 1.21 to become a stable release. Thanks. 
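For readers following along, Maxim's --with-cc-opt/--with-ld-opt suggestion spelled out as a complete invocation looks roughly like this. This is only a sketch: the /opt prefix, the module choice, and the added rpath are illustrative assumptions, not taken from the thread.

```shell
# Build nginx against prebuilt zlib/OpenSSL installed under /opt,
# instead of letting configure build bundled copies from source.
./configure \
    --prefix=/usr/local/nginx \
    --with-http_ssl_module \
    --with-cc-opt="-I /opt/include" \
    --with-ld-opt="-L /opt/lib -Wl,-rpath,/opt/lib"
```

These flags only tell the toolchain where to look; configure still probes for each library, so a failed "checking for ..." line means the headers or libraries are not actually visible under those paths. (For PCRE specifically, --with-pcre=/path/to/pcre-source builds it from an unpacked source tree instead.)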
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Tue Jan 11 21:26:30 2022 From: noloader at gmail.com (Jeffrey Walton) Date: Tue, 11 Jan 2022 16:26:30 -0500 Subject: Use prebuilt Bzip, zLib and OpenSSL? In-Reply-To: References: Message-ID: On Mon, Jan 10, 2022 at 10:40 PM Jeffrey Walton wrote: > > I need to build a modern Nginx from sources on an older platform. The > need arises because the organization can't upgrade a particular set of > machines. We want to set up a Nginx proxy to handle the front-end > work. Thanks Maxim and Thomas. I dropped back to PCRE and Nginx builds fine. If interested, I got Nginx 1.20.2 running on Ubuntu 8 circa 2009. We are going to use it as a proxy for a similarly antique machine running WebSphere with Java 1.5. Nginx should help clear a lot of TLS related findings. Thanks again. Jeff _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Jan 12 02:49:48 2022 From: nginx-forum at forum.nginx.org (David27) Date: Tue, 11 Jan 2022 21:49:48 -0500 Subject: curious .conf problem In-Reply-To: <20220111002844.GQ12557@daoine.org> References: <20220111002844.GQ12557@daoine.org> Message-ID: <328d829a0a6dab99260fdf7eab9edc7d.NginxMailingListEnglish@forum.nginx.org> >> 2. The proximate cause for all these problems: Certbot did not generate an >> SSL certificate for a server block with a 'dot' prefix name even when it was >> listening to 443. It didn't complain, it just didn't expand the >> certificate. > > That seems like a curious thing to do. > > Maybe certbot "knows" that wildcard certificates are somehow special, > and chooses not to try to create them automatically? It looks like you > would have preferred for certbot to say which server_name values it was > requesting certificate for, and which it was not? 
Or maybe the distinction > between "wildcard" names and fixed names could have been clearer? There were quite a few layers of stuff here to unwrap, but yes, at the bottom this came down to certbot behavior. certbot saw the wild card subdomain and then did nothing. Yeah, I wish the combination of listening on 443 and doing nothing had printed a message. Still, a more experienced person would have recognized that certbot had not added any new certificates. I was not sure that the subdomain even mattered when making an SSL certificate, so when I saw the .domain.com (dot in the front) I had assumed that it would make a certificate for the domain that applied to all subdomains. I now gather that this not a reasonable expectation. I gather that each subdomain gets its own certificate. As each subdomin gets its own certificate, of course certbot could not do anything with the wild card sub domain. It would have been nice if it had complained. Well as we are talking wish lists, I wish that nginx followed a generic unification algorithm without ad hoc recovery rules. I.e. if there was no block that unified, that it had an error rather than then applying an ad hoc rule. But nginx is what it is. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293269,293332#msg-293332 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Jan 12 17:44:34 2022 From: nginx-forum at forum.nginx.org (podzol33) Date: Wed, 12 Jan 2022 12:44:34 -0500 Subject: =?UTF-8?Q?Variable=20=E2=80=9Cset=E2=80=9D=20in=20map=20with=20an=20He?= =?UTF-8?Q?ader=20Value=20was=20lost=20when=20we=20delete=20the=20heade?= =?UTF-8?Q?r.?= Message-ID: Hello, We have an responseHeader with technical information sent by the upstream server tomcat. We want to log this information in nginx and delete the header to avoid to be visible in the response Header to the client. 
log_format formatjson escape=json '{ '"tomcat_container_id": "$TOMCAT_CONTAINER_ID" }'; Nginx.conf in http { map $sent_http_Container_Id $TOMCAT_CONTAINER_ID { default $sent_http_Container_Id; } more_clear_headers 'Container-Id'; When I do this, my log tomcat_container_id is empty. If I comment the more_clear_header command line, I have my log fill with the right value but the header is also sent to the client. So I don’t understand why my $TOMCAT_CONTAINER_ID Is clear when I delete the header and not clear if I don’t. Thanks for your help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293336,293336#msg-293336 From r at roze.lv Wed Jan 12 18:28:37 2022 From: r at roze.lv (Reinis Rozitis) Date: Wed, 12 Jan 2022 20:28:37 +0200 Subject: =?UTF-8?Q?RE:_Variable_=E2=80=9Cset=E2=80=9D_in_map_with_a?= =?UTF-8?Q?n_Header_Value_was_lost_when_we_?= =?UTF-8?Q?delete_the_header.?= In-Reply-To: References: Message-ID: <000201d807e2$374fa740$a5eef5c0$@roze.lv> > We have an responseHeader with technical information sent by the upstream server tomcat. > We want to log this information in nginx and delete the header to avoid to be visible in the response Header to the client. > > more_clear_headers 'Container-Id'; This seems to be a third-party module.. Maybe more simple approach would be just to use the inbuilt proxy module directive proxy_hide_header ? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header rr From mdounin at mdounin.ru Wed Jan 12 18:36:58 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Jan 2022 21:36:58 +0300 Subject: Variable =?utf-8?B?4oCcc2V04oCdIGk=?= =?utf-8?Q?n?= map with an Header Value was lost when we delete the header. In-Reply-To: References: Message-ID: Hello! On Wed, Jan 12, 2022 at 12:44:34PM -0500, podzol33 wrote: > We have an responseHeader with technical information sent by the upstream > server tomcat. 
> We want to log this information in nginx and delete the header to avoid to > be visible in the response Header to the client. > > log_format formatjson escape=json '{ > '"tomcat_container_id": "$TOMCAT_CONTAINER_ID" }'; > > Nginx.conf in http { > map $sent_http_Container_Id $TOMCAT_CONTAINER_ID { > default $sent_http_Container_Id; > } > more_clear_headers 'Container-Id'; > > When I do this, my log tomcat_container_id is empty. > If I comment the more_clear_header command line, I have my log fill with the > right value but the header is also sent to the client. > So I don’t understand why my $TOMCAT_CONTAINER_ID Is clear when I delete the > header and not clear if I don’t. Maps are evaluated when you access resulting variable, so it is expected that the map result will be empty if there are no $sent_http_container_id during map evaluation, that is, at logging. Overall, your construct is no different from using $sent_http_container_id in the log format directly. As far as I understand what you are trying to do, proper solution would be to void both 3rd-party more_clear_headers and map, and instead use proxy_hide_header[1] to hide the header, and use the $upstream_http_container_id[2] variable for logging the original upstream response header. Something like this: log_format formatjson escape=json '{ '"tomcat_container_id": "$upstream_http_container_id" }'; proxy_hide_header Container-Id; Hope this helps. [1] http://nginx.org/r/proxy_hide_header [2] http://nginx.org/r/$upstream_http_ -- Maxim Dounin http://mdounin.ru/ From fusca14 at gmail.com Wed Jan 12 20:00:35 2022 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Wed, 12 Jan 2022 17:00:35 -0300 Subject: NGINX Plus trial and HA with keepalived in active-active mode Message-ID: Hi... 
I'm trying to configure keepalived in active-active mode, using 2 nodes, based on the official documentation https://docs.nginx.com/nginx/admin-guide/high-availability/ha-keepalived-nodes/, but the environment became unstable. The TCP connection is closing all the time and I don't know what's wrong with my keeepalived config. I'm using RHEL 8.5 with SELinux enabled and enforcing. The configuration of my first NGINX Plus node: global_defs { vrrp_version 3 router_id nginx_prod1 } vrrp_script chk_manual_failover { script "/usr/libexec/keepalived/nginx-ha-manual-failover" interval 10 weight 50 } vrrp_script chk_nginx_service { script "/usr/libexec/keepalived/nginx-ha-check" interval 3 weight 50 } vrrp_instance VI_1 { state MASTER interface eth0 priority 99 virtual_router_id 31 advert_int 1 accept garp_master_refresh 5 garp_master_refresh_repeat 1 unicast_src_ip x.y.z.48/26 unicast_peer { x.y.z.50 } virtual_ipaddress { x.y.z.49/26 brd x.y.z.63 dev eth0 } track_script { chk_nginx_service chk_manual_failover } notify "/usr/libexec/keepalived/nginx-ha-notify" } vrrp_instance VI_2 { state BACKUP interface eth0 priority 98 virtual_router_id 41 advert_int 1 accept garp_master_refresh 5 garp_master_refresh_repeat 1 unicast_src_ip x.y.z.48/26 unicast_peer { x.y.z.50 } virtual_ipaddress { x.y.z.51/26 brd x.y.z.63 dev eth0 } track_script { chk_nginx_service chk_manual_failover } notify "/usr/libexec/keepalived/nginx-ha-notify" } And the "ip a" config: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff inet x.y.z.48/26 brd x.y.z.63 scope global noprefixroute eth0 valid_lft forever preferred_lft forever inet x.y.z.49/26 brd x.y.z.63 scope global secondary eth0 valid_lft forever preferred_lft forever inet x.y.z.51/26 brd x.y.z.63 scope global secondary eth0 valid_lft forever preferred_lft forever The configuration of my second NGINX Plus node: global_defs { vrrp_version 3 router_id nginx_prod2 } vrrp_script 
chk_manual_failover { script "/usr/libexec/keepalived/nginx-ha-manual-failover" interval 10 weight 50 } vrrp_script chk_nginx_service { script "/usr/libexec/keepalived/nginx-ha-check" interval 3 weight 50 } vrrp_instance VI_1 { state MASTER interface eth0 priority 101 virtual_router_id 51 advert_int 1 accept garp_master_refresh 5 garp_master_refresh_repeat 1 unicast_src_ip x.y.z.50/26 unicast_peer { x.y.z.48 } virtual_ipaddress { x.y.z.51/26 brd x.y.z.63 dev eth0 } track_script { chk_nginx_service chk_manual_failover } notify "/usr/libexec/keepalived/nginx-ha-notify" } vrrp_instance VI_2 { state BACKUP interface eth0 priority 100 virtual_router_id 61 advert_int 1 accept garp_master_refresh 5 garp_master_refresh_repeat 1 unicast_src_ip x.y.z.50/26 unicast_peer { x.y.z.48 } virtual_ipaddress { x.y.z.49/26 brd x.y.z.63 dev eth0 } track_script { chk_nginx_service chk_manual_failover } notify "/usr/libexec/keepalived/nginx-ha-notify" } And the "ip a" config: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff inet x.y.z.50/26 brd x.y.z.63 scope global noprefixroute eth0 valid_lft forever preferred_lft forever inet x.y.z.51/26 brd x.y.z.63 scope global secondary eth0 valid_lft forever preferred_lft forever inet x.y.z.49/26 brd x.y.z.63 scope global secondary eth0 valid_lft forever preferred_lft forever What am I doing wrong? Thanks in advance! Fabiano From osa at freebsd.org.ru Wed Jan 12 20:46:34 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Wed, 12 Jan 2022 23:46:34 +0300 Subject: NGINX Plus trial and HA with keepalived in active-active mode In-Reply-To: References: Message-ID: Hi Fabiano, hope you're doing well these days. This mailing list is focused on NGINX OSS distribution. For commercial support I'd recommend to raise a support ticket on MyF5 portal, https://my.f5.com/ Thank you. -- Sergey A. Osokin On Wed, Jan 12, 2022 at 05:00:35PM -0300, Fabiano Furtado Pessoa Coelho wrote: > Hi... 
> > I'm trying to configure keepalived in active-active mode, using 2 > nodes, based on the official documentation > https://docs.nginx.com/nginx/admin-guide/high-availability/ha-keepalived-nodes/, > but the environment became unstable. The TCP connection is closing all > the time and I don't know what's wrong with my keeepalived config. > > I'm using RHEL 8.5 with SELinux enabled and enforcing. > > The configuration of my first NGINX Plus node: > > global_defs { > vrrp_version 3 > router_id nginx_prod1 > } > > vrrp_script chk_manual_failover { > script "/usr/libexec/keepalived/nginx-ha-manual-failover" > interval 10 > weight 50 > } > > vrrp_script chk_nginx_service { > script "/usr/libexec/keepalived/nginx-ha-check" > interval 3 > weight 50 > } > > vrrp_instance VI_1 { > state MASTER > interface eth0 > priority 99 > virtual_router_id 31 > advert_int 1 > accept > garp_master_refresh 5 > garp_master_refresh_repeat 1 > unicast_src_ip x.y.z.48/26 > unicast_peer { > x.y.z.50 > } > virtual_ipaddress { > x.y.z.49/26 brd x.y.z.63 dev eth0 > } > track_script { > chk_nginx_service > chk_manual_failover > } > notify "/usr/libexec/keepalived/nginx-ha-notify" > } > > vrrp_instance VI_2 { > state BACKUP > interface eth0 > priority 98 > virtual_router_id 41 > advert_int 1 > accept > garp_master_refresh 5 > garp_master_refresh_repeat 1 > unicast_src_ip x.y.z.48/26 > unicast_peer { > x.y.z.50 > } > virtual_ipaddress { > x.y.z.51/26 brd x.y.z.63 dev eth0 > } > track_script { > chk_nginx_service > chk_manual_failover > } > notify "/usr/libexec/keepalived/nginx-ha-notify" > } > > And the "ip a" config: > eth0: mtu 1500 qdisc fq_codel > state UP group default qlen 1000 > link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff > inet x.y.z.48/26 brd x.y.z.63 scope global noprefixroute eth0 > valid_lft forever preferred_lft forever > inet x.y.z.49/26 brd x.y.z.63 scope global secondary eth0 > valid_lft forever preferred_lft forever > inet x.y.z.51/26 brd x.y.z.63 scope global secondary eth0 
> valid_lft forever preferred_lft forever > > > > The configuration of my second NGINX Plus node: > > global_defs { > vrrp_version 3 > router_id nginx_prod2 > } > > vrrp_script chk_manual_failover { > script "/usr/libexec/keepalived/nginx-ha-manual-failover" > interval 10 > weight 50 > } > > vrrp_script chk_nginx_service { > script "/usr/libexec/keepalived/nginx-ha-check" > interval 3 > weight 50 > } > > vrrp_instance VI_1 { > state MASTER > interface eth0 > priority 101 > virtual_router_id 51 > advert_int 1 > accept > garp_master_refresh 5 > garp_master_refresh_repeat 1 > unicast_src_ip x.y.z.50/26 > unicast_peer { > x.y.z.48 > } > virtual_ipaddress { > x.y.z.51/26 brd x.y.z.63 dev eth0 > } > track_script { > chk_nginx_service > chk_manual_failover > } > notify "/usr/libexec/keepalived/nginx-ha-notify" > } > > vrrp_instance VI_2 { > state BACKUP > interface eth0 > priority 100 > virtual_router_id 61 > advert_int 1 > accept > garp_master_refresh 5 > garp_master_refresh_repeat 1 > unicast_src_ip x.y.z.50/26 > unicast_peer { > x.y.z.48 > } > virtual_ipaddress { > x.y.z.49/26 brd x.y.z.63 dev eth0 > } > track_script { > chk_nginx_service > chk_manual_failover > } > notify "/usr/libexec/keepalived/nginx-ha-notify" > } > > And the "ip a" config: > eth0: mtu 1500 qdisc fq_codel > state UP group default qlen 1000 > link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff > inet x.y.z.50/26 brd x.y.z.63 scope global noprefixroute eth0 > valid_lft forever preferred_lft forever > inet x.y.z.51/26 brd x.y.z.63 scope global secondary eth0 > valid_lft forever preferred_lft forever > inet x.y.z.49/26 brd x.y.z.63 scope global secondary eth0 > valid_lft forever preferred_lft forever > > What am I doing wrong? > Thanks in advance! 
> > Fabiano > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From fusca14 at gmail.com Wed Jan 12 20:57:51 2022 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Wed, 12 Jan 2022 17:57:51 -0300 Subject: NGINX Plus trial and HA with keepalived in active-active mode In-Reply-To: References: Message-ID: Oh.. sorry to bother this mailing list! I will check this portal out. Thanks. On Wed, Jan 12, 2022 at 5:48 PM Sergey A. Osokin wrote: > > Hi Fabiano, > > hope you're doing well these days. > > This mailing list is focused on NGINX OSS distribution. > > For commercial support I'd recommend to raise a support ticket on MyF5 portal, > https://my.f5.com/ > > Thank you. > > -- > Sergey A. Osokin > > On Wed, Jan 12, 2022 at 05:00:35PM -0300, Fabiano Furtado Pessoa Coelho wrote: > > Hi... > > > > I'm trying to configure keepalived in active-active mode, using 2 > > nodes, based on the official documentation > > https://docs.nginx.com/nginx/admin-guide/high-availability/ha-keepalived-nodes/, > > but the environment became unstable. The TCP connection is closing all > > the time and I don't know what's wrong with my keeepalived config. > > > > I'm using RHEL 8.5 with SELinux enabled and enforcing. 
> > > > The configuration of my first NGINX Plus node: > > > > global_defs { > > vrrp_version 3 > > router_id nginx_prod1 > > } > > > > vrrp_script chk_manual_failover { > > script "/usr/libexec/keepalived/nginx-ha-manual-failover" > > interval 10 > > weight 50 > > } > > > > vrrp_script chk_nginx_service { > > script "/usr/libexec/keepalived/nginx-ha-check" > > interval 3 > > weight 50 > > } > > > > vrrp_instance VI_1 { > > state MASTER > > interface eth0 > > priority 99 > > virtual_router_id 31 > > advert_int 1 > > accept > > garp_master_refresh 5 > > garp_master_refresh_repeat 1 > > unicast_src_ip x.y.z.48/26 > > unicast_peer { > > x.y.z.50 > > } > > virtual_ipaddress { > > x.y.z.49/26 brd x.y.z.63 dev eth0 > > } > > track_script { > > chk_nginx_service > > chk_manual_failover > > } > > notify "/usr/libexec/keepalived/nginx-ha-notify" > > } > > > > vrrp_instance VI_2 { > > state BACKUP > > interface eth0 > > priority 98 > > virtual_router_id 41 > > advert_int 1 > > accept > > garp_master_refresh 5 > > garp_master_refresh_repeat 1 > > unicast_src_ip x.y.z.48/26 > > unicast_peer { > > x.y.z.50 > > } > > virtual_ipaddress { > > x.y.z.51/26 brd x.y.z.63 dev eth0 > > } > > track_script { > > chk_nginx_service > > chk_manual_failover > > } > > notify "/usr/libexec/keepalived/nginx-ha-notify" > > } > > > > And the "ip a" config: > > eth0: mtu 1500 qdisc fq_codel > > state UP group default qlen 1000 > > link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff > > inet x.y.z.48/26 brd x.y.z.63 scope global noprefixroute eth0 > > valid_lft forever preferred_lft forever > > inet x.y.z.49/26 brd x.y.z.63 scope global secondary eth0 > > valid_lft forever preferred_lft forever > > inet x.y.z.51/26 brd x.y.z.63 scope global secondary eth0 > > valid_lft forever preferred_lft forever > > > > > > > > The configuration of my second NGINX Plus node: > > > > global_defs { > > vrrp_version 3 > > router_id nginx_prod2 > > } > > > > vrrp_script chk_manual_failover { > > script 
"/usr/libexec/keepalived/nginx-ha-manual-failover" > > interval 10 > > weight 50 > > } > > > > vrrp_script chk_nginx_service { > > script "/usr/libexec/keepalived/nginx-ha-check" > > interval 3 > > weight 50 > > } > > > > vrrp_instance VI_1 { > > state MASTER > > interface eth0 > > priority 101 > > virtual_router_id 51 > > advert_int 1 > > accept > > garp_master_refresh 5 > > garp_master_refresh_repeat 1 > > unicast_src_ip x.y.z.50/26 > > unicast_peer { > > x.y.z.48 > > } > > virtual_ipaddress { > > x.y.z.51/26 brd x.y.z.63 dev eth0 > > } > > track_script { > > chk_nginx_service > > chk_manual_failover > > } > > notify "/usr/libexec/keepalived/nginx-ha-notify" > > } > > > > vrrp_instance VI_2 { > > state BACKUP > > interface eth0 > > priority 100 > > virtual_router_id 61 > > advert_int 1 > > accept > > garp_master_refresh 5 > > garp_master_refresh_repeat 1 > > unicast_src_ip x.y.z.50/26 > > unicast_peer { > > x.y.z.48 > > } > > virtual_ipaddress { > > x.y.z.49/26 brd x.y.z.63 dev eth0 > > } > > track_script { > > chk_nginx_service > > chk_manual_failover > > } > > notify "/usr/libexec/keepalived/nginx-ha-notify" > > } > > > > And the "ip a" config: > > eth0: mtu 1500 qdisc fq_codel > > state UP group default qlen 1000 > > link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff > > inet x.y.z.50/26 brd x.y.z.63 scope global noprefixroute eth0 > > valid_lft forever preferred_lft forever > > inet x.y.z.51/26 brd x.y.z.63 scope global secondary eth0 > > valid_lft forever preferred_lft forever > > inet x.y.z.49/26 brd x.y.z.63 scope global secondary eth0 > > valid_lft forever preferred_lft forever > > > > What am I doing wrong? > > Thanks in advance! 
> > > > Fabiano > > _______________________________________________ > > nginx mailing list -- nginx at nginx.org > > To unsubscribe send an email to nginx-leave at nginx.org > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org From sb at nginx.com Thu Jan 13 11:52:32 2022 From: sb at nginx.com (Sergey Budnevitch) Date: Thu, 13 Jan 2022 14:52:32 +0300 Subject: Mailing list migration to mailman3 Message-ID: <8BED24A3-FAD9-42E6-9DE8-55A1FF1C2A3F@nginx.com> Hello, As you may noticed already mailing list was migrated to mailman3. It differs significantly from mailman2 we used previously. Please pay attention to a few noticeable changes: * Mailman3 does not add X-BeenThere header to the outbound emails anymore. If you used this header for your filters, you should switch to the List-Id header (or List-Post header). * Old archives are available on https://mailman.nginx.org/pipermail/, New archives started on Jan 1 2020 and could be found on https://mailman.nginx.org/mailman3/lists/ * mail interface for subscribing/unsubscribing works as before, but web interface and authorisation have changed. To get access to the web interface you need to "sign up" with your email address, "reset password" will not work as technically there is no web user yet. 
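For anyone whose filters relied on the removed X-BeenThere header, the switch to List-Id can be sketched as below (a minimal Python illustration; the exact list-id value "nginx.nginx.org" is an assumption, not taken from the announcement — check the headers of an actual message from the list):

```python
from email.message import EmailMessage

def matches_list(msg, list_id="nginx.nginx.org"):
    # Mailman3 no longer adds X-BeenThere, so match on List-Id instead.
    # A List-Id header looks like: nginx mailing list <nginx.nginx.org>
    header = str(msg.get("List-Id") or "")
    return f"<{list_id}>" in header

# A message as delivered by the list (headers abbreviated):
msg = EmailMessage()
msg["List-Id"] = "nginx mailing list <nginx.nginx.org>"
msg["Subject"] = "Re: Setting up a webDAV server"
```

The same idea carries over to procmail or Sieve rules: key the filter on List-Id (or List-Post) rather than X-BeenThere.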
From jamesread5737 at gmail.com Fri Jan 14 00:47:04 2022
From: jamesread5737 at gmail.com (James Read)
Date: Fri, 14 Jan 2022 00:47:04 +0000
Subject: Nginx performance data
In-Reply-To: <000001d804f2$6d799170$486cb450$@roze.lv>
References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> <004301d8047c$21402f60$63c08e20$@roze.lv> <000001d804f2$6d799170$486cb450$@roze.lv>
Message-ID: 

On Sun, Jan 9, 2022 at 12:47 AM Reinis Rozitis wrote:
> > Otherwise why is my application running into such performance limits as > mentioned in this question on stackoverflow > https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number > ?
>
> You are testing something (public third party dns servers) you have no > control over (how can you be sure there are no rate limits?) with third > party libraries without actually measuring/showing what takes up your time. > That's not the best way to test low level things.

I just succeeded in modifying the wrk source code so that it connects to multiple servers rather than just the one. I can confirm that it gets the same throughput as with one host. So I take back my comments about one host simulating many clients being different from an actual test with many clients. There must be a problem with my epoll implementation, because the modified wrk codebase works just fine.

James Read

> I would suggest to at minimum do at least 'strace -c' to see what syscalls > takes most of the time.
>
> But that's something out of scope of this mailing list.
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Fri Jan 14 15:10:14 2022
From: nginx-forum at forum.nginx.org (podzol33)
Date: Fri, 14 Jan 2022 10:10:14 -0500
Subject: Re: Variable “set” in map with an Header Value was lost when we delete the header.
In-Reply-To: References: Message-ID: <2c2c4a2ca9d3f81c53f11b09d13902ba.NginxMailingListEnglish@forum.nginx.org>

Thank you very much for your response. In fact, it works with

log_format formatjson escape=json '{ '"tomcat_container_id": "$upstream_http_container_id" }';
proxy_hide_header Container-Id;

defined in my vhost configuration. I have the log AND the header deleted in the browser client.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293336,293359#msg-293359

From nginx-forum at forum.nginx.org Wed Jan 19 02:19:09 2022
From: nginx-forum at forum.nginx.org (yugo-horie)
Date: Tue, 18 Jan 2022 21:19:09 -0500
Subject: Priority of cache-control, expires, x-accel-expires
In-Reply-To: <20111205092958.GK67687@mdounin.ru>
References: <20111205092958.GK67687@mdounin.ru>
Message-ID: 

Excuse me for referring to a quite old issue: we found different behavior depending on header order in the case where X-Accel-Expires is not 0 and Cache-Control has any of no-store, no-cache or private. It is very easy to reproduce.

ngx_http_upstream.c has two functions, ngx_http_upstream_process_cache_control and ngx_http_upstream_process_accel_expires, which handle these cache-related headers: Cache-Control is processed in ngx_http_upstream_process_cache_control, X-Accel-Expires in ngx_http_upstream_process_accel_expires.
ngx_http_upstream_process_cache_control in ngx_http_upstream.c:

4738     if (r->cache->valid_sec != 0 && u->headers_in.x_accel_expires != NULL) {
4739         return NGX_OK;
4740     }
4741
4742     start = h->value.data;
4743     last = start + h->value.len;
4744
4745     if (ngx_strlcasestrn(start, last, (u_char *) "no-cache", 8 - 1) != NULL
4746         || ngx_strlcasestrn(start, last, (u_char *) "no-store", 8 - 1) != NULL
4747         || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != NULL)
4748     {
4749         u->cacheable = 0;
4750         return NGX_OK;
4751     }

If the upstream Cache-Control header contains no-store, no-cache or private, ngx_http_upstream_process_cache_control sets u->cacheable = 0.

When Cache-Control is processed before ngx_http_upstream_process_accel_expires, this sets u->cacheable = 0 and the response is not cached, following Cache-Control; X-Accel-Expires then only overwrites valid_sec.

OTOH, when Cache-Control is processed after ngx_http_upstream_process_accel_expires, ngx_http_upstream_process_cache_control returns NGX_OK early, before the check for no-xxx or private. In this case the response is cached, with validity taken from X-Accel-Expires, and Cache-Control is ignored.

As a result, X-Accel-Expires does not entirely override Cache-Control, and the behavior differs depending on the order of the upstream headers. We could not find this behavior in any nginx documents. Could you tell me which behavior is intended?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,219520,293374#msg-293374

From nginx-forum at forum.nginx.org Wed Jan 19 12:24:20 2022
From: nginx-forum at forum.nginx.org (gab)
Date: Wed, 19 Jan 2022 07:24:20 -0500
Subject: Nginx reload leading to ELB 502 on AWS Elastic Load Balancer
Message-ID: <338ac72a40e2df3954838a0406bceb57.NginxMailingListEnglish@forum.nginx.org>

# Issue Summary

* After executing an Nginx soft reload with "service nginx reload", nginx closes most connections gracefully, but some connections aren't closed gracefully: Nginx sends an RST packet instead. For these connections, Nginx sent neither a FIN packet nor a "Connection: close" header. The connections are HTTP/1.1 keep-alive connections.

# Expected Behaviour

* After executing an Nginx soft reload, Nginx gracefully closes all connections, either by sending a "Connection: close" header in the response or by sending a FIN packet.

# Supporting Data

I have a tcpdump of 5 such connections where nginx didn't close the connection gracefully after the reload. Here's the tcpdump - https://drive.google.com/file/d/1UquhmJET9i8ShEizu8453iUKpprutILV/view?usp=sharing

If we analyse one such connection, we see that nginx didn't send a FIN packet on this connection; see this image - https://i.imgur.com/zqyLOLc.png

If we look at the response to the second-to-last request, we see that nginx didn't send a "Connection: close" header either; see this image - https://i.imgur.com/P2uu722.png

In this image I have plotted FIN packets sent by nginx over time - https://i.imgur.com/5lNAmnk.png

Nginx was reloaded on 2022-01-12 13:57:44 UTC.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293375,293375#msg-293375 From mdounin at mdounin.ru Wed Jan 19 13:42:19 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Jan 2022 16:42:19 +0300 Subject: Nginx reload leading to ELB 502 on AWS Elastic Load Balancer In-Reply-To: <338ac72a40e2df3954838a0406bceb57.NginxMailingListEnglish@forum.nginx.org> References: <338ac72a40e2df3954838a0406bceb57.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Wed, Jan 19, 2022 at 07:24:20AM -0500, gab wrote: > # Issue Summary > > * After executing Nginx soft reload with "service nginx reload", nginx is > able to close a lot of connections gracefully, but some connections aren't > closed gracefully and Nginx is sending an RST packet. For these connections, > Nginx didn't send FIN packet, and it didn't send "Connection: Close" header. > The connections are HTTP/1.1 keep alive connections. > > # Expected Behaviour > > * After executing Nginx soft reload, Nginx is gracefully closing all > connections by sending a "Connection: close" header in the response or a FIN > packet. > > # Supporting Data > > I have tcpdump of 5 such connections where nginx didn't close the connection > gracefully after nginx reload. > Here's the tcpdump - > https://drive.google.com/file/d/1UquhmJET9i8ShEizu8453iUKpprutILV/view?usp=sharing > > If we analyse one such connection, we see that nginx didn't send FIN packet > on this connection, refer this image - https://i.imgur.com/zqyLOLc.png > > If we see the response of second last request, we see nginx didn't send > "Connection: close" header either, refer this image - > https://i.imgur.com/P2uu722.png > > In this image I have plotted FIN packets sent by nginx over time - > https://i.imgur.com/5lNAmnk.png > > Nginx was reloaded on 2022-01-12 13:57:44 UTC. 
> > FIN packet graph (https://i.imgur.com/5lNAmnk.png) shows that Nginx was able > to close a lot of connections gracefully at the time of reload, but it > wasn't able to close the 5 connections. TCP dump of which I've shared above > (https://drive.google.com/file/d/1UquhmJET9i8ShEizu8453iUKpprutILV/view?usp=sharing). You may want to check if the following change improves things: https://mailman.nginx.org/pipermail/nginx-devel/2022-January/014728.html Note though that it is generally the client's responsibility (AWS ELB's) to retry requests in such cases. See this thread for details: https://mailman.nginx.org/pipermail/nginx-devel/2021-December/014681.html -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jan 19 13:52:04 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Jan 2022 16:52:04 +0300 Subject: Priority of cache-control, expires, x-accel-expires In-Reply-To: References: <20111205092958.GK67687@mdounin.ru> Message-ID: Hello! On Tue, Jan 18, 2022 at 09:19:09PM -0500, yugo-horie wrote: > Excuse me for referring to a quite old issue: we found different behavior > depending on header order in the case where X-Accel-Expires is not 0 and > Cache-Control has any of no-store, no-cache or private. > > It is a very easy process to reproduce. > > ngx_http_upstream.c has two methods, ngx_http_upstream_process_cache_control > and ngx_http_upstream_process_accel_expires, which process these > cache-related headers: Cache-Control is processed in > ngx_http_upstream_process_cache_control, X-Accel-Expires is processed in > process_accel_expires.
> > ngx_http_upstream_process_cache_control in ngx_http_upstream.c > > 4738 if (r->cache->valid_sec != 0 && u->headers_in.x_accel_expires != > NULL) { > 4739 return NGX_OK; > 4740 } > 4741 > 4742 start = h->value.data; > 4743 last = start + h->value.len; > 4744 > 4745 if (ngx_strlcasestrn(start, last, (u_char *) "no-cache", 8 - 1) != > NULL > 4746 || ngx_strlcasestrn(start, last, (u_char *) "no-store", 8 - 1) > != NULL > 4747 || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) > != NULL) > 4748 { > 4749 u->cacheable = 0; > 4750 return NGX_OK; > 4751 } > > > If Cache-Control from upstream has any of no-store, no-cache or > private, ngx_http_upstream_process_cache_control sets u->cacheable = 0. > > When Cache-Control is processed before > ngx_http_upstream_process_accel_expires, this procedure sets u->cacheable = > 0 and the response is not cached, in accordance with Cache-Control; > X-Accel-Expires only overwrites valid_sec. > > OTOH, when Cache-Control is processed after > ngx_http_upstream_process_accel_expires, > ngx_http_upstream_process_cache_control returns NGX_OK before it reaches > the handling of Cache-Control: no-xxx or private. In this case the response > is cached, and the cache validity follows the X-Accel-Expires value; > Cache-Control is ignored. > > As a result, X-Accel-Expires does not entirely override Cache-Control, and > the behavior differs according to the order of the upstream headers. We > could not find this behavior described in any nginx documents. Could you > tell us which behavior is intended? Your analysis is correct, X-Accel-Expires only takes precedence for cache validity time. Other flags, such as non-cacheable status and stale-if-error/stale-while-revalidate times, might be used from the Cache-Control header if it comes first. Certainly this is a misfeature, and patches to address this are welcome.
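If the order dependence described above matters for a deployment and you control the nginx side (but not the order of the upstream's headers), one workaround is to make X-Accel-Expires the only header nginx considers. A configuration sketch — the location, upstream name, and cache name are illustrative:

```nginx
location /api/ {
    proxy_pass http://backend;
    proxy_cache my_cache;   # assumes a proxy_cache_path zone named my_cache

    # Ignore the upstream Cache-Control and Expires headers so that
    # caching behaviour follows X-Accel-Expires alone, independent of
    # the order in which the upstream emits its headers.
    proxy_ignore_headers Cache-Control Expires;
}
```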
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Jan 20 12:18:11 2022 From: nginx-forum at forum.nginx.org (gab) Date: Thu, 20 Jan 2022 07:18:11 -0500 Subject: Nginx reload leading to ELB 502 on AWS Elastic Load Balancer In-Reply-To: References: Message-ID: <8dd15f7fd22bf7093aaf119dac259637.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Jan 19, 2022 at 07:24:20AM -0500, gab wrote: > > > # Issue Summary > > > > * After executing Nginx soft reload with "service nginx reload", > nginx is > > able to close a lot of connections gracefully, but some connections > aren't > > closed gracefully and Nginx is sending an RST packet. For these > connections, > > Nginx didn't send FIN packet, and it didn't send "Connection: Close" > header. > > The connections are HTTP/1.1 keep alive connections. > > > > # Expected Behaviour > > > > * After executing Nginx soft reload, Nginx is gracefully closing all > > connections by sending a "Connection: close" header in the response > or a FIN > > packet. > > > > # Supporting Data > > > > I have tcpdump of 5 such connections where nginx didn't close the > connection > > gracefully after nginx reload. > > Here's the tcpdump - > > > https://drive.google.com/file/d/1UquhmJET9i8ShEizu8453iUKpprutILV/view > ?usp=sharing > > > > If we analyse one such connection, we see that nginx didn't send FIN > packet > > on this connection, refer this image - > https://i.imgur.com/zqyLOLc.png > > > > If we see the response of second last request, we see nginx didn't > send > > "Connection: close" header either, refer this image - > > https://i.imgur.com/P2uu722.png > > > > In this image I have plotted FIN packets sent by nginx over time - > > https://i.imgur.com/5lNAmnk.png > > > > Nginx was reloaded on 2022-01-12 13:57:44 UTC. 
> > > > FIN packet graph (https://i.imgur.com/5lNAmnk.png) shows that Nginx > was able > > to close a lot of connections gracefully at the time of reload, but > it > > wasn't able to close the 5 connections. TCP dump of which I've > shared above > > > (https://drive.google.com/file/d/1UquhmJET9i8ShEizu8453iUKpprutILV/vie > w?usp=sharing). > > You may want to check if the following change improves things: > > https://mailman.nginx.org/pipermail/nginx-devel/2022-January/014728.ht > ml > > Note though that it is generally the client's responsibility (AWS > ELB's) to retry requests in such cases. See the this thread for > details: > > https://mailman.nginx.org/pipermail/nginx-devel/2021-December/014681.h > tml > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org Your commit will help a lot - https://hg.nginx.org/nginx/rev/96ae8e57b3dd I see from this commit that the change will be a part of 1.21.6 release - https://hg.nginx.org/nginx/rev/57581198e51e Thanks Maxim! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293375,293396#msg-293396 From jelledejong at powercraft.nl Fri Jan 21 11:45:16 2022 From: jelledejong at powercraft.nl (Jelle de Jong) Date: Fri, 21 Jan 2022 12:45:16 +0100 Subject: how to create session persistence or hash_ip within server context for an if statement Message-ID: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> Hello everybody, How can I use an nginx session or hash_ip in the server context to only do the following if statement code once and not a second time an client visits the website: server { if ($geoip2_data_country_iso_code != GB) { return 302 https://test01.example.nl$request_uri; } } I want the above code to be executed only once, and then be remembered for the next 24 hours or so (this is flexible, not a hard requirement). 
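One possible shape for what is being asked here, sketched with the map directive and an opt-out cookie so the redirect fires only once per client. This is an untested sketch: the cookie name, hosts, and the $geoip2_data_country_iso_code variable wiring are taken as given from the question; everything else is illustrative.

```nginx
# Empty value means "do not redirect".
map $geoip2_data_country_iso_code $geo_target {
    GB      "";                    # GB visitors stay on this store
    default "test01.example.nl";   # everyone else is redirected once
}

# Once the opt-out cookie is present, never redirect again.
map $cookie_noredirect $redirect_host {
    ""      $geo_target;   # no cookie yet: follow the geo decision
    default "";            # cookie set: stay put
}

# add_header skips headers whose value is empty, so the cookie is
# only set on the redirecting response.
map $redirect_host $redirect_cookie {
    ""      "";
    default "noredirect=1; Path=/; Max-Age=86400";   # ~24 hours
}

server {
    listen 80;
    server_name example.nl;

    add_header Set-Cookie $redirect_cookie;

    if ($redirect_host != "") {
        return 302 https://$redirect_host$request_uri;
    }
}
```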
Kind regards, Jelle de Jong From julian at jlbprof.com Sat Jan 22 21:22:40 2022 From: julian at jlbprof.com (Julian Brown) Date: Sat, 22 Jan 2022 15:22:40 -0600 Subject: Using Graylog to log from Nginx in a Docker container. Message-ID: Currently I run Nginx as a reverse proxy in a Docker container. FROM nginx ENV DEBIAN_FRONTEND noninteractive ... Details hidden ... EXPOSE 80 EXPOSE 443 CMD nginx -g 'daemon off;' We want to use Graylog, and we already have it for our other containers. So I want to now use it for Nginx. Graylog is like Syslog. I went to follow this guide: https://github.com/ronlut/graylog-content-pack-nginx-docker It recommends we symbolically link the logs to stdout/stderr (which is already part of the docker container) and then uses this language (see the bottom of the README.md on GitHub): Run Now, when your logs are collected by docker from stdout & stderr, you can run your docker using this command: docker run --log-driver=gelf --log-opt gelf-address=udp://:12401 for example: docker run --log-driver=gelf --log-opt gelf-address=udp://:12401 busybox echo Hello Graylog Has anyone had any experience with this? Or does anyone understand this last set of steps? Thank you Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: From osa at freebsd.org.ru Sun Jan 23 15:47:27 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Sun, 23 Jan 2022 18:47:27 +0300 Subject: Using Graylog to log from Nginx in a Docker container. In-Reply-To: References: Message-ID: Hi Julian, hope you're doing well. On Sat, Jan 22, 2022 at 03:22:40PM -0600, Julian Brown wrote: > Currently I run Nginx as a reverse proxy in a Docker container. > > FROM nginx > > ENV DEBIAN_FRONTEND noninteractive > > ... Details hidden ... > > EXPOSE 80 > EXPOSE 443 > > CMD nginx -g 'daemon off;' > > We want to use Graylog, and we already have it for our other containers. So I > want to now use it for Nginx. Graylog is like Syslog.
> > I went to follow this guide: > > https://github.com/ronlut/graylog-content-pack-nginx-docker > > It recommends we symbolically link the logs to stdout/stderr (which is > already part of the docker container) and then uses this language (see > the bottom of the README.md on GitHub): > > Run > > Now, when your logs are collected by docker from stdout & stderr, you can > run your docker using this command: > > docker run --log-driver=gelf --log-opt > gelf-address=udp://:12401 > > for example: > > docker run --log-driver=gelf --log-opt > gelf-address=udp://:12401 busybox echo Hello Graylog > > Has > anyone had any experience with this? Or understand this last set of > steps? Since this isn't directly related to nginx, I'd recommend raising a question or an issue for the project on GitHub, i.e. https://github.com/ronlut/graylog-content-pack-nginx-docker/issues/new/choose Hope that helps. -- Sergey A. Osokin From julian at jlbprof.com Sun Jan 23 15:59:02 2022 From: julian at jlbprof.com (Julian Brown) Date: Sun, 23 Jan 2022 09:59:02 -0600 Subject: Using Graylog to log from Nginx in a Docker container. In-Reply-To: References: Message-ID: OK, thank you On Sun, Jan 23, 2022 at 9:47 AM Sergey A. Osokin wrote: > Hi Julian, > > hope you're doing well. > > On Sat, Jan 22, 2022 at 03:22:40PM -0600, Julian Brown wrote: > > Currently I run Nginx as a reverse proxy in a Docker container. > > > > FROM nginx > > > > ENV DEBIAN_FRONTEND noninteractive > > > > ... Details hidden ... > > > > EXPOSE 80 > > EXPOSE 443 > > > > CMD nginx -g 'daemon off;' > > > > We are wanting to use Graylog, and have it for our other containers. So > I > > want to now use it for Nginx. Graylog is like Syslog.
> > > > I went to follow this guide: > > > > https://github.com/ronlut/graylog-content-pack-nginx-docker > > > > It recommends we symbolically link the logs to stdout/stderr (which is > > already part of the docker container) and it then uses this language (see > > the bottom of the the README.md at github): > > > > Run > > > > Now, when your logs are collected by docker from stdout & stderr, you can > > run your docker using this command: > > > > docker run --log-driver=gelf --log-opt > > gelf-address=udp://:12401 > > > > for example: > > > > docker run --log-driver=gelf --log-opt > > gelf-address=udp://:12401 busybox echo Hello Graylog > > > > >Has > > anyone had any experience with this? Or understand this last set of > steps? > > Since it's not related to nginx indirect, I'd recommend to raise a > question or an issue for the project on GH, i.e. > > https://github.com/ronlut/graylog-content-pack-nginx-docker/issues/new/choose > > Hope that helps. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesread5737 at gmail.com Sun Jan 23 23:37:39 2022 From: jamesread5737 at gmail.com (James Read) Date: Sun, 23 Jan 2022 23:37:39 +0000 Subject: Nginx performance data In-Reply-To: References: <000c01d803c0$9c652c70$d52f8550$@roze.lv> <004301d8047c$21402f60$63c08e20$@roze.lv> <000001d804f2$6d799170$486cb450$@roze.lv> Message-ID: On Fri, Jan 14, 2022 at 12:47 AM James Read wrote: > > > On Sun, Jan 9, 2022 at 12:47 AM Reinis Rozitis wrote: > >> > Otherwise why is my application running into such performance limits as >> mentioned in this question on stackoverflow >> https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number >> ? 
>> >> You are testing something (public third party dns servers) you have no >> control over (how can you be sure there are no rate limits?) with third >> party libraries without actually measuring/showing what takes up your time. >> That's not the best way to test low level things. >> > > I just succeeded in modifying the wrk source code so that is connects with > multiple servers rather than just the one. I can confirm that it gets the > same throughput as with one host. So I take back my comments about there > being a difference between one host simulating many being different to an > actual test with many clients. There must be a problem with my epoll > implementation because the modified wrk codebase works just fine. > > I've done some more playing around with the wrk source code. It turns out that wrk was reusing connections and for this reason was able to achieve high throughput. When you change the source code and force wrk to only use a connection once (the very situation it should be trying to simulate) the throughput drops off a cliff. I'm beginning to think there may be a problem with the Linux TCP/IP stack. I haven't tested this on BSD (yet). James Read > James Read > > >> >> I would suggest to at minimum do at least 'strace -c' to see what >> syscalls takes most of the time. >> >> But that's something out of scope of this mailing list. >> >> rr >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sun Jan 23 23:45:08 2022 From: francis at daoine.org (Francis Daly) Date: Sun, 23 Jan 2022 23:45:08 +0000 Subject: how to create session persistence or hash_ip within server context for an if statement In-Reply-To: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> References: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> Message-ID: <20220123234508.GA5824@daoine.org> On Fri, Jan 21, 2022 at 12:45:16PM +0100, Jelle de Jong wrote: Hi there, > How can I use an nginx session or hash_ip in the server context to only do > the following if statement code once and not a second time an client visits > the website: > > server { > if ($geoip2_data_country_iso_code != GB) { > return 302 https://test01.example.nl$request_uri; > } > } > > I want the above code to be executed only once, and then be remembered for > the next 24 hours or so (this is flexible, not a hard requirement). For what I think you are asking for, I think the answer is a combination of "stock nginx does not let you cache at that level"; and "it should not matter; that lookup should be lightweight". But I'm not quite sure what you are asking for; or why you are asking for it. So maybe there's a different answer too. Cheers, f -- Francis Daly francis at daoine.org From skip.montanaro at gmail.com Mon Jan 24 02:02:45 2022 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Sun, 23 Jan 2022 20:02:45 -0600 Subject: 444 response generated from backend? Message-ID: I have a small website, nginx in front of gunicorn which is in front of flask. Pretty typical stuff. I'm messing around with what I can and can't do in some situations. I'm intrigued by the non-standard 444 response. Is it possible to trigger the no-response response from the backend like this or is it only from within nginx itself? 
I followed the directions here: https://flask.palletsprojects.com/en/2.0.x/errorhandling/ and came up with something which doesn't generate errors in my flask server, but nginx generates a complete response instead of simply closing the connection. This is more out of curiosity than anything else, so if this isn't possible, no big deal... Thanks, Skip Montanaro -------------- next part -------------- An HTML attachment was scrubbed... URL: From devashi.tandon at appsentinels.ai Mon Jan 24 05:52:56 2022 From: devashi.tandon at appsentinels.ai (Devashi Tandon) Date: Mon, 24 Jan 2022 05:52:56 +0000 Subject: Using single persistent socket to send subrequests In-Reply-To: References: Message-ID: Hi, We have the following configuration: location / { proxy_http_version 1.1; proxy_pass http://ext-authz-upstream-server; } upstream ext-authz-upstream-server { server 172.20.10.6:9006; keepalive 4; } With this configuration, as per the previous email in nginx-devel, I am expecting that when I sequentially send 5 requests the 5th request should reuse one of the ports of the first four requests since 4 socket connections should be cached and persistent. But I observe that the 5th connection is also sent from a new source port. Do I need to add any other configuration to reuse the first four socket connections besides keepalive 4? Looking forward to your response. Thanks, Devashi ________________________________ From: nginx-devel on behalf of Maxim Dounin Sent: Thursday, December 30, 2021 1:47 PM To: nginx-devel at nginx.org Subject: Re: Using single persistent socket to send subrequests Hello! On Thu, Dec 30, 2021 at 07:58:33AM +0000, Devashi Tandon wrote: > upstream ext-authz-upstream-server { > server 172.20.10.6:9006; > keepalive 4; > } [...] > However, when I create 100 simultaneous connections, they are > all sent via a different source port which means that a new > socket connection is created everytime. 
That's expected behaviour: the keepalive directive specifies the number of connections to cache, not the limit on the number of connections to the upstream server. With many simultaneous requests nginx will open additional connections as needed. > How can I pipeline requests over 4 connections with keepalive > configuration set to 4? You cannot, pipelining is not supported by the proxy module. If the goal is not pipelining but to limit the number of connections to upstream servers, the "server ... max_conns=..." and the "queue" directive as available in nginx-plus might be what you want, see here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_conns http://nginx.org/en/docs/http/ngx_http_upstream_module.html#queue Note well that such questions do not look like something related to nginx development. A better mailing list for user-level question would be nginx at nginx.org, see here: http://nginx.org/en/support.html Hope this helps. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelledejong at powercraft.nl Mon Jan 24 09:57:17 2022 From: jelledejong at powercraft.nl (Jelle de Jong) Date: Mon, 24 Jan 2022 10:57:17 +0100 Subject: how to create session persistence or hash_ip within server context for an if statement In-Reply-To: <20220123234508.GA5824@daoine.org> References: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> <20220123234508.GA5824@daoine.org> Message-ID: <89c227c1-01d6-204d-0e73-44bb65a92332@powercraft.nl> Thank you for taking a look! 
On 1/24/22 00:45, Francis Daly wrote: > On Fri, Jan 21, 2022 at 12:45:16PM +0100, Jelle de Jong wrote: > > Hi there, > >> How can I use an nginx session or hash_ip in the server context to only do >> the following if statement code once and not a second time an client visits >> the website: >> >> server { >> if ($geoip2_data_country_iso_code != GB) { >> return 302 https://test01.example.nl$request_uri; >> } >> } >> >> I want the above code to be executed only once, and then be remembered for >> the next 24 hours or so (this is flexible, not a hard requirement). > > For what I think you are asking for, I think the answer is a combination > of "stock nginx does not let you cache at that level"; and "it should > not matter; that lookup should be lightweight". > > But I'm not quite sure what you are asking for; or why you are asking > for it. So maybe there's a different answer too. I want to be able to do a redirect, but only one time; after that hit it should not redirect. If a client visits a web store, it will get redirected to the region-specific store, but if they then manually select another store, it should not redirect back again. I don't know if an nginx session cookie is possible, or whether there is a way to use the nginx upstream module? I am open to ideas to make this work. Kind regards, Jelle de Jong From r at roze.lv Mon Jan 24 10:15:05 2022 From: r at roze.lv (Reinis Rozitis) Date: Mon, 24 Jan 2022 12:15:05 +0200 Subject: how to create session persistence or hash_ip within server context for an if statement In-Reply-To: <89c227c1-01d6-204d-0e73-44bb65a92332@powercraft.nl> References: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> <20220123234508.GA5824@daoine.org> <89c227c1-01d6-204d-0e73-44bb65a92332@powercraft.nl> Message-ID: <000001d8110b$42122ef0$c6368cd0$@roze.lv> > I want to be able to do an redirect, but only one time, the hit it should not redirect.
> > If a client visits an web-store it will get redirected to the region specific store, but if then then manually select an other store there it should not redirect back again. I don't know if a nginx session cookie is possible or an way to use the nginx upstream module? If your website uses cookies (and sets some information you can identify a region) one way would be to make a decision on that - nginx can access cookies using $cookie_COOKIENAME variable(s) so you can add it as an if() or map conditions to decide if a redirect is needed. rr From osa at freebsd.org.ru Mon Jan 24 14:56:33 2022 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Mon, 24 Jan 2022 17:56:33 +0300 Subject: Using single persistent socket to send subrequests In-Reply-To: References: Message-ID: Hi Devashi, On Mon, Jan 24, 2022 at 05:52:56AM +0000, Devashi Tandon wrote: > > We have the following configuration: > > location / { > proxy_http_version 1.1; > proxy_pass http://ext-authz-upstream-server; > } > > upstream ext-authz-upstream-server { > server 172.20.10.6:9006; > keepalive 4; > } > > Do I need to add any other configuration to reuse the first four socket connections besides keepalive 4? 
You'd need to review and slightly update the `location /' configuration block by adding the following directive: proxy_set_header Connection ""; Please visit the following link to get more details: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive -- Sergey Osokin From jelledejong at powercraft.nl Mon Jan 24 15:09:34 2022 From: jelledejong at powercraft.nl (Jelle de Jong) Date: Mon, 24 Jan 2022 16:09:34 +0100 Subject: how to create session persistence or hash_ip within server context for an if statement In-Reply-To: <000001d8110b$42122ef0$c6368cd0$@roze.lv> References: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> <20220123234508.GA5824@daoine.org> <89c227c1-01d6-204d-0e73-44bb65a92332@powercraft.nl> <000001d8110b$42122ef0$c6368cd0$@roze.lv> Message-ID: On 1/24/22 11:15, Reinis Rozitis wrote: >> I want to be able to do an redirect, but only one time, the hit it should > not redirect. >> >> If a client visits an web-store it will get redirected to the region > specific store, but if then then manually select an other store there it > should not redirect back again. I don't know if a nginx session cookie is > possible or an way to use the nginx upstream module? > > If your website uses cookies (and sets some information you can identify a > region) one way would be to make a decision on that - nginx can access > cookies using $cookie_COOKIENAME variable(s) so you can add it as an if() or > map conditions to decide if a redirect is needed. I got it working! I am not sure of the performance hit; maybe someone knows how to make this more effective? I noticed that the if statements have a higher priority than the location sections and do not follow the order in the configuration file. I needed an additional if statement to check the request URL...
if ($cookie_REDIRECT != false) { set $redirect true; } if ($request_uri ~ "^/(noredirect)/(.*)$" ) { set $redirect false; } location ~ ^/(noredirect)/ { add_header Set-Cookie "REDIRECT=false;Domain=$host;Path=/;Max-Age=31536000"; rewrite ^/noredirect(.*)$ https://$host$1 redirect; } if ($geoip2_data_country_iso_code = IE) { set $redirect ${redirect}+test02.example.nl; } if ($redirect = true+test02.example.nl) { return 302 https://test02.example.nl$request_uri; } if ($geoip2_data_country_iso_code != GB) { set $redirect ${redirect}+test03.example.nl; } if ($redirect = true+test03.example.nl) { return 302 https://test03.example.nl$request_uri; } if ($geoip2_data_country_iso_code = NL) { set $MAGE_RUN_CODE_ORG storeview07; } if ($geoip2_data_country_iso_code = US) { set $MAGE_RUN_CODE_ORG storeview08; } if ($geoip2_data_country_iso_code = DE) { set $MAGE_RUN_CODE_ORG storeview09; } location / { proxy_pass http://127.0.0.1:6081; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; } If I am falling in a pitfall somehow let me know, it seems to work well. Kind regards, Jelle From tony_tabone at hotmail.com Mon Jan 24 16:57:02 2022 From: tony_tabone at hotmail.com (tony tabone) Date: Mon, 24 Jan 2022 16:57:02 +0000 Subject: 503 redirect Message-ID: Dear all, I am currently trying to implement a 503 reply with specific pages. I basically have a nginx config which is using GEOIP. All the html files load fine with a 200 reply, however I am trying to change the reply to 503 and still show the same page. Is it possible for this to be done ? The below is an example of the html file. # location of all html files that can be called. 
location ~ (/static/*|/5xx_errors.html) { #kill cache add_header Last-Modified $date_gmt; #add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0'; if_modified_since off; expires off; etag off; add_header Cache-Control 'must-revalidate'; } location /{ index $site_index_file ; try_files 5xx_error.html =503; ### Condition statement to store Geo location variable and use that against the user agent to determine page to load set $cond "${geoip2_data_country_code}"; if ($http_user_agent ~* "iphone|android") { set $cond "${cond}X"; } } } What this does is give a 503 error, and the html file is not shown. Thanks & Regards, tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon Jan 24 17:15:20 2022 From: r at roze.lv (Reinis Rozitis) Date: Mon, 24 Jan 2022 19:15:20 +0200 Subject: how to create session persistence or hash_ip within server context for an if statement In-Reply-To: References: <3e9182b0-3b63-efa8-9d1b-4d47d7d2e0e8@powercraft.nl> <20220123234508.GA5824@daoine.org> <89c227c1-01d6-204d-0e73-44bb65a92332@powercraft.nl> <000001d8110b$42122ef0$c6368cd0$@roze.lv> Message-ID: <000001d81145$f743c920$e5cb5b60$@roze.lv> > I got it working! I am not sure of the performance hit, maybe someone know how to make this more effective? Without going deeper into what your actual requirements and expected result are, my suggestion would be to look at the map directive (http://nginx.org/en/docs/http/ngx_http_map_module.html) instead of the multiple if statements - it is more powerful and performant.
For example all the $geoip2_data_country_iso_code ifs could be: map $geoip2_data_country_iso_code $MAGE_RUN_CODE_ORG { NL storeview07; US storeview08; DE storeview09; # optionally a default which will be set for all non-matching country codes } Since you can also chain map directives, the same can probably be done with the $redirect logic - rather than multiple ifs you could group the checks for the same conditions together.. but I'm not exactly sure I understand the reason behind the /noredirect/ location and rewriting: if ($request_uri ~ "^/(noredirect)/(.*)$" ) { set $redirect false; } location ~ ^/(noredirect)/ { add_header Set-Cookie "REDIRECT=false;Domain=$host;Path=/;Max-Age=31536000"; rewrite ^/noredirect(.*)$ https://$host$1 redirect; } Some thoughts: - if you already have a location match, the if check (for the same uri) is redundant - you could just set the variable in the location block - extracting regexp parts without using them makes little sense - since you do rewrite ^/noredirect(.*)$ https://$host$1 redirect; the "^/(noredirect)/(.*)$" and ~ ^/(noredirect)/ are not needed - just using location ^~ /noredirect or location ~ ^/noredirect (depending on the location match priority) would be enough - in the end, the fact that in one location $redirect is set to false and then a redirect still happens in the same location is confusing (at least) to me p.s. sometimes writing the application logic in pure webserver configuration is cumbersome - dynamic languages like Lua (can be embedded in nginx) / Perl or PHP could do a better job. rr From francis at daoine.org Tue Jan 25 13:58:43 2022 From: francis at daoine.org (Francis Daly) Date: Tue, 25 Jan 2022 13:58:43 +0000 Subject: 503 redirect In-Reply-To: References: Message-ID: <20220125135843.GB5824@daoine.org> On Mon, Jan 24, 2022 at 04:57:02PM +0000, tony tabone wrote: Hi there, > I am currently trying to implement a 503 reply with specific pages. I basically have a nginx config which is using GEOIP.
> > All the html files load fine with a 200 reply, however I am trying to change the reply to 503 and still show the same page. > > Is it possible for this to be done ? I think I don't fully understand what your plan for request / response is; but the usual way in nginx to set a "custom" response body for non-200 http responses is to use error_page: http://nginx.org/r/error_page. Something like error_page 503 /5xx_errors.html; and then a "return 503;" or equivalent, might work for you? If you have an enumerated set of uri:s that you want to return different content to, then it might be possible to have a set of "location =" locations, each of which sets an error_page and an appropriate return. That might get a bit fiddly; if you can give a list of "for *this* request I want *this* response" pairs, it might be possible to come up with a better suggestion. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Tue Jan 25 15:23:36 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Jan 2022 18:23:36 +0300 Subject: nginx-1.21.6 Message-ID: Changes with nginx 1.21.6 25 Jan 2022 *) Bugfix: when using EPOLLEXCLUSIVE on Linux client connections were unevenly distributed among worker processes. *) Bugfix: nginx returned the "Connection: keep-alive" header line in responses during graceful shutdown of old worker processes. *) Bugfix: in the "ssl_session_ticket_key" when using TLSv1.3. -- Maxim Dounin http://nginx.org/ From xeioex at nginx.com Tue Jan 25 15:51:59 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 25 Jan 2022 18:51:59 +0300 Subject: njs-0.7.2 Message-ID: <0fde85b5-5ce0-9fee-058a-565330bd1d0a@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release focuses on stabilization of recently released features including async/await and fixing bugs found by various fuzzers. 
Learn more about njs: - Overview and introduction: https://nginx.org/en/docs/njs/ - NGINX JavaScript in Your Web Server Configuration: https://youtu.be/Jc_L6UffFOs - Extending NGINX with Custom Code: https://youtu.be/0CVhq4AUU7M - Using node modules with njs: https://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files: https://nginx.org/en/docs/njs/typescript.html We are hiring: If you are a C programmer, passionate about Open Source and you love what we do, consider the following career opportunity: https://ffive.wd5.myworkdayjobs.com/NGINX/job/Ireland-Homebase/Software-Engineer-III---NGNIX-NJS_RP1022237 Feel free to try it and give us feedback on: - Github: https://github.com/nginx/njs/issues - Mailing list: https://mailman.nginx.org/mailman/listinfo/nginx-devel Changes with njs 0.7.2 25 Jan 2022 Core: *) Bugfix: fixed Array.prototype.join() when array is changed while iterating. *) Bugfix: fixed Array.prototype.slice() when array is changed while iterating. *) Bugfix: fixed Array.prototype.concat() when array is changed while iterating. *) Bugfix: fixed Array.prototype.reverse() when array is changed while iterating. *) Bugfix: fixed Buffer.concat() with subarrays. Thanks to Sylvain Etienne. *) Bugfix: fixed type confusion bug while resolving promises. *) Bugfix: fixed Function.prototype.apply() with large array arguments. *) Bugfix: fixed recursive async function calls. *) Bugfix: fixed function redeclaration. The bug was introduced in 0.7.0. 
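For readers who have not tried njs yet, a minimal sketch of how the module plugs into an nginx configuration. The module path, file name, and handler are illustrative — they depend on how njs was built and packaged:

```nginx
# Main context: load the njs module (path depends on your packaging).
load_module modules/ngx_http_js_module.so;

http {
    # /etc/nginx/njs/main.js might contain, for example:
    #   function hello(r) { r.return(200, "Hello from njs\n"); }
    #   export default { hello };
    js_import main from /etc/nginx/njs/main.js;

    server {
        listen 8080;

        location /hello {
            # Let the imported JavaScript handler generate the response.
            js_content main.hello;
        }
    }
}
```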
From devashi.tandon at appsentinels.ai Wed Jan 26 07:18:02 2022
From: devashi.tandon at appsentinels.ai (Devashi Tandon)
Date: Wed, 26 Jan 2022 07:18:02 +0000
Subject: Using single persistent socket to send subrequests
In-Reply-To: References: <164306886240.74788.8663387132680835741@ec2-18-197-214-38.eu-central-1.compute.amazonaws.com>
Message-ID:

Resending with correct Subject. Sorry for the confusion.

Hi Sergey,

I tried with clearing the connections header but NGINX is still sending the 5th request through a new source port. Let me give a more detailed configuration we have. Just to inform you, we have our own auth module instead of using the NGINX auth module. We call ngx_http_post_request to post subrequests, and the code is almost the same as that of the auth module.

For the subrequest sent by the auth module with the following configuration, we expect NGINX to send requests through a new port for the first four connections and then reuse one of the ports for the fifth connection, especially when the requests are sequential.
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65s;
    include /etc/nginx/conf.d/*.conf;
    proxy_socket_keepalive on;

    server {
        listen 9000;
        server_name front-service;
        ext_auth_fail_allow on;
        error_log /var/log/nginx/error.log debug;

        location / {
            ext_auth_request /auth;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:8090;

            location /auth {
                internal;
                proxy_set_header X-Req-Uri $request_uri;
                proxy_set_header X-Method $request_method;
                proxy_set_header X-Req-Host $host;
                proxy_set_header X-Client-Addr $remote_addr:$remote_port;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_connect_timeout 5000ms;
                proxy_read_timeout 5000ms;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_pass http://ext-authz-upstream-server;
            }
        }

        upstream ext-authz-upstream-server {
            server 172.20.10.6:9006;
            keepalive 4;
        }
    }

Could you please help on what we are missing?

Thanks,
Devashi

Date: Mon, 24 Jan 2022 17:56:33 +0300
From: "Sergey A. Osokin"
Subject: Re: Using single persistent socket to send subrequests
To: nginx at nginx.org
Message-ID:
Content-Type: text/plain; charset=utf-8

Hi Devashi,

On Mon, Jan 24, 2022 at 05:52:56AM +0000, Devashi Tandon wrote:
>
> We have the following configuration:
>
> location / {
>     proxy_http_version 1.1;
>     proxy_pass http://ext-authz-upstream-server;
> }
>
> upstream ext-authz-upstream-server {
>     server 172.20.10.6:9006;
>     keepalive 4;
> }
>
> Do I need to add any other configuration to reuse the first four socket connections besides keepalive 4?
You'd need to review and slightly update the `location /' configuration block by adding the following directive: proxy_set_header Connection ""; Please visit the following link to get more details: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive -- Sergey Osokin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 26 12:40:32 2022 From: nginx-forum at forum.nginx.org (itpp2012) Date: Wed, 26 Jan 2022 07:40:32 -0500 Subject: [News] Igor Sysoev leaving nginx Message-ID: @Igor Sysoev, thanks for all your work and a wonderful product, we all hope you enjoy whatever you do next. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293469,293469#msg-293469 From antoine.bonavita at gmail.com Wed Jan 26 13:26:59 2022 From: antoine.bonavita at gmail.com (Antoine Bonavita) Date: Wed, 26 Jan 2022 14:26:59 +0100 Subject: [News] Igor Sysoev leaving nginx In-Reply-To: References: Message-ID: Yeah, BIG THANK YOU @Igor Sysoev for giving us this wonderful piece of tech and all the best for the next adventure (that I, like many others here, will be very interested in following). So long, and thanks for all the ... nginx. A. On Wed, Jan 26, 2022 at 1:42 PM itpp2012 wrote: > @Igor Sysoev, thanks for all your work and a wonderful product, we all hope > you enjoy whatever you do next. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,293469,293469#msg-293469 > > _______________________________________________ > nginx mailing list -- nginx at nginx.org > To unsubscribe send an email to nginx-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 27 11:07:43 2022 From: nginx-forum at forum.nginx.org (unikron) Date: Thu, 27 Jan 2022 06:07:43 -0500 Subject: How to make nginx fail if it can't write to cache dir. 
Message-ID: <87166eece9aaf63039af2711080e3998.NginxMailingListEnglish@forum.nginx.org>

Hi,

Nginx version: nginx/1.20.2

Cache config:
proxy_cache_path /nginx/cache levels=1:2 keys_zone=bla:20m max_size=10g inactive=20m use_temp_path=off;

I had a problem while using a tmpfs mount for the nginx cache dir: my automation mounted it over and over again (so each time nginx created a new set of base dirs [a-z]), but after a few mounts it could no longer create new base dirs (no more memory), and writes to the cache dir failed with:

[crit] 31334#31334: *128216 mkdir() "/nginx/cache/4" failed (13: Permission denied) while reading upstream

But nginx still accepted connections, and the client got a hangup during transfer each time. How can I make nginx fail when this happens, so clients will not get to it?

Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293477,293477#msg-293477

From r at roze.lv Thu Jan 27 11:39:54 2022
From: r at roze.lv (Reinis Rozitis)
Date: Thu, 27 Jan 2022 13:39:54 +0200
Subject: How to make nginx fail if it can't write to cache dir.
In-Reply-To: <87166eece9aaf63039af2711080e3998.NginxMailingListEnglish@forum.nginx.org>
References: <87166eece9aaf63039af2711080e3998.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <000001d81372$9a8a6430$cf9f2c90$@roze.lv>

> Cache config:
> proxy_cache_path /nginx/cache levels=1:2 keys_zone=bla:20m max_size=10g inactive=20m use_temp_path=off;
>
> I had a problem while using tmpfs mount for nginx cache dir, my automation mounted it over and over again (so each time nginx created a new set of base dirs [a-z]), but after a few mounts, it could no longer create new base dirs (no more memory) and writes to the cache dir failed with:

Isn't it something to fix on the automation/mount side rather than trying to make nginx "fail"? If there is a lack of resources, maybe lowering the max_size=10g would also make sense.
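One possible mitigation along the lines suggested above, sized for a small tmpfs mount (the numbers are made-up placeholders; `min_free` requires nginx 1.19.1 or newer, which the reported 1.20.2 satisfies):

```nginx
# Cap the cache well below the tmpfs size and keep some space free, so the
# cache manager evicts entries before mkdir()/write() in the cache
# directory start failing on a full filesystem.
proxy_cache_path /nginx/cache levels=1:2 keys_zone=bla:20m
                 max_size=2g min_free=256m inactive=20m use_temp_path=off;
```

This does not make nginx exit on cache write errors, but it makes that failure mode far less likely to occur in the first place.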
rr From nginx-forum at forum.nginx.org Thu Jan 27 12:57:26 2022 From: nginx-forum at forum.nginx.org (unikron) Date: Thu, 27 Jan 2022 07:57:26 -0500 Subject: How to make nginx fail if it can't write to cache dir. In-Reply-To: <000001d81372$9a8a6430$cf9f2c90$@roze.lv> References: <000001d81372$9a8a6430$cf9f2c90$@roze.lv> Message-ID: <56ef6ec54e2470ae3231ca064181b580.NginxMailingListEnglish@forum.nginx.org> I can't think of any edge case that will happen to my nginx instances, that's why I want it to die. I fail to see why it will keep on answering to new requests if it can't fulfill any of them. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,293477,293484#msg-293484 From anoopalias01 at gmail.com Thu Jan 27 13:48:13 2022 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 27 Jan 2022 19:18:13 +0530 Subject: ktls nginx not working Message-ID: Hi, I am trying to implement/test ktls as per the blog article https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/#tls-protocol ########################### This is done on CentOS8 VM # uname -r 4.18.0-348.7.1.el8_5.x86_64 ########################### # openssl-3.0.1/.openssl/bin/openssl ciphers 
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA ########################### # /usr/sbin/nginx-debug -V nginx version: nginx/1.21.6 built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC) built with OpenSSL 3.0.1 14 Dec 2021 TLS SNI support enabled configure arguments: --with-debug --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre2-10.39 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./openssl-3.0.1 --with-openssl-opt=enable-ktls --with-openssl-opt=enable-tls1_3 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log ############################ The debug log 
does not show any signs of ktls in use (snippet from the log provided below on download of a 1G file) 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077A08 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077A08 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077D30 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077D30 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002075E58 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075E58 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002075F60 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075F60 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077BA8 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077BA8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077AA0 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077AA0 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077890 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077890 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002075BC8 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075BC8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter 0000000000000000 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15, 0000000002791FC0, 32768, 21168128 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15, 0000000002791FC0, 32768, 21168128 2022/01/27 13:41:33 
[debug] 1843564#1843564: *140 read: 15, 0000000002799FD0, 32768, 21200896 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http postpone filter "/1G?" 0000000002075DD8 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 write new buf t:1 f:1 0000000002791FC0, pos 0000000002791FC0, size: 32768 file: 21168128, size: 32768 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 write new buf t:1 f:1 0000000002799FD0, pos 0000000002799FD0, size: 32768 file: 21200896, size: 32768 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter: l:0 f:1 s:65536 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter limit 2097152 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 send chain: 0000000002075DF8 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 windows: conn:10297344 stream:868352 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002075BC8: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077890: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077AA0: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077BA8: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002075F60: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002075E58: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077D30: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077A08: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077A08 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077D30 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002075E58 
sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002075F60 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077BA8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077AA0 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077890 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002075BC8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8174 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL to write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL_write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 18 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8156 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL to write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL_write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 36 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8138 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL to write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL_write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 54 2022/01/27 13:41:33 [debug] 
1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 9 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 8120 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL to write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL_write: 16384 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL buf copy: 72 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL to write: 72 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 SSL_write: 72 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002075BC8 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075BC8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077890 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075BC8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077890 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077890 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077AA0 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077AA0 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077BA8 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077BA8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002075F60 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075F60 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002075E58 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002075E58 sid:1 bl:0 len:8192 2022/01/27 13:41:33 
[debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077D30 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077D30 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 DATA frame 0000000002077A08 was sent 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame sent: 0000000002077A08 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter 0000000000000000 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15, 0000000002799FD0, 32768, 21233664 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 read: 15, 0000000002791FC0, 32768, 21266432 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http postpone filter "/1G?" 0000000002075DE8 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 write new buf t:1 f:1 0000000002799FD0, pos 0000000002799FD0, size: 32768 file: 21233664, size: 32768 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 write new buf t:1 f:1 0000000002791FC0, pos 0000000002791FC0, size: 32768 file: 21266432, size: 32768 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http write filter: l:0 f:1 s:65536 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 send chain: 0000000002077C50 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 windows: conn:10231808 stream:802816 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077A08: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077D30: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002075E58: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002075F60: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077BA8: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077AA0: len:8192 flags:0 2022/01/27 
13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002077890: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2:1 create DATA frame 0000000002075BC8: len:8192 flags:0 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002075BC8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077890 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077AA0 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077BA8 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002075F60 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002075E58 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077D30 sid:1 bl:0 len:8192 2022/01/27 13:41:33 [debug] 1843564#1843564: *140 http2 frame out: 0000000002077A08 sid:1 bl:0 len:8192 ############################################# [root at 65-108-156-104 nginx-1.21.6]# grep SSL_sendfile /var/log/nginx/error_log [root at 65-108-156-104 nginx-1.21.6]# grep BIO /var/log/nginx/error_log [root at 65-108-156-104 nginx-1.21.6]# There is no SSL_sendfile in the log ############################################## # TLS Settings ssl_protocols TLSv1.3; ssl_session_cache shared:SSL:32m; ssl_dhparam /etc/nginx/ssl/dhparam.pem; ssl_session_timeout 1d; ssl_session_tickets off; ssl_ocsp_cache shared:ocspcache:10m; server{ ... ssl_conf_command Options KTLS; .. } ################################################# What am I doing wrong? Thanks in advance, -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Thu Jan 27 14:08:59 2022 From: r at roze.lv (Reinis Rozitis) Date: Thu, 27 Jan 2022 16:08:59 +0200 Subject: How to make nginx fail if it can't write to cache dir. 
In-Reply-To: <56ef6ec54e2470ae3231ca064181b580.NginxMailingListEnglish@forum.nginx.org> References: <000001d81372$9a8a6430$cf9f2c90$@roze.lv> <56ef6ec54e2470ae3231ca064181b580.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000a01d81387$6e036aa0$4a0a3fe0$@roze.lv> > I fail to see why it will keep on answering to new requests if it can't fulfill any of them. Because there might be other requests which can be answered without touching the problematic cache directory? While the error is critical (for the particular request) I don't think it makes sense for nginx to stop/kill itself. If you need such functionality/behavior maybe a simple script which checks the nginx error log for 'crit' entries and performs 'killall -9 nginx' might work in your case (or use something like sec (https://simple-evcorr.github.io/) for example) ? rr From pluknet at nginx.com Thu Jan 27 14:14:56 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 27 Jan 2022 17:14:56 +0300 Subject: ktls nginx not working In-Reply-To: References: Message-ID: > On 27 Jan 2022, at 16:48, Anoop Alias wrote: > > Hi, > > I am trying to implement/test ktls as per the blog article > > https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/#tls-protocol > > ########################### > This is done on CentOS8 VM > > # uname -r > 4.18.0-348.7.1.el8_5.x86_64 > ########################### > # openssl-3.0.1/.openssl/bin/openssl ciphers > [..] 
>
> ###########################
> # /usr/sbin/nginx-debug -V
> nginx version: nginx/1.21.6
> built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
> built with OpenSSL 3.0.1 14 Dec 2021
> TLS SNI support enabled
> configure arguments: --with-debug --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre2-10.39 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./openssl-3.0.1 --with-openssl-opt=enable-ktls --with-openssl-opt=enable-tls1_3 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
> ############################
> The debug log does not show any signs of ktls in use
> [..]
> [root at 65-108-156-104 nginx-1.21.6]# grep SSL_sendfile /var/log/nginx/error_log
> [root at 65-108-156-104 nginx-1.21.6]# grep BIO /var/log/nginx/error_log
> [root at 65-108-156-104 nginx-1.21.6]#
>
> There is no SSL_sendfile in the log
>
> ##############################################
> # TLS Settings
> ssl_protocols TLSv1.3;
> ssl_session_cache shared:SSL:32m;
> ssl_dhparam /etc/nginx/ssl/dhparam.pem;
> ssl_session_timeout 1d;
> ssl_session_tickets off;
> ssl_ocsp_cache shared:ocspcache:10m;
>
> server{
> ...
> ssl_conf_command Options KTLS;
> ..
> }
> #################################################
> What am I doing wrong?

Make sure you have enabled sendfile in the configuration. Note that Linux 4.18 as distributed with CentOS 8 implements no KTLS for TLSv1.3 ciphers, and only a quite limited number of ciphers for TLSv1.2.

--
Sergey Kandaurov

From anoopalias01 at gmail.com Thu Jan 27 14:27:10 2022
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Thu, 27 Jan 2022 19:57:10 +0530
Subject: ktls nginx not working
In-Reply-To: References:
Message-ID:

sendfile on; is there in the http context.

I tested with:

# TLS Settings
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;

which should cover CentOS 8 as mentioned in the blog post?
But it still did not work ########################## Its a KVM vps from hetzner and tls module seems loaded [root at 65-108-156-104 nginx-1.21.6]# lsmod|grep tls tls 102400 0 -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Jan 27 14:49:24 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 27 Jan 2022 17:49:24 +0300 Subject: ktls nginx not working In-Reply-To: References: Message-ID: <9812EC10-8AEE-4EBA-83CC-ADA086F8F827@nginx.com> > On 27 Jan 2022, at 17:27, Anoop Alias wrote: > > sendfile on; > > is there in the http context > > I tested with > > # TLS Settings > ssl_protocols TLSv1.2; > ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384; > > which should cover centos8 as mentioned in the blog post? > > But it still did not work > ########################## > > Its a KVM vps from hetzner and tls module seems loaded > > [root at 65-108-156-104 nginx-1.21.6]# lsmod|grep tls > tls 102400 0 > Another thing to check is to make sure you've actually built OpenSSL with KTLS support (should be no OPENSSL_NO_KTLS defined in generated includes). As previously provided, you have several --with-openssl-opt: --with-openssl-opt=enable-ktls --with-openssl-opt=enable-tls1_3 In this case, only the last one is applied. To specify several values: --with-openssl-opt="opt1 opt2" It's also useful to know the actually negotiated ciphersuite. 
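Concretely, the corrected invocation might look like the following (only the OpenSSL-related arguments are spelled out here; the remaining flags from the original configure line are elided, not part of the suggestion):

```shell
# Merge the OpenSSL build options into a single --with-openssl-opt value;
# when the flag is repeated, only the last occurrence takes effect.
./configure \
    --with-debug \
    --with-openssl=./openssl-3.0.1 \
    --with-openssl-opt="enable-ktls enable-tls1_3" \
    ...   # remaining flags as in the original configure line
```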
--
Sergey Kandaurov

From anoopalias01 at gmail.com Fri Jan 28 04:29:49 2022
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Fri, 28 Jan 2022 09:59:49 +0530
Subject: ktls nginx not working
In-Reply-To: <9812EC10-8AEE-4EBA-83CC-ADA086F8F827@nginx.com>
References: <9812EC10-8AEE-4EBA-83CC-ADA086F8F827@nginx.com>
Message-ID:

It works now.

But I have a strange situation: if we download the file using text clients like wget or curl, BIO_get_ktls_send and SSL_sendfile do not show up in the debug log, but they do if we use a browser like Chrome or Firefox.

--
*Anoop P Alias*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From noloader at gmail.com Fri Jan 28 06:46:21 2022
From: noloader at gmail.com (Jeffrey Walton)
Date: Fri, 28 Jan 2022 01:46:21 -0500
Subject: ktls nginx not working
In-Reply-To: References:
Message-ID:

On Thu, Jan 27, 2022 at 8:52 AM Anoop Alias wrote:
>
> I am trying to implement/test ktls as per the blog article
>
> https://www.nginx.com/blog/improving-nginx-performance-with-kernel-tls/#tls-protocol
>
> ###########################
> This is done on CentOS8 VM
>
> # uname -r
> 4.18.0-348.7.1.el8_5.x86_64
> ###########################
> # openssl-3.0.1/.openssl/bin/openssl ciphers
>
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA One small comment here... Typically you can reduce the advertised cipher suites to reduce the size of the pdu. Use a cipher string like "HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4". That gets you down to about 40 or 50 cipher suites (iirc), which takes up 80 or 100 bytes (each cipher suite consumes 2 bytes in the client.hello). You want to do what you can to keep those pdu's small. 
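If you want to try that on the nginx side, a sketch (the cipher string is Jeffrey's suggestion verbatim; note that on a server this trims what nginx will accept, while the size of the client's own ClientHello advertisement is still up to the client, so verify the reduced string against the clients you need to support):

```nginx
# Advertise/accept a smaller TLSv1.2 cipher list to keep handshake PDUs small.
ssl_ciphers "HIGH:!aNULL:!kRSA:!PSK:!SRP:!MD5:!RC4";
ssl_prefer_server_ciphers on;
```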
Also see https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

> ###########################
> # /usr/sbin/nginx-debug -V
> nginx version: nginx/1.21.6
> built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
> built with OpenSSL 3.0.1 14 Dec 2021
> TLS SNI support enabled
> configure arguments: --with-debug --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre2-10.39 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./openssl-3.0.1 --with-openssl-opt=enable-ktls --with-openssl-opt=enable-tls1_3 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log

One small comment here... On x86_64 you should also use the OpenSSL
option enable-ec_nistp_64_gcc_128. It makes DH key exchange 2x to 4x
faster. There are three conditions to use enable-ec_nistp_64_gcc_128,
and x86_64 satisfies them. Also see
https://wiki.openssl.org/index.php/Compilation_and_Installation#Configure_Options.

Jeff

From drodriguez at unau.edu.ar  Fri Jan 28 13:22:42 2022
From: drodriguez at unau.edu.ar (Daniel Armando Rodriguez)
Date: Fri, 28 Jan 2022 10:22:42 -0300
Subject: SSL passthrough
In-Reply-To: 
References: 
Message-ID: <0b550d1839dce279310fe9350b1114d9@unau.edu.ar>

Hi there

I have a reverse proxy in front of several services and now need to add
SSL passthrough for some of them. So, with this goal, I set up this
config:

stream {
    map $ssl_preread_server_name $name {
        sub1.DOMAIN sub1;
        sub2.DOMAIN sub2;
        sub3.DOMAIN sub3;
        sub4.DOMAIN sub4;
    }

    upstream sub1 { server x.y.z.1:443; }
    upstream sub2 { server x.y.z.1:443; }
    upstream sub3 { server x.y.z.1:443; }
    upstream sub4 { server x.y.z.1:443; }

    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
    }
}

And yes, all four subdomains are hosted in the same VM; this has to do
with the peculiarities of the software used. To catch HTTP traffic and
redirect it, I added this to each subdomain's server:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

Is this the right way to go, or am I missing something?

I also tried to upgrade nginx using the Debian repo, but that wasn't
possible. Currently installed: 1.14.2 under Debian Buster.

________________________________________________
Daniel A. Rodriguez
_Informática, Conectividad y Sistemas_
Universidad Nacional del Alto Uruguay
San Vicente - Misiones - Argentina
informatica.unau.edu.ar

From nginx-forum at forum.nginx.org  Fri Jan 28 18:17:34 2022
From: nginx-forum at forum.nginx.org (hablutzel1)
Date: Fri, 28 Jan 2022 13:17:34 -0500
Subject: "ssl_stapling" without configured "resolver" caches responder IP indefinitely
Message-ID: <6f6dcc053419edb3dbcf5eba1594283c.NginxMailingListEnglish@forum.nginx.org>

Hi, while testing the latest NGINX source code around ~1.21.7, I’ve
observed that enabling "ssl_stapling" without configuring a “resolver”
makes NGINX cache the OCSP responder IP indefinitely. So, if the CA
later changes the OCSP responder IP, NGINX will still try to send OCSP
queries to the old IP (possibly inoperative now), irrespective of the
DNS record TTL.

Now, I'm aware of
https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_stapling
saying:

> For a resolution of the OCSP responder hostname, the resolver
> directive should also be specified.

And effectively, using the “resolver” directive, OCSP DNS records are
refreshed, but it is not at all obvious what happens if a "resolver" is
not configured. Is there any documentation on this? Additionally, what
is the reason not to use the default system DNS resolver in the
standard way (i.e. respecting DNS TTLs) instead of performing the
resolution only once when no "resolver" is configured?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,293525,293525#msg-293525

From mdounin at mdounin.ru  Fri Jan 28 22:05:59 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 29 Jan 2022 01:05:59 +0300
Subject: "ssl_stapling" without configured "resolver" caches responder IP indefinitely
In-Reply-To: <6f6dcc053419edb3dbcf5eba1594283c.NginxMailingListEnglish@forum.nginx.org>
References: <6f6dcc053419edb3dbcf5eba1594283c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Fri, Jan 28, 2022 at 01:17:34PM -0500, hablutzel1 wrote:

> Hi, while testing the latest NGINX source code around ~1.21.7, I’ve
> observed that enabling "ssl_stapling" without configuring a “resolver”,
> makes NGINX cache the OCSP responder IP indefinitely, so, if the CA
> later changes the OCSP responder IP, NGINX is still going to try to
> get OCSP queries from the old IP (possibly inoperative now),
> irrespective of the DNS record TTL.
>
> Now, I'm aware of
> https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_stapling
> saying:
>
> > For a resolution of the OCSP responder hostname, the resolver
> > directive should also be specified.
>
> And effectively, using the “resolver” directive, OCSP DNS records are
> refreshed, but it is not obvious at all what is going to happen if a
> "resolver" is not configured. Is there any documentation on this?
> Additionally, what is the reason to not use the default system DNS
> resolvers in the standard way (i.e. respecting DNS TTLs) instead of
> performing the resolution only once when no "resolver" is configured?

The standard system resolver does not provide a non-blocking interface,
which makes it unusable for nginx at runtime.
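[Editor's note: a minimal sketch of the documented approach — pairing "ssl_stapling" with an explicit "resolver" so the OCSP responder hostname is re-resolved per its DNS TTL. Server name, resolver address, and certificate paths below are placeholders, not values from this thread:]

```nginx
server {
    listen 443 ssl;
    server_name example.com;                              # placeholder

    ssl_certificate     /etc/nginx/certs/fullchain.pem;   # placeholder path
    ssl_certificate_key /etc/nginx/certs/privkey.pem;     # placeholder path

    # Enable OCSP stapling and verification of the stapled response.
    ssl_stapling on;
    ssl_stapling_verify on;

    # Without a resolver, nginx resolves the responder hostname only once.
    # With one, records are refreshed; "valid=" would override DNS TTLs,
    # so omit it to honor each record's own TTL.
    resolver 127.0.0.53;                                  # placeholder resolver
    resolver_timeout 5s;
}
```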
-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Sat Jan 29 02:29:54 2022
From: nginx-forum at forum.nginx.org (hablutzel1)
Date: Fri, 28 Jan 2022 21:29:54 -0500
Subject: "ssl_stapling" without configured "resolver" caches responder IP indefinitely
In-Reply-To: 
References: 
Message-ID: <6633ac5cfffc8a6d6cd81e84d0d3dce3.NginxMailingListEnglish@forum.nginx.org>

Hi Maxim, I'm not really familiar with the NGINX source code, or with
the C language for that matter, so could you please provide more detail
on why NGINX requires a non-blocking DNS resolver? Couldn't it rely on
child processes or threads to avoid blocking?

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,293525,293529#msg-293529

From osa at freebsd.org.ru  Sat Jan 29 03:02:38 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Sat, 29 Jan 2022 06:02:38 +0300
Subject: "ssl_stapling" without configured "resolver" caches responder IP indefinitely
In-Reply-To: <6633ac5cfffc8a6d6cd81e84d0d3dce3.NginxMailingListEnglish@forum.nginx.org>
References: <6633ac5cfffc8a6d6cd81e84d0d3dce3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi,

let me add 2 cents to the topic.

On Fri, Jan 28, 2022 at 09:29:54PM -0500, hablutzel1 wrote:
> Hi Maxim, I'm not really familiar with the NGINX source code, or with
> the C language for that matter, so could you please provide more
> detail on why NGINX requires a non-blocking DNS resolver? Couldn't it
> rely on child processes or threads to avoid blocking?

There are many resources available on the internet about non-blocking
IO, threads and so on; here's one of them, about applications that
handle tens of thousands of requests per second:
https://medium.com/ing-blog/how-does-non-blocking-io-work-under-the-hood-6299d2953c74

I'd highly recommend reading and understanding it. Also, here's a great
article I'd recommend reading right after the previous one:
http://www.catb.org/~esr/faqs/smart-questions.html

Hope it helps.

Thank you.

-- 
Sergey A. Osokin

From zach at balbecfilms.com  Sat Jan 29 06:08:45 2022
From: zach at balbecfilms.com (Zach Rait)
Date: Fri, 28 Jan 2022 22:08:45 -0800
Subject: auth_request sub requests not using upstream keepalive
Message-ID: 

Hi--

I was exploring using auth_request from the ngx_http_auth_request_module,
and I have encountered some unexpected behavior with regard to HTTP
keepalive/connection reuse. I have some configuration that looks roughly
like this:

location = /auth_check {
    proxy_pass_request_body off;
    proxy_set_header Content-Length '';
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_pass https://upstream_with_keepalive_confugred;
}

location /private {
    auth_request /auth_check;
    proxy_pass http://some_backend;
}

When I make a series of requests to /auth_check, nginx uses an existing
connection, as confirmed by tcpdump, but when I make a series of requests
to /private, each /auth_check closes the TCP connection at the end and
then creates a new one for the following request. In my particular
use-case this leads to approximately double the latency of the calls
that use auth_request. Is this the expected behavior/a known issue with
auth_request/HTTP subrequests in general?

Thank you,
Zach
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Sun Jan 30 09:01:31 2022
From: nginx-forum at forum.nginx.org (unikron)
Date: Sun, 30 Jan 2022 04:01:31 -0500
Subject: How to make nginx fail if it can't write to cache dir.
In-Reply-To: <000a01d81387$6e036aa0$4a0a3fe0$@roze.lv>
References: <000a01d81387$6e036aa0$4a0a3fe0$@roze.lv>
Message-ID: <653c3b3f641547d0cd1ec032dad48a0c.NginxMailingListEnglish@forum.nginx.org>

An error on the base dir is critical to all cache requests, so no
request will succeed. I know I can write a script to monitor it, but
that seems like the wrong approach.
I would like nginx to have the option to stop if it can't do its job as
it should, but I guess that if no one has answered with that kind of
solution, it doesn't exist in nginx.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,293477,293533#msg-293533

From mdounin at mdounin.ru  Sun Jan 30 20:30:46 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 30 Jan 2022 23:30:46 +0300
Subject: auth_request sub requests not using upstream keepalive
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Fri, Jan 28, 2022 at 10:08:45PM -0800, Zach Rait wrote:

> I was exploring using auth_request from the ngx_http_auth_request_module,
> and I have encountered some unexpected behavior with regard to HTTP
> keepalive/connection reuse. I have some configuration that looks roughly
> like this:
>
> location = /auth_check {
>     proxy_pass_request_body off;
>     proxy_set_header Content-Length '';
>     proxy_http_version 1.1;
>     proxy_set_header Connection '';
>     proxy_pass https://upstream_with_keepalive_confugred;
> }
>
> location /private {
>     auth_request /auth_check;
>     proxy_pass http://some_backend;
> }
>
> When I make a series of requests to /auth_check, nginx uses an existing
> connection as confirmed by tcpdump, but when I make a series of requests
> to /private, each /auth_check is closing the TCP connection at the end
> and then creating a new one for the following request. In my particular
> use-case this leads to approximately double the latency of the calls
> that use auth_request. Is this the expected behavior/a known issue with
> auth_request/http subrequests in general?

Make sure your backend uses "Content-Length: 0" or status code 204 in
responses to auth subrequests. Even with keepalive configured, nginx
closes the upstream connection if the response contains a response body
nginx is not going to read, as it is often cheaper to reopen a
connection than to wait for the unneeded response body.

-- 
Maxim Dounin
http://mdounin.ru/
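[Editor's note: a minimal sketch of what this advice implies, assuming the auth backend is itself nginx; the location, upstream name, and address below are hypothetical, not from this thread:]

```nginx
# Hypothetical auth endpoint on the backend: a bodyless 204 reply leaves
# nothing for the proxy to discard, so the connection stays reusable.
location = /auth_check_backend {
    # ... real authentication logic would go here ...
    return 204;
}

# On the proxy side, connection reuse also requires a keepalive pool in
# the upstream block that proxy_pass targets.
upstream auth_backend {        # hypothetical name
    server 192.0.2.10:443;     # placeholder address
    keepalive 16;              # idle connections kept open per worker
}
```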