From mdounin at mdounin.ru Thu Dec 1 01:07:40 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2022 04:07:40 +0300 Subject: [PATCH v2] Removed the casts within ngx_memcmp() In-Reply-To: <20221130121715.45yx5g4upwxhrs3k@Y9MQ9X2QVV> References: <20221108105539.3924-1-alx@nginx.com> <20221130121715.45yx5g4upwxhrs3k@Y9MQ9X2QVV> Message-ID: Hello! On Wed, Nov 30, 2022 at 04:17:15PM +0400, Sergey Kandaurov wrote: > On Wed, Nov 09, 2022 at 06:03:24PM +0300, Maxim Dounin wrote: > > # HG changeset patch > > # User Maxim Dounin > > # Date 1668004692 -10800 > > # Wed Nov 09 17:38:12 2022 +0300 > > # Node ID fc79ea0724a92c1f463625a11ed4cb785cd342b7 > > # Parent 42bc158a47ecb3c2bd0396c723c307c757f2770e > > Fixed alignment of ngx_memmove()/ngx_movemem() macro definitions. > > > > diff --git a/src/core/ngx_string.h b/src/core/ngx_string.h > > --- a/src/core/ngx_string.h > > +++ b/src/core/ngx_string.h > > @@ -140,8 +140,8 @@ ngx_copy(u_char *dst, u_char *src, size_ > > #endif > > > > > > -#define ngx_memmove(dst, src, n) (void) memmove(dst, src, n) > > -#define ngx_movemem(dst, src, n) (((u_char *) memmove(dst, src, n)) + (n)) > > +#define ngx_memmove(dst, src, n) (void) memmove(dst, src, n) > > +#define ngx_movemem(dst, src, n) (((u_char *) memmove(dst, src, n)) + (n)) > > > > > > /* msvc and icc7 compile memcmp() to the inline loop */ > > # HG changeset patch > > # User Maxim Dounin > > # Date 1668005196 -10800 > > # Wed Nov 09 17:46:36 2022 +0300 > > # Node ID 5269880f00df1e5ae08299165ec43435b759c5a3 > > # Parent fc79ea0724a92c1f463625a11ed4cb785cd342b7 > > Removed casts from ngx_memcmp() macro. > > > > Casts are believed to be not needed, since memcmp() has "const void *" > > arguments since introduction of the "void" type in C89. And on pre-C89 > > platforms nginx is unlikely to compile without warnings anyway, as there > > are no casts in memcpy() and memmove() calls. 
> > > > These casts were added in 1648:89a47f19b9ec without any details on why they > > were added, and Igor does not remember details either. The most plausible > > explanation is that they were copied from ngx_strcmp() and were not really > > needed even at that time. > > > > Prodded by Alejandro Colomar. > > > > diff --git a/src/core/ngx_string.h b/src/core/ngx_string.h > > --- a/src/core/ngx_string.h > > +++ b/src/core/ngx_string.h > > @@ -145,7 +145,7 @@ ngx_copy(u_char *dst, u_char *src, size_ > > > > > > /* msvc and icc7 compile memcmp() to the inline loop */ > > -#define ngx_memcmp(s1, s2, n) memcmp((const char *) s1, (const char *) s2, n) > > +#define ngx_memcmp(s1, s2, n) memcmp(s1, s2, n) > > > > > > u_char *ngx_cpystrn(u_char *dst, u_char *src, size_t n); > > > > Looks good. Pushed to http://mdounin.ru/hg/nginx, thanks. > Indeed, old memcmp definition is traced back to pre-ANSI. > In particular, you can find old implementation in 4.3BSD-Tahoe > (named as "Routines described in memory(BA_LIB); System V compatibility") > that uses "char *" arguments, until they were rewritten in 4.3BSD-Reno > in ANSI C by Chris Torek. From unix-history-repo it looks like there was no memcmp() in string.h in 4.3BSD-Tahoe release, and memcmp() was instead available via memory.h, without any prototypes: https://github.com/dspinellis/unix-history-repo/blob/BSD-4_3_Tahoe/usr/src/include/string.h https://github.com/dspinellis/unix-history-repo/blob/BSD-4_3_Tahoe/usr/src/include/memory.h And in 4.3BSD-Reno release it was already using "const void *". https://github.com/dspinellis/unix-history-repo/blob/BSD-4_3_Reno/usr/src/include/string.h Looks like the variant with "const char *" prototypes available via string.h was only present shortly between these two versions. This seems to match CSRG archive CDs from Kirk McKusick. > Also, it seems that VC 6.0 has memcmp with non-const void argument > as pre-C++98 (but I cannot support this claim with real facts). 
It does not look like MSVC 6.0 can be downloaded from Microsoft now, though CD copies are available at least at: https://winworldpc.com/product/visual-c/6x https://archive.org/details/microsoft-visual-c-6.0-standard-edition And memcmp() there is defined as follows: $ grep memcmp string.h _CRTIMP int __cdecl memcmp(const void *, const void *, size_t); int __cdecl memcmp(const void *, const void *, size_t); Though I'm pretty sure we don't care about anything older than MSVC2005 (which can still be downloaded from Microsoft, BTW). -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Dec 1 01:21:23 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2022 04:21:23 +0300 Subject: [PATCH v2] Removed the casts within ngx_memcmp() In-Reply-To: References: <20221108105539.3924-1-alx@nginx.com> Message-ID: Hello! On Wed, Nov 30, 2022 at 11:49:18AM +0100, Alex Colomar wrote: > On 11/30/22 05:22, Maxim Dounin wrote: > > Hello! > > > > Ping. > > Sorry, I didn't know you were waiting for my confirmation. Not exactly yours. As per the project policy, patches need to be approved by other core developers before being committed. Other reviews are appreciated though, especially from the authors of original submissions. > >> # HG changeset patch > >> # User Maxim Dounin > >> # Date 1668004692 -10800 > >> # Wed Nov 09 17:38:12 2022 +0300 > >> # Node ID fc79ea0724a92c1f463625a11ed4cb785cd342b7 > >> # Parent 42bc158a47ecb3c2bd0396c723c307c757f2770e > >> Fixed alignment of ngx_memmove()/ngx_movemem() macro definitions. > > Makes sense. 
> > >> > >> diff --git a/src/core/ngx_string.h b/src/core/ngx_string.h > >> --- a/src/core/ngx_string.h > >> +++ b/src/core/ngx_string.h > >> @@ -140,8 +140,8 @@ ngx_copy(u_char *dst, u_char *src, size_ > >> #endif > >> > >> > >> -#define ngx_memmove(dst, src, n) (void) memmove(dst, src, n) > >> -#define ngx_movemem(dst, src, n) (((u_char *) memmove(dst, src, n)) + (n)) > >> +#define ngx_memmove(dst, src, n) (void) memmove(dst, src, n) > >> +#define ngx_movemem(dst, src, n) (((u_char *) memmove(dst, src, n)) + (n)) > >> > >> > >> /* msvc and icc7 compile memcmp() to the inline loop */ > >> # HG changeset patch > >> # User Maxim Dounin > >> # Date 1668005196 -10800 > >> # Wed Nov 09 17:46:36 2022 +0300 > >> # Node ID 5269880f00df1e5ae08299165ec43435b759c5a3 > >> # Parent fc79ea0724a92c1f463625a11ed4cb785cd342b7 > >> Removed casts from ngx_memcmp() macro. > >> > >> Casts are believed to be not needed, since memcmp() has "const void *" > >> arguments since introduction of the "void" type in C89. And on pre-C89 > >> platforms nginx is unlikely to compile without warnings anyway, as there > >> are no casts in memcpy() and memmove() calls. > >> > >> These casts were added in 1648:89a47f19b9ec without any details on why they > >> were added, and Igor does not remember details either. The most plausible > >> explanation is that they were copied from ngx_strcmp() and were not really > >> needed even at that time. > >> > >> Prodded by Alejandro Colomar. > > And of course, this patch LGTM :) Thanks, committed and will be available in the next release. Thanks for prodding this. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Dec 1 01:41:20 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 1 Dec 2022 04:41:20 +0300 Subject: [PATCH 0 of 2] unbuffered proxying CPU hog (ticket #2418) In-Reply-To: <8BB66B75-7321-4A51-9E97-FF66A3FD1A9D@nginx.com> References: <8BB66B75-7321-4A51-9E97-FF66A3FD1A9D@nginx.com> Message-ID: Hello! 
On Wed, Nov 30, 2022 at 07:29:35PM +0400, Sergey Kandaurov wrote: > > On 29 Nov 2022, at 02:12, Maxim Dounin wrote: > > > > Hello! > > > > The following patch fixes CPU hog observed with unbuffered SSL proxying > > after SSL errors (ticket #2418). Fix is to always clear c->read->ready > > flag when returning errors from ngx_ssl_recv(). > > > > An additional patch cleans up some win32-specific edge cases (not expected > > to appear in practice though) when c->read->ready is not cleared when > > errors or EOFs are returned from ngx_wsarecv() and ngx_wsarecv_chain(). > > > > Hello! > > Both patches look good to me. Pushed to http://mdounin.ru/hg/nginx, thanks. -- Maxim Dounin http://mdounin.ru/ From lishu.zy at alibaba-inc.com Thu Dec 1 10:57:54 2022 From: lishu.zy at alibaba-inc.com (朱宇(黎叔)) Date: Thu, 01 Dec 2022 18:57:54 +0800 Subject: Re: QUIC: position of RTT and congestion In-Reply-To: References: <56b10dbb.3bca.184c8720fd8.Coremail.yolkking@126.com>, Message-ID: <55fc5eba-10fc-4c6a-8431-0ff3a8317c53.lishu.zy@alibaba-inc.com> Thank you for the reply. And one question follows: will patches for such features from the community be accepted, or should we wait for official updates? ------------------ Original message ------------------ From: Vladimir Homutov via nginx-devel Sent: Wed Nov 30 20:33:18 2022 To: Vladimir Homutov via nginx-devel Cc: Vladimir Homutov Subject: Re: QUIC: position of RTT and congestion On Wed, Nov 30, 2022 at 08:10:29PM +0800, Yu Zhu wrote: > > Hi, > > As described in "rfc 9002 6. Loss Detection", "RTT and congestion > control are properties of the path", so moving first_rtt, > latest_rtt, avg_rtt, min_rtt, rttvar and congestion from > ngx_quic_connection_t to struct ngx_quic_path_t looks more > reasonable? yes, you are right. Currently per-path calculations are not implemented, as well as path mtu discovery and some other things. 
_______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitry.petroff at gmail.com Fri Dec 2 11:07:43 2022 From: dmitry.petroff at gmail.com (Dmitry Petrov) Date: Fri, 2 Dec 2022 14:07:43 +0300 Subject: Using cmake with nginx Message-ID: Hello! I'm sure many developers have faced issues with manual dependency handling in the nginx build system. I've tried several workarounds which were semi-acceptable for personal use, but definitely not something that you would offer other people to use. Then I thought that it would be easy to add an objs/Makefile -> CMakeLists.txt converter, because the makefile is strictly formatted, but what if we add a primitive CMakeLists.txt generator to auto/make? I've made a quick and dirty prototype and it was quite successful (for *nix environments, as it relies on sed-ing compiler/linker flags). I'm attaching a small patch of that proof-of-concept attempt (based on v 1.17.10). And I just want to ask if the community is interested in having some kind of cmake support. The advantages for developers are:
- automatic dependency handling
- easier integration with clang-based tools (cmake can generate compile_commands.json on its own rather than using tools such as https://github.com/rizsotto/Bear)
- cmake can target not only makefiles (for example Visual Studio, if anyone uses it)
- it's easier to use C++ with cmake, as it inherently has the ability to use different sets of compiler flags for the C and C++ compilers
-- Regards, Dmitry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: auto-cmake.diff Type: text/x-patch Size: 3868 bytes Desc: not available URL: From mdounin at mdounin.ru Fri Dec 2 16:05:55 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 2 Dec 2022 19:05:55 +0300 Subject: Using cmake with nginx In-Reply-To: References: Message-ID: Hello! On Fri, Dec 02, 2022 at 02:07:43PM +0300, Dmitry Petrov wrote: > I'm sure many developers have faced issues with manual dependency handling > in the nginx build system. I've tried several workarounds which were > semi-acceptable for personal use, but definitely not something that you > would offer other people to use. You may want to be more specific about issues you are talking about. In general, nginx build system requires little to no effort from developers. [...] > I'm attaching a small patch of that proof-of-concept attempt (based on v > 1.17.10). And I just want to ask if the community is interested in having > some kind of cmake support. The advantages for developers are: There are no plans to support CMake. -- Maxim Dounin http://mdounin.ru/ From yefei.dyf at alibaba-inc.com Sat Dec 3 15:16:44 2022 From: yefei.dyf at alibaba-inc.com (=?UTF-8?B?5p2c5Y+26aOeKOa3ruWPtik=?=) Date: Sat, 03 Dec 2022 23:16:44 +0800 Subject: =?UTF-8?B?Rml4ZWQgZ3ppcF9kaXNhYmxlX2RlZ3JhZGF0aW9uIGRlZmluZWQgd2l0aG91dCBOR1hfSFRU?= =?UTF-8?B?UF9ERUdSQURBVElPTiAoYnJva2VuIGJ5IDNiNTIyZDdhNWIzNCku?= Message-ID: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com> Hello! I think gzip_disable_degradation needs NGX_HTTP_DEGRADATION in order to be consistent with where used. details: https://hg.nginx.org/nginx/rev/3b522d7a5b34 # User BullerDu # Date 1670079834 -28800 # Sat Dec 03 23:03:54 2022 +0800 # Branch bugfix # Node ID 64a105315b9e5dc20dab2416caeb6b3481a460d1 # Parent 0b360747c74e3fa7e439e0684a8cf1da2d14d8f6 Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). 
diff -r 0b360747c74e -r 64a105315b9e src/http/ngx_http_core_module.h --- a/src/http/ngx_http_core_module.h Thu Nov 24 23:08:30 2022 +0400 +++ b/src/http/ngx_http_core_module.h Sat Dec 03 23:03:54 2022 +0800 @@ -315,8 +315,10 @@ unsigned auto_redirect:1; #if (NGX_HTTP_GZIP) unsigned gzip_disable_msie6:2; +#if (NGX_HTTP_DEGRADATION) unsigned gzip_disable_degradation:2; #endif +#endif ngx_http_location_tree_node_t *static_locations; #if (NGX_PCRE) -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Sun Dec 4 04:53:53 2022 From: maxim at nginx.com (Maxim Konovalov) Date: Sat, 3 Dec 2022 20:53:53 -0800 Subject: QUIC: position of RTT and congestion In-Reply-To: <55fc5eba-10fc-4c6a-8431-0ff3a8317c53.lishu.zy@alibaba-inc.com> References: <56b10dbb.3bca.184c8720fd8.Coremail.yolkking@126.com> <55fc5eba-10fc-4c6a-8431-0ff3a8317c53.lishu.zy@alibaba-inc.com> Message-ID: Hi, Your patches are more than welcome. Maxim On 01.12.2022 02:57, 朱宇(黎叔) via nginx-devel wrote: > Thank you for the reply. And one question follows: > > Will patches for such features from the community be accepted, or should > we wait for official updates? > > ------------------ Original message ------------------ > *From:* Vladimir Homutov via nginx-devel > *Sent:* Wed Nov 30 20:33:18 2022 > *To:* Vladimir Homutov via nginx-devel > *Cc:* Vladimir Homutov > *Subject:* Re: QUIC: position of RTT and congestion > > On Wed, Nov 30, 2022 at 08:10:29PM +0800, Yu Zhu wrote: > > > > Hi, > > > > As described in "rfc 9002 6. Loss Detection", "RTT and congestion > > control are properties of the path", so moving first_rtt, > > latest_rtt, avg_rtt, min_rtt, rttvar and congestion from > > ngx_quic_connection_t to struct ngx_quic_path_t looks more > > reasonable? > > yes, you are right. > > Currently per-path calculations are not implemented, as well as path mtu > discovery and some other things. 
> > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > > > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org -- Maxim Konovalov From mdounin at mdounin.ru Sun Dec 4 20:09:31 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 4 Dec 2022 23:09:31 +0300 Subject: Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). In-Reply-To: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com> References: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com> Message-ID: Hello! On Sat, Dec 03, 2022 at 11:16:44PM +0800, 杜叶飞(淮叶) via nginx-devel wrote: > I think gzip_disable_degradation needs NGX_HTTP_DEGRADATION in order to be consistent with where used. > details: https://hg.nginx.org/nginx/rev/3b522d7a5b34 The revision you've linked explains why this "#if" is not really needed even if we are concerned about saving these two bits in the location configuration structure (and we aren't really concerned anyway). Further, the patch you've suggested breaks binary compatibility between nginx builds with and without the degradation module without restoring appropriate flag in the binary signature. This is clearly incorrect behaviour which can result in segmentation faults or other unexpected behaviour if modules compiled with different assumptions are loaded into nginx. -- Maxim Dounin http://mdounin.ru/ From dmitry.petroff at gmail.com Sun Dec 4 20:12:55 2022 From: dmitry.petroff at gmail.com (Dmitry Petrov) Date: Sun, 4 Dec 2022 23:12:55 +0300 Subject: Using cmake with nginx In-Reply-To: References: Message-ID: >You may want to be more specific about issues you are talking about. In general, nginx build system requires little to no effort from developers. 
I'm speaking about manual vs automatic source-level dependency handling. For example, CORE_DEPS is an easy but inaccurate cross-platform hack. The same is true for ADDON_DEPS: if you add any header here, all addons will be rebuilt on that header change. That's not a big deal, because C compiles fast enough and ccache could be used to mitigate "false positives" of hand-written deps. But there's room for improvement for sure. For example, I had build issues because of forgetting to include headers into core/addon deps after splitting addon into a set of header/source files. >There are no plans to support CMake. Targeting any modern C-oriented build system would add many benefits including "standardized" routines for library detection, dependency handling, etc. Are there any plans on adding "per-addon header deps" so changes to a single header don't result in all addons recompiling? On Fri, Dec 2, 2022 at 7:07 PM Maxim Dounin wrote: > Hello! > > On Fri, Dec 02, 2022 at 02:07:43PM +0300, Dmitry Petrov wrote: > > > I'm sure many developers have faced issues with manual dependency > handling > > in the nginx build system. I've tried several workarounds which were > > semi-acceptable for personal use, but definitely not something that you > > would offer other people to use. > > You may want to be more specific about issues you are talking > about. In general, nginx build system requires little to no > effort from developers. > > [...] > > > I'm attaching a small patch of that proof-of-concept attempt (based on v > > 1.17.10). And I just want to ask if the community is interested in having > > some kind of cmake support. The advantages for developers are: > > There are no plans to support CMake. 
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -- Regards, Dmitry -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Dec 4 21:10:21 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 5 Dec 2022 00:10:21 +0300 Subject: Using cmake with nginx In-Reply-To: References: Message-ID: Hello! On Sun, Dec 04, 2022 at 11:12:55PM +0300, Dmitry Petrov wrote: > >You may want to be more specific about issues you are talking > about. In general, nginx build system requires little to no > effort from developers. > > I'm speaking about manual vs automatic source-level dependency handling. > For example, CORE_DEPS is an easy but inaccurate cross-platform hack. The > same is true for ADDON_DEPS: if you add any header here, all addons will be > rebuilt on that header change. > That's not a big deal, because C compiles fast enough and ccache could be > used to mitigate "false positives" of hand-written deps. But there's room > for improvement for sure. > For example, I had build issues because of forgetting to include headers > into core/addon deps after splitting addon into a set of header/source > files. So, the issues you are talking about is the fact that nginx won't magically find out which header files you are using, but instead requires you to list them in the ngx_module_deps variable when calling the auto/module in the module config script, correct? Thanks for the explanation. > >There are no plans to support CMake. > Targeting any modern C-oriented build system would add many benefits > including "standardized" routines for library detection, dependency > handling, etc. The "make" build system as used by nginx is a modern, C-oriented build system. Further, it is standardized, and provides as much portability as no other build system known. 
Shell, as used to generate makefiles, is believed to be simple enough to implement any needed library detection. > Are there any plans on adding "per-addon header deps" so changes to a > single header don't result in all addons recompiling? As of now, there are no such plans. Just rebuilding all the modules if a header changes is believed to be fast enough approach. Further, it is usually trivial to configure a build environment for development which does not contain any additional modules, reducing potential benefits from such a change. -- Maxim Dounin http://mdounin.ru/ From alx.manpages at gmail.com Sun Dec 4 21:10:41 2022 From: alx.manpages at gmail.com (Alejandro Colomar) Date: Sun, 4 Dec 2022 22:10:41 +0100 Subject: Using cmake with nginx In-Reply-To: References: Message-ID: <6a9f3642-c23e-cef6-bd9c-00e420ab3f5c@gmail.com> Hi Dmitry, On 12/4/22 21:12, Dmitry Petrov wrote: > >You may want to be more specific about issues you are talking > about.  In general, nginx build system requires little to no > effort from developers. > > I'm speaking about manual vs automatic source-level dependency handling. For > example, CORE_DEPS is an easy but inaccurate cross-platform hack. The same is > true for ADDON_DEPS: if you add any header here, all addons will be rebuilt on > that header change. > That's not a big deal, because C compiles fast enough and ccache could be used > to mitigate "false positives" of hand-written deps. But there's room for > improvement for sure. > For example, I had build issues because of forgetting to include headers into > core/addon deps after splitting addon into a set of header/source files. > > >There are no plans to support CMake. > Targeting any modern C-oriented build system would add many benefits including > "standardized" routines for library detection, dependency handling, etc. > Are there any plans on adding "per-addon header deps" so changes to a single > header don't result in all addons recompiling? 
> There's no need for "modern" build systems. GNU Make has been capable of doing that for decades. See the Linux man-pages Makefile as an example. And you don't need to learn complex (I'd call CMake unnecessarily complex, rather than modern) stuff. And of course, shell scripts generating Makefiles can also do that (and even more portably than GNU Make). Autotools and CMake are both super-complex systems that IMO just make the build system more complex to understand, compared to sh(1)+make(1) or GNU Make. Cheers, Alex -- -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From dmitry.petroff at gmail.com Mon Dec 5 00:16:50 2022 From: dmitry.petroff at gmail.com (Dmitry Petrov) Date: Mon, 5 Dec 2022 03:16:50 +0300 Subject: Using cmake with nginx In-Reply-To: References: Message-ID: >So, the issues you are talking about is the fact that nginx won't magically find out which header files you are using, but instead requires you to list them in the ngx_module_deps variable when calling the auto/module in the module config script, correct? That's not magic. This is what make + gcc/clang/icc can do on *nix. The general pattern is simple:

-include $dep

$obj $dep: $src
	$(CC) ... -MMD -MP -MT $obj -MT $dep ...

This would make the compiler generate a dependency file together with an object file, and make would include/update it when needed. I'm not aware if nginx supports compilers other than gcc/clang, but this trick would work for said compilers. icc mimics gcc on Linux, so generally it will work there too. I'm attaching objs/ngx_modules.d being automatically generated by this approach. That would be a really simple change to auto/make, but it would work much better than _*DEPS. I'm providing a proof-of-working auto/make script from nginx-1.20.2 which uses this technique. You can check how it differs by observing that only several files get recompiled after touch src/core/ngx_md5.h && make. This example should handle addon dependencies too. On Mon, Dec 5, 2022 at 12:10 AM Maxim Dounin wrote: > Hello! > > On Sun, Dec 04, 2022 at 11:12:55PM +0300, Dmitry Petrov wrote: > > > >You may want to be more specific about issues you are talking > > about. In general, nginx build system requires little to no > > effort from developers. > > > > I'm speaking about manual vs automatic source-level dependency handling. > > For example, CORE_DEPS is an easy but inaccurate cross-platform hack. The > > same is true for ADDON_DEPS: if you add any header here, all addons will > be > > rebuilt on that header change. > > That's not a big deal, because C compiles fast enough and ccache could be > > used to mitigate "false positives" of hand-written deps. But there's room > > for improvement for sure. > > For example, I had build issues because of forgetting to include headers > > into core/addon deps after splitting addon into a set of header/source > > files. > > So, the issues you are talking about is the fact that nginx won't > magically find out which header files you are using, but instead > requires you to list them in the ngx_module_deps variable when > calling the auto/module in the module config script, correct? > > Thanks for the explanation. > > > >There are no plans to support CMake. > > Targeting any modern C-oriented build system would add many benefits > > including "standardized" routines for library detection, dependency > > handling, etc. > > The "make" build system as used by nginx is a modern, C-oriented > build system. Further, it is standardized, and provides as much > portability as no other build system known. Shell, as used to > generate makefiles, is believed to be simple enough to implement > any needed library detection. > > > Are there any plans on adding "per-addon header deps" so changes to a > > single header don't result in all addons recompiling? > > As of now, there are no such plans. Just rebuilding all the > modules if a header changes is believed to be fast enough > approach. Further, it is usually trivial to configure a build > environment for development which does not contain any additional > modules, reducing potential benefits from such a change. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org -- Regards, Dmitry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ngx_modules.d Type: text/x-dsrc Size: 2598 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: auto-make-1.20.2.gz Type: application/gzip Size: 2785 bytes Desc: not available URL: From yefei.dyf at alibaba-inc.com Mon Dec 5 01:09:13 2022 From: yefei.dyf at alibaba-inc.com (杜叶飞(淮叶)) Date: Mon, 05 Dec 2022 09:09:13 +0800 Subject: Re: Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). In-Reply-To: References: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com>, Message-ID: OK, I get it. Thanks for your answer. ------------------------------------------------------------------ From: Maxim Dounin Sent: Mon, 5 Dec 2022 04:13 To: nginx-devel Subject: Re: Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). Hello! On Sat, Dec 03, 2022 at 11:16:44PM +0800, 杜叶飞(淮叶) via nginx-devel wrote: > I think gzip_disable_degradation needs NGX_HTTP_DEGRADATION in order to be consistent with where used. 
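The flattened pattern quoted in Dmitry's message, written out as a complete GNU make fragment. File names here are hypothetical examples, not nginx's real layout, and in nginx proper such rules would be emitted by auto/make rather than written by hand:

```make
# Compiler-generated dependency tracking with gcc/clang:
# -MMD writes a .d makefile fragment next to each object (user headers
# only), -MP adds phony targets so a deleted header does not break
# the build, -MF names the output file.

SRCS := src/core/ngx_md5.c src/core/ngx_string.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

nginx: $(OBJS)
	$(CC) -o $@ $(OBJS)

%.o: %.c
	$(CC) -c -MMD -MP -MT $@ -MF $(@:.o=.d) -o $@ $<

# Include the generated fragments if they exist; after the first
# build, touching any header rebuilds only the objects that include it.
-include $(DEPS)
```

The first build produces the .d files as a side effect of compilation, so there is no separate "make depend" step; subsequent builds get exact per-object header dependencies for free.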
> details: https://hg.nginx.org/nginx/rev/3b522d7a5b34 The revision you've linked explains why this "#if" is not really needed even if we are concerned about saving these two bits in the location configuration structure (and we aren't really concerned anyway). Further, the patch you've suggested breaks binary compatibility between nginx builds with and without the degradation module without restoring the appropriate flag in the binary signature. This is clearly incorrect behaviour which can result in segmentation faults or other unexpected behaviour if modules compiled with different assumptions are loaded into nginx. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From yefei.dyf at alibaba-inc.com Mon Dec 5 01:39:21 2022 From: yefei.dyf at alibaba-inc.com (杜叶飞(淮叶)) Date: Mon, 05 Dec 2022 09:39:21 +0800 Subject: Re: Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). In-Reply-To: References: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com>, Message-ID: <0bca93a5-824f-46cf-822a-53994f1e5a3e.yefei.dyf@alibaba-inc.com> OK, I have another question about this. Should 'NGX_MODULE_SIGNATURE_27' be tied to NGX_HTTP_GZIP, as in the following patch? # HG changeset patch # User BullerDu # Date 1670204057 -28800 # Mon Dec 05 09:34:17 2022 +0800 # Branch bugfix_dynamic_module # Node ID 5e63c3cf514e64a5f6499b66613d4935bd026d5f # Parent 0b360747c74e3fa7e439e0684a8cf1da2d14d8f6 Modules compatibility: degradation signature must be with NGX_HTTP_GZIP (introduced by 3b522d7a5b34). 
diff -r 0b360747c74e -r 5e63c3cf514e src/core/ngx_module.h --- a/src/core/ngx_module.h Thu Nov 24 23:08:30 2022 +0400 +++ b/src/core/ngx_module.h Mon Dec 05 09:34:17 2022 +0800 @@ -149,12 +149,12 @@ #if (NGX_HTTP_GZIP) #define NGX_MODULE_SIGNATURE_26 "1" +#define NGX_MODULE_SIGNATURE_27 "1" #else #define NGX_MODULE_SIGNATURE_26 "0" +#define NGX_MODULE_SIGNATURE_27 "0" #endif -#define NGX_MODULE_SIGNATURE_27 "1" - #if (NGX_HTTP_X_FORWARDED_FOR) #define NGX_MODULE_SIGNATURE_28 "1" #else ------------------------------------------------------------------ From: Maxim Dounin Sent: Mon, 5 Dec 2022 04:13 To: nginx-devel Subject: Re: Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). Hello! On Sat, Dec 03, 2022 at 11:16:44PM +0800, 杜叶飞(淮叶) via nginx-devel wrote: > I think gzip_disable_degradation needs NGX_HTTP_DEGRADATION in order to be consistent with where used. 
URL:

From mdounin at mdounin.ru Mon Dec 5 23:02:09 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Dec 2022 02:02:09 +0300 Subject: Using cmake with nginx In-Reply-To: References: Message-ID:

Hello!

On Mon, Dec 05, 2022 at 03:16:50AM +0300, Dmitry Petrov wrote:
> >So, the issues you are talking about is the fact that nginx won't
> magically find out which header files you are using, but instead
> requires you to list them in the ngx_module_deps variable when
> calling the auto/module in the module config script, correct?
>
> That's not magic. This is what make + gcc/clang/icc can do on *nix. The
> general pattern is simple:
> -include $dep
> $obj $dep: $src
> $(CC) ... -MMD -MP -MT $obj -MT $dep ...
> This would make the compiler generate a dependency file together with an
> object file and make would include/update it when needed.
> I'm not aware if nginx supports compilers other than gcc/clang, but this
> trick would work for said compilers. icc mimics gcc on Linux, so generally
> it will work there too. I'm attaching objs/ngx_modules.d being
> automatically generated by this approach.

nginx supports a wide range of C compilers, not just gcc/clang, including ones very different from gcc, such as MSVC on Windows (used in practice to build nginx binaries for Windows) or Sun C on Solaris. Further, nginx is expected to be buildable with any POSIX-compliant C compiler, which is assumed by default. Summing up the above, the gcc/clang feature in question certainly won't make it possible to bypass listing header files manually. It might be a way to improve rebuild times on systems using gcc/clang, though I suspect it might not be worth the effort (or might even increase build times in the typical case due to additional work being done). Especially given that building nginx with default modules on a cheap i7-4770 server takes less than 5 seconds of real time, and less than 15 seconds on an old i5-8210Y laptop.
-- Maxim Dounin http://mdounin.ru/

From mdounin at mdounin.ru Tue Dec 6 01:02:42 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 6 Dec 2022 04:02:42 +0300 Subject: =?utf-8?B?5Zue5aSN77yaRml4ZWQgZ3ppcF9k?= =?utf-8?Q?isable=5Fdegradation_define?= =?utf-8?Q?d?= without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34). In-Reply-To: <0bca93a5-824f-46cf-822a-53994f1e5a3e.yefei.dyf@alibaba-inc.com> References: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com> <0bca93a5-824f-46cf-822a-53994f1e5a3e.yefei.dyf@alibaba-inc.com> Message-ID:

Hello!

On Mon, Dec 05, 2022 at 09:39:21AM +0800, 杜叶飞(淮叶) via nginx-devel wrote:
> OK, I have another question about this. Should 'NGX_MODULE_SIGNATURE_27' be
> guarded by NGX_HTTP_GZIP, as in the following patch?

No. The NGX_HTTP_GZIP macro, which affects layout of nginx public structures, already affects the binary signature, so it won't be possible to load a module built with NGX_HTTP_GZIP defined into nginx built without NGX_HTTP_GZIP (and vice versa). No additional characters in the signature are needed for NGX_HTTP_GZIP. The NGX_MODULE_SIGNATURE_27 macro is basically a spare one after 3b522d7a5b34, and can be reused for something else (or removed if we decide to clean up things).

-- Maxim Dounin http://mdounin.ru/

From yefei.dyf at alibaba-inc.com Tue Dec 6 01:55:20 2022 From: yefei.dyf at alibaba-inc.com (=?UTF-8?B?5p2c5Y+26aOeKOa3ruWPtik=?=) Date: Tue, 06 Dec 2022 09:55:20 +0800 Subject: =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77yaRml4ZWQgZ3ppcF9kaXNhYmxlX2RlZ3JhZGF0aW9uIGRlZmlu?= =?UTF-8?B?ZWQgd2l0aG91dCBOR1hfSFRUUF9ERUdSQURBVElPTiAoYnJva2VuIGJ5IDNiNTIyZDdhNWIz?= =?UTF-8?B?NCku?= In-Reply-To: References: <77c7f9a9-a1a8-457f-b881-d72b85fe9e41.yefei.dyf@alibaba-inc.com> <0bca93a5-824f-46cf-822a-53994f1e5a3e.yefei.dyf@alibaba-inc.com>, Message-ID:

OK, I have no further questions. I really hope this case doesn't happen.
" so it won't be possible to load a module built with NGX_HTTP_GZIP defined into nginx built without NGX_HTTP_GZIP (and vice versa)"

------------------------------------------------------------------
From: Maxim Dounin
Sent: Tuesday, December 6, 2022 09:03
To: nginx-devel
Subject: Re: Re: Fixed gzip_disable_degradation defined without NGX_HTTP_DEGRADATION (broken by 3b522d7a5b34).

Hello!

On Mon, Dec 05, 2022 at 09:39:21AM +0800, 杜叶飞(淮叶) via nginx-devel wrote:
> OK, I have another question about this. Should 'NGX_MODULE_SIGNATURE_27' be
> guarded by NGX_HTTP_GZIP, as in the following patch?

No. The NGX_HTTP_GZIP macro, which affects layout of nginx public structures, already affects the binary signature, so it won't be possible to load a module built with NGX_HTTP_GZIP defined into nginx built without NGX_HTTP_GZIP (and vice versa). No additional characters in the signature are needed for NGX_HTTP_GZIP. The NGX_MODULE_SIGNATURE_27 macro is basically a spare one after 3b522d7a5b34, and can be reused for something else (or removed if we decide to clean up things).

-- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pl080516 at gmail.com Tue Dec 6 14:35:37 2022 From: pl080516 at gmail.com (=?iso-2022-jp?B?GyRCPGsxJxsoQg==?=) Date: Tue, 6 Dec 2022 14:35:37 +0000 Subject: QUIC: reworked congestion control mechanism. Message-ID:

Hi,

# HG changeset patch
# User Yu Zhu
# Date 1670326031 -28800
# Tue Dec 06 19:27:11 2022 +0800
# Branch quic
# Node ID 9a47ff1223bb32c8ddb146d731b395af89769a97
# Parent 1a320805265db14904ca9deaae8330f4979619ce
QUIC: reworked congestion control mechanism.

1. Moved rtt measurement and congestion control to struct ngx_quic_path_t, because RTT and congestion control are properties of the path. 2.
Introduced struct "ngx_quic_congestion_ops_t" to wrap the callback functions of congestion control and extracted the reno algorithm from ngx_event_quic_ack.c. No functional changes.

diff -r 1a320805265d -r 9a47ff1223bb auto/make --- a/auto/make Sat Nov 19 00:31:55 2022 +0800 +++ b/auto/make Tue Dec 06 19:27:11 2022 +0800 @@ -6,7 +6,7 @@ echo "creating $NGX_MAKEFILE" mkdir -p $NGX_OBJS/src/core $NGX_OBJS/src/event $NGX_OBJS/src/event/modules \ - $NGX_OBJS/src/event/quic \ + $NGX_OBJS/src/event/quic $NGX_OBJS/src/event/quic/congestion \ $NGX_OBJS/src/os/unix $NGX_OBJS/src/os/win32 \ $NGX_OBJS/src/http $NGX_OBJS/src/http/v2 $NGX_OBJS/src/http/v3 \ $NGX_OBJS/src/http/modules $NGX_OBJS/src/http/modules/perl \ diff -r 1a320805265d -r 9a47ff1223bb auto/modules --- a/auto/modules Sat Nov 19 00:31:55 2022 +0800 +++ b/auto/modules Tue Dec 06 19:27:11 2022 +0800 @@ -1355,7 +1355,8 @@ src/event/quic/ngx_event_quic_tokens.c \ src/event/quic/ngx_event_quic_ack.c \ src/event/quic/ngx_event_quic_output.c \ - src/event/quic/ngx_event_quic_socket.c" + src/event/quic/ngx_event_quic_socket.c \ + src/event/quic/congestion/ngx_quic_reno.c" ngx_module_libs= ngx_module_link=YES diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/congestion/ngx_quic_reno.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/event/quic/congestion/ngx_quic_reno.c Tue Dec 06 19:27:11 2022 +0800 @@ -0,0 +1,133 @@ + +/* + * Copyright (C) Nginx, Inc.
+ */ + + +#include +#include +#include +#include + + +static void ngx_quic_reno_on_init(ngx_connection_t *c, ngx_quic_congestion_t *cg); +static ngx_int_t ngx_quic_reno_on_ack(ngx_connection_t *c, ngx_quic_frame_t *f); +static ngx_int_t ngx_quic_reno_on_lost(ngx_connection_t *c, ngx_quic_frame_t *f); + + +ngx_quic_congestion_ops_t ngx_quic_reno = { + ngx_string("reno"), + ngx_quic_reno_on_init, + ngx_quic_reno_on_ack, + ngx_quic_reno_on_lost +}; + + +static void +ngx_quic_reno_on_init(ngx_connection_t *c, ngx_quic_congestion_t *cg) +{ + ngx_quic_connection_t *qc; + + qc = ngx_quic_get_connection(c); + + cg->window = ngx_min(10 * qc->tp.max_udp_payload_size, + ngx_max(2 * qc->tp.max_udp_payload_size, + 14720)); + cg->ssthresh = (size_t) -1; + cg->recovery_start = ngx_current_msec; +} + + +static ngx_int_t +ngx_quic_reno_on_ack(ngx_connection_t *c, ngx_quic_frame_t *f) +{ + ngx_msec_t timer; + ngx_quic_path_t *path; + ngx_quic_connection_t *qc; + ngx_quic_congestion_t *cg; + + qc = ngx_quic_get_connection(c); + path = qc->path; + + cg = &path->congestion; + + cg->in_flight -= f->plen; + + timer = f->last - cg->recovery_start; + + if ((ngx_msec_int_t) timer <= 0) { + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic congestion ack recovery win:%uz ss:%z if:%uz", + cg->window, cg->ssthresh, cg->in_flight); + + return NGX_DONE; + } + + if (cg->window < cg->ssthresh) { + cg->window += f->plen; + + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic congestion slow start win:%uz ss:%z if:%uz", + cg->window, cg->ssthresh, cg->in_flight); + + } else { + cg->window += qc->tp.max_udp_payload_size * f->plen / cg->window; + + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic congestion avoidance win:%uz ss:%z if:%uz", + cg->window, cg->ssthresh, cg->in_flight); + } + + /* prevent recovery_start from wrapping */ + + timer = cg->recovery_start - ngx_current_msec + qc->tp.max_idle_timeout * 2; + + if ((ngx_msec_int_t) timer < 0) { + cg->recovery_start = 
ngx_current_msec - qc->tp.max_idle_timeout * 2; + } + + return NGX_OK; +} + + +static ngx_int_t +ngx_quic_reno_on_lost(ngx_connection_t *c, ngx_quic_frame_t *f) +{ + ngx_msec_t timer; + ngx_quic_path_t *path; + ngx_quic_connection_t *qc; + ngx_quic_congestion_t *cg; + + qc = ngx_quic_get_connection(c); + path = qc->path; + + cg = &path->congestion; + + cg->in_flight -= f->plen; + f->plen = 0; + + timer = f->last - cg->recovery_start; + + if ((ngx_msec_int_t) timer <= 0) { + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic congestion lost recovery win:%uz ss:%z if:%uz", + cg->window, cg->ssthresh, cg->in_flight); + + return NGX_DONE; + } + + cg->recovery_start = ngx_current_msec; + cg->window /= 2; + + if (cg->window < qc->tp.max_udp_payload_size * 2) { + cg->window = qc->tp.max_udp_payload_size * 2; + } + + cg->ssthresh = cg->window; + + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic congestion lost win:%uz ss:%z if:%uz", + cg->window, cg->ssthresh, cg->in_flight); + + return NGX_OK; +} diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c Sat Nov 19 00:31:55 2022 +0800 +++ b/src/event/quic/ngx_event_quic.c Tue Dec 06 19:27:11 2022 +0800 @@ -264,15 +264,6 @@ ngx_queue_init(&qc->free_frames); - qc->avg_rtt = NGX_QUIC_INITIAL_RTT; - qc->rttvar = NGX_QUIC_INITIAL_RTT / 2; - qc->min_rtt = NGX_TIMER_INFINITE; - qc->first_rtt = NGX_TIMER_INFINITE; - - /* - * qc->latest_rtt = 0 - */ - qc->pto.log = c->log; qc->pto.data = c; qc->pto.handler = ngx_quic_pto_handler; @@ -311,12 +302,6 @@ qc->streams.client_max_streams_uni = qc->tp.initial_max_streams_uni; qc->streams.client_max_streams_bidi = qc->tp.initial_max_streams_bidi; - qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, - ngx_max(2 * qc->tp.max_udp_payload_size, - 14720)); - qc->congestion.ssthresh = (size_t) -1; - qc->congestion.recovery_start = ngx_current_msec; - if (pkt->validated && pkt->retried) { qc->tp.retry_scid.len = 
pkt->dcid.len; qc->tp.retry_scid.data = ngx_pstrdup(c->pool, &pkt->dcid); diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c Sat Nov 19 00:31:55 2022 +0800 +++ b/src/event/quic/ngx_event_quic_ack.c Tue Dec 06 19:27:11 2022 +0800 @@ -29,7 +29,7 @@ } ngx_quic_ack_stat_t; -static ngx_inline ngx_msec_t ngx_quic_lost_threshold(ngx_quic_connection_t *qc); +static ngx_inline ngx_msec_t ngx_quic_lost_threshold(ngx_quic_path_t *path); static void ngx_quic_rtt_sample(ngx_connection_t *c, ngx_quic_ack_frame_t *ack, enum ssl_encryption_level_t level, ngx_msec_t send_time); static ngx_int_t ngx_quic_handle_ack_frame_range(ngx_connection_t *c, @@ -48,11 +48,11 @@ /* RFC 9002, 6.1.2. Time Threshold: kTimeThreshold, kGranularity */ static ngx_inline ngx_msec_t -ngx_quic_lost_threshold(ngx_quic_connection_t *qc) +ngx_quic_lost_threshold(ngx_quic_path_t *path) { ngx_msec_t thr; - thr = ngx_max(qc->latest_rtt, qc->avg_rtt); + thr = ngx_max(path->latest_rtt, path->avg_rtt); thr += thr >> 3; return ngx_max(thr, NGX_QUIC_TIME_GRANULARITY); @@ -179,21 +179,23 @@ enum ssl_encryption_level_t level, ngx_msec_t send_time) { ngx_msec_t latest_rtt, ack_delay, adjusted_rtt, rttvar_sample; + ngx_quic_path_t *path; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; latest_rtt = ngx_current_msec - send_time; - qc->latest_rtt = latest_rtt; + path->latest_rtt = latest_rtt; - if (qc->min_rtt == NGX_TIMER_INFINITE) { - qc->min_rtt = latest_rtt; - qc->avg_rtt = latest_rtt; - qc->rttvar = latest_rtt / 2; - qc->first_rtt = ngx_current_msec; + if (path->min_rtt == NGX_TIMER_INFINITE) { + path->min_rtt = latest_rtt; + path->avg_rtt = latest_rtt; + path->rttvar = latest_rtt / 2; + path->first_rtt = ngx_current_msec; } else { - qc->min_rtt = ngx_min(qc->min_rtt, latest_rtt); + path->min_rtt = ngx_min(path->min_rtt, latest_rtt); ack_delay = ack->delay * (1 << qc->ctp.ack_delay_exponent) / 1000; @@ -203,18 +205,18 @@ 
adjusted_rtt = latest_rtt; - if (qc->min_rtt + ack_delay < latest_rtt) { + if (path->min_rtt + ack_delay < latest_rtt) { adjusted_rtt -= ack_delay; } - qc->avg_rtt += (adjusted_rtt >> 3) - (qc->avg_rtt >> 3); - rttvar_sample = ngx_abs((ngx_msec_int_t) (qc->avg_rtt - adjusted_rtt)); - qc->rttvar += (rttvar_sample >> 2) - (qc->rttvar >> 2); + path->avg_rtt += (adjusted_rtt >> 3) - (path->avg_rtt >> 3); + rttvar_sample = ngx_abs((ngx_msec_int_t) (path->avg_rtt - adjusted_rtt)); + path->rttvar += (rttvar_sample >> 2) - (path->rttvar >> 2); } ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic rtt sample latest:%M min:%M avg:%M var:%M", - latest_rtt, qc->min_rtt, qc->avg_rtt, qc->rttvar); + latest_rtt, path->min_rtt, path->avg_rtt, path->rttvar); } @@ -307,7 +309,6 @@ ngx_quic_congestion_ack(ngx_connection_t *c, ngx_quic_frame_t *f) { ngx_uint_t blocked; - ngx_msec_t timer; ngx_quic_congestion_t *cg; ngx_quic_connection_t *qc; @@ -316,46 +317,11 @@ } qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; blocked = (cg->in_flight >= cg->window) ? 
1 : 0; - cg->in_flight -= f->plen; - - timer = f->last - cg->recovery_start; - - if ((ngx_msec_int_t) timer <= 0) { - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic congestion ack recovery win:%uz ss:%z if:%uz", - cg->window, cg->ssthresh, cg->in_flight); - - goto done; - } - - if (cg->window < cg->ssthresh) { - cg->window += f->plen; - - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic congestion slow start win:%uz ss:%z if:%uz", - cg->window, cg->ssthresh, cg->in_flight); - - } else { - cg->window += qc->tp.max_udp_payload_size * f->plen / cg->window; - - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic congestion avoidance win:%uz ss:%z if:%uz", - cg->window, cg->ssthresh, cg->in_flight); - } - - /* prevent recovery_start from wrapping */ - - timer = cg->recovery_start - ngx_current_msec + qc->tp.max_idle_timeout * 2; - - if ((ngx_msec_int_t) timer < 0) { - cg->recovery_start = ngx_current_msec - qc->tp.max_idle_timeout * 2; - } - -done: + cg->cc_ops->ack(c, f); if (blocked && cg->in_flight < cg->window) { ngx_post_event(&qc->push, &ngx_posted_events); @@ -433,7 +399,7 @@ qc = ngx_quic_get_connection(c); now = ngx_current_msec; - thr = ngx_quic_lost_threshold(qc); + thr = ngx_quic_lost_threshold(qc->path); /* send time of lost packets across all send contexts */ oldest = NGX_TIMER_INFINITE; @@ -470,7 +436,7 @@ break; } - if (start->last > qc->first_rtt) { + if (start->last > qc->path->first_rtt) { if (oldest == NGX_TIMER_INFINITE || start->last < oldest) { oldest = start->last; @@ -518,8 +484,8 @@ qc = ngx_quic_get_connection(c); - duration = qc->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); + duration = qc->path->avg_rtt; + duration += ngx_max(4 * qc->path->rttvar, NGX_QUIC_TIME_GRANULARITY); duration += qc->ctp.max_ack_delay; duration *= NGX_QUIC_PERSISTENT_CONGESTION_THR; @@ -534,7 +500,7 @@ ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; 
cg->recovery_start = ngx_current_msec; cg->window = qc->tp.max_udp_payload_size * 2; @@ -646,7 +612,6 @@ ngx_quic_congestion_lost(ngx_connection_t *c, ngx_quic_frame_t *f) { ngx_uint_t blocked; - ngx_msec_t timer; ngx_quic_congestion_t *cg; ngx_quic_connection_t *qc; @@ -655,37 +620,11 @@ } qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; blocked = (cg->in_flight >= cg->window) ? 1 : 0; - cg->in_flight -= f->plen; - f->plen = 0; - - timer = f->last - cg->recovery_start; - - if ((ngx_msec_int_t) timer <= 0) { - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic congestion lost recovery win:%uz ss:%z if:%uz", - cg->window, cg->ssthresh, cg->in_flight); - - goto done; - } - - cg->recovery_start = ngx_current_msec; - cg->window /= 2; - - if (cg->window < qc->tp.max_udp_payload_size * 2) { - cg->window = qc->tp.max_udp_payload_size * 2; - } - - cg->ssthresh = cg->window; - - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic congestion lost win:%uz ss:%z if:%uz", - cg->window, cg->ssthresh, cg->in_flight); - -done: + cg->cc_ops->lost(c, f); if (blocked && cg->in_flight < cg->window) { ngx_post_event(&qc->push, &ngx_posted_events); @@ -720,7 +659,7 @@ if (ctx->largest_ack != NGX_QUIC_UNSET_PN) { q = ngx_queue_head(&ctx->sent); f = ngx_queue_data(q, ngx_quic_frame_t, queue); - w = (ngx_msec_int_t) (f->last + ngx_quic_lost_threshold(qc) - now); + w = (ngx_msec_int_t) (f->last + ngx_quic_lost_threshold(qc->path) - now); if (f->pnum <= ctx->largest_ack) { if (w < 0 || ctx->largest_ack - f->pnum >= NGX_QUIC_PKT_THR) { @@ -781,12 +720,12 @@ qc = ngx_quic_get_connection(c); /* RFC 9002, Appendix A.8. 
Setting the Loss Detection Timer */ - duration = qc->avg_rtt; + duration = qc->path->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); + duration += ngx_max(4 * qc->path->rttvar, NGX_QUIC_TIME_GRANULARITY); duration <<= qc->pto_count; - if (qc->congestion.in_flight == 0) { /* no in-flight packets */ + if (qc->path->congestion.in_flight == 0) { /* no in-flight packets */ return duration; } diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h Sat Nov 19 00:31:55 2022 +0800 +++ b/src/event/quic/ngx_event_quic_connection.h Tue Dec 06 19:27:11 2022 +0800 @@ -18,6 +18,7 @@ /* #define NGX_QUIC_DEBUG_CRYPTO */ typedef struct ngx_quic_connection_s ngx_quic_connection_t; +typedef struct ngx_quic_congestion_s ngx_quic_congestion_t; typedef struct ngx_quic_server_id_s ngx_quic_server_id_t; typedef struct ngx_quic_client_id_s ngx_quic_client_id_t; typedef struct ngx_quic_send_ctx_s ngx_quic_send_ctx_t; @@ -63,6 +64,12 @@ #define ngx_quic_get_socket(c) ((ngx_quic_socket_t *)((c)->udp)) +typedef void (*ngx_quic_congestion_init_pt)(ngx_connection_t *c, + ngx_quic_congestion_t *cg); +typedef ngx_int_t (*ngx_quic_congestion_event_pt)(ngx_connection_t *c, + ngx_quic_frame_t *f); + + struct ngx_quic_client_id_s { ngx_queue_t queue; uint64_t seqnum; @@ -80,6 +87,23 @@ }; +typedef struct { + ngx_str_t name; + ngx_quic_congestion_init_pt init; + ngx_quic_congestion_event_pt ack; + ngx_quic_congestion_event_pt lost; +} ngx_quic_congestion_ops_t; + + +struct ngx_quic_congestion_s { + size_t in_flight; + size_t window; + size_t ssthresh; + ngx_msec_t recovery_start; + ngx_quic_congestion_ops_t *cc_ops; +}; + + struct ngx_quic_path_s { ngx_queue_t queue; struct sockaddr *sockaddr; @@ -96,6 +120,15 @@ uint64_t seqnum; ngx_str_t addr_text; u_char text[NGX_SOCKADDR_STRLEN]; + + ngx_msec_t first_rtt; + ngx_msec_t latest_rtt; + ngx_msec_t avg_rtt; + ngx_msec_t min_rtt; + ngx_msec_t rttvar; + + 
ngx_quic_congestion_t congestion; + unsigned validated:1; unsigned validating:1; unsigned limited:1; @@ -143,14 +176,6 @@ } ngx_quic_streams_t; -typedef struct { - size_t in_flight; - size_t window; - size_t ssthresh; - ngx_msec_t recovery_start; -} ngx_quic_congestion_t; - - /* * RFC 9000, 12.3. Packet Numbers * @@ -218,12 +243,6 @@ ngx_event_t path_validation; ngx_msec_t last_cc; - ngx_msec_t first_rtt; - ngx_msec_t latest_rtt; - ngx_msec_t avg_rtt; - ngx_msec_t min_rtt; - ngx_msec_t rttvar; - ngx_uint_t pto_count; ngx_queue_t free_frames; @@ -237,7 +256,6 @@ #endif ngx_quic_streams_t streams; - ngx_quic_congestion_t congestion; off_t received; @@ -273,4 +291,8 @@ #define ngx_quic_connstate_dbg(c) #endif + +extern ngx_quic_congestion_ops_t ngx_quic_reno; + + #endif /* _NGX_EVENT_QUIC_CONNECTION_H_INCLUDED_ */ diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c Sat Nov 19 00:31:55 2022 +0800 +++ b/src/event/quic/ngx_event_quic_migration.c Tue Dec 06 19:27:11 2022 +0800 @@ -135,17 +135,17 @@ { /* address did not change */ rst = 0; + + ngx_memcpy(&path->congestion, &prev->congestion, + sizeof(ngx_quic_congestion_t)); } } if (rst) { - ngx_memzero(&qc->congestion, sizeof(ngx_quic_congestion_t)); + ngx_memzero(&path->congestion, sizeof(ngx_quic_congestion_t)); - qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, - ngx_max(2 * qc->tp.max_udp_payload_size, - 14720)); - qc->congestion.ssthresh = (size_t) -1; - qc->congestion.recovery_start = ngx_current_msec; + path->congestion.cc_ops = &ngx_quic_reno; + path->congestion.cc_ops->init(c, &path->congestion); } /* @@ -217,6 +217,18 @@ path->addr_text.len = ngx_sock_ntop(sockaddr, socklen, path->text, NGX_SOCKADDR_STRLEN, 1); + path->avg_rtt = NGX_QUIC_INITIAL_RTT; + path->rttvar = NGX_QUIC_INITIAL_RTT / 2; + path->min_rtt = NGX_TIMER_INFINITE; + path->first_rtt = NGX_TIMER_INFINITE; + + /* + * path->latest_rtt = 0 + */ + + 
path->congestion.cc_ops = &ngx_quic_reno; + path->congestion.cc_ops->init(c, &path->congestion); + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic path seq:%uL created addr:%V", path->seqnum, &path->addr_text); diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c Sat Nov 19 00:31:55 2022 +0800 +++ b/src/event/quic/ngx_event_quic_output.c Tue Dec 06 19:27:11 2022 +0800 @@ -87,7 +87,7 @@ c->log->action = "sending frames"; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; in_flight = cg->in_flight; @@ -135,7 +135,7 @@ static u_char dst[NGX_QUIC_MAX_UDP_PAYLOAD_SIZE]; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; path = qc->path; while (cg->in_flight < cg->window) { @@ -223,7 +223,7 @@ qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; while (!ngx_queue_empty(&ctx->sending)) { @@ -336,7 +336,7 @@ static u_char dst[NGX_QUIC_MAX_UDP_SEGMENT_BUF]; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; path = qc->path; ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); /* ----------------------------QUIC-TEST---------------------------------*/ /* h3_absolute_redirect.t */ $ TEST_NGINX_LEAVE=1 TEST_NGINX_BINARY=/data/nginx-quic/objs/nginx prove h3_absolute_redirect.t -v h3_absolute_redirect.t .. 
1..25 ok 1 - directory ok 2 - directory alias ok 3 - directory escaped ok 4 - directory escaped args ok 5 - auto ok 6 - auto args ok 7 - auto escaped ok 8 - auto escaped args ok 9 - return ok 10 - server_name_in_redirect on ok 11 - server_name_in_redirect off - using host ok 12 - invalid host - using local sockaddr ok 13 - port_in_redirect off ok 14 - off directory ok 15 - off directory alias ok 16 - off directory escaped ok 17 - off directory escaped args ok 18 - off auto ok 19 - off auto args ok 20 - auto escaped ok 21 - auto escaped args ok 22 - off return ok 23 - auto escaped strict ok 24 - no alerts ok 25 - no sanitizer errors ok All tests successful. Files=1, Tests=25, 1 wallclock secs ( 0.01 usr 0.00 sys + 0.37 cusr 0.03 csys = 0.41 CPU) Result: PASS /* h3_headers.t */ $ TEST_NGINX_LEAVE=1 TEST_NGINX_BINARY=/data/nginx-quic/objs/nginx prove h3_headers.t -v h3_headers.t .. 1..68 ok 1 - indexed ok 2 - indexed dynamic ok 3 - indexed dynamic huffman ok 4 - indexed dynamic previous ok 5 - indexed reference ok 6 - indexed reference dynamic ok 7 - indexed reference huffman ok 8 - post-base index ok 9 - reference ok 10 - reference huffman ok 11 - reference never indexed ok 12 - reference dynamic ok 13 - base-base ref ok 14 - post-base ref huffman ok 15 - post-base ref never indexed ok 16 - literal ok 17 - literal huffman ok 18 - literal never indexed ok 19 - rare chars ok 20 - rare chars - no huffman encoding ok 21 - well known chars ok 22 - well known chars - huffman encoding ok 23 - all chars ok 24 - all chars - huffman encoding ok 25 - capacity insert ok 26 - capacity replace ok 27 - capacity eviction ok 28 - capacity invalid ok 29 - multiple request header fields - cookie ok 30 - multiple request header fields - cookie 2 ok 31 - multiple request header fields - semi-colon ok 32 - multiple request header fields proxied - semi-colon ok 33 - multiple request header fields proxied - dublicate cookie ok 34 - multiple request header fields proxied - cookie 1 ok 35 - 
multiple request header fields proxied - cookie 2 ok 36 - multiple response header fields - cookie ok 37 - multiple response header fields - cookie 2 ok 38 - multiple response header proxied - cookie ok 39 - multiple response header proxied - cookie 2 ok 40 - multiple response header proxied - upstream cookie ok 41 - multiple response header proxied - upstream cookie 2 ok 42 - field name size less ok 43 - field name size equal ok 44 - field name size greater ok 45 - field value size less ok 46 - field value size equal ok 47 - field value size greater ok 48 - header size less ok 49 - header size equal ok 50 - header size greater ok 51 - header size indexed ok 52 - header size indexed greater not ok 53 # TODO & SKIP not yet not ok 54 # TODO & SKIP not yet not ok 55 # TODO & SKIP not yet ok 56 - after invalid header name not ok 57 # TODO & SKIP not yet not ok 58 # TODO & SKIP not yet not ok 59 # TODO & SKIP not yet not ok 60 # TODO & SKIP not yet ok 61 - underscore in header name - underscores_in_headers ok 62 - underscore in header name - ignore_invalid_headers ok 63 - incomplete headers ok 64 - empty authority ok 65 - invalid path ok 66 - invalid path control ok 67 - no alerts ok 68 - no sanitizer errors ok All tests successful. Files=1, Tests=68, 1 wallclock secs ( 0.03 usr 0.00 sys + 0.48 cusr 0.04 csys = 0.55 CPU) Result: PASS -- Yu Zhu -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Tue Dec 6 15:26:33 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 06 Dec 2022 15:26:33 +0000 Subject: [nginx] Fixed alignment of ngx_memmove()/ngx_movemem() macro definitions. Message-ID: details: https://hg.nginx.org/nginx/rev/351d7f4e326f branches: changeset: 8108:351d7f4e326f user: Maxim Dounin date: Wed Nov 30 18:01:43 2022 +0300 description: Fixed alignment of ngx_memmove()/ngx_movemem() macro definitions. 
diffstat: src/core/ngx_string.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 0b360747c74e -r 351d7f4e326f src/core/ngx_string.h --- a/src/core/ngx_string.h Thu Nov 24 23:08:30 2022 +0400 +++ b/src/core/ngx_string.h Wed Nov 30 18:01:43 2022 +0300 @@ -140,8 +140,8 @@ ngx_copy(u_char *dst, u_char *src, size_ #endif -#define ngx_memmove(dst, src, n) (void) memmove(dst, src, n) -#define ngx_movemem(dst, src, n) (((u_char *) memmove(dst, src, n)) + (n)) +#define ngx_memmove(dst, src, n) (void) memmove(dst, src, n) +#define ngx_movemem(dst, src, n) (((u_char *) memmove(dst, src, n)) + (n)) /* msvc and icc7 compile memcmp() to the inline loop */ From arut at nginx.com Tue Dec 6 15:26:36 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 06 Dec 2022 15:26:36 +0000 Subject: [nginx] Removed casts from ngx_memcmp() macro. Message-ID: details: https://hg.nginx.org/nginx/rev/2ffefe2f892e branches: changeset: 8109:2ffefe2f892e user: Maxim Dounin date: Wed Nov 30 18:01:53 2022 +0300 description: Removed casts from ngx_memcmp() macro. Casts are believed to be not needed, since memcmp() has "const void *" arguments since introduction of the "void" type in C89. And on pre-C89 platforms nginx is unlikely to compile without warnings anyway, as there are no casts in memcpy() and memmove() calls. These casts were added in 1648:89a47f19b9ec without any details on why they were added, and Igor does not remember details either. The most plausible explanation is that they were copied from ngx_strcmp() and were not really needed even at that time. Prodded by Alejandro Colomar. 
diffstat: src/core/ngx_string.h | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 351d7f4e326f -r 2ffefe2f892e src/core/ngx_string.h --- a/src/core/ngx_string.h Wed Nov 30 18:01:43 2022 +0300 +++ b/src/core/ngx_string.h Wed Nov 30 18:01:53 2022 +0300 @@ -145,7 +145,7 @@ ngx_copy(u_char *dst, u_char *src, size_ /* msvc and icc7 compile memcmp() to the inline loop */ -#define ngx_memcmp(s1, s2, n) memcmp((const char *) s1, (const char *) s2, n) +#define ngx_memcmp(s1, s2, n) memcmp(s1, s2, n) u_char *ngx_cpystrn(u_char *dst, u_char *src, size_t n); From arut at nginx.com Tue Dec 6 15:26:42 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 06 Dec 2022 15:26:42 +0000 Subject: [nginx] Win32: event flags handling edge cases in ngx_wsarecv(). Message-ID: details: https://hg.nginx.org/nginx/rev/56819a9491fe branches: changeset: 8111:56819a9491fe user: Maxim Dounin date: Thu Dec 01 04:22:36 2022 +0300 description: Win32: event flags handling edge cases in ngx_wsarecv(). Fixed event flags handling edge cases in ngx_wsarecv() and ngx_wsarecv_chain(), notably to always reset rev->ready in case of errors (which wasn't the case after ngx_socket_nread() errors), and after EOF (rev->ready was not cleared if due to a misconfiguration a zero-sized buffer was used for reading). 
diffstat: src/os/win32/ngx_wsarecv.c | 2 ++ src/os/win32/ngx_wsarecv_chain.c | 2 ++ 2 files changed, 4 insertions(+), 0 deletions(-) diffs (38 lines): diff -r 06c7d84cafdb -r 56819a9491fe src/os/win32/ngx_wsarecv.c --- a/src/os/win32/ngx_wsarecv.c Thu Dec 01 04:22:31 2022 +0300 +++ b/src/os/win32/ngx_wsarecv.c Thu Dec 01 04:22:36 2022 +0300 @@ -78,6 +78,7 @@ ngx_wsarecv(ngx_connection_t *c, u_char ngx_socket_nread_n " failed"); if (n == NGX_ERROR) { + rev->ready = 0; rev->error = 1; } @@ -95,6 +96,7 @@ ngx_wsarecv(ngx_connection_t *c, u_char } if (bytes == 0) { + rev->ready = 0; rev->eof = 1; } diff -r 06c7d84cafdb -r 56819a9491fe src/os/win32/ngx_wsarecv_chain.c --- a/src/os/win32/ngx_wsarecv_chain.c Thu Dec 01 04:22:31 2022 +0300 +++ b/src/os/win32/ngx_wsarecv_chain.c Thu Dec 01 04:22:36 2022 +0300 @@ -121,6 +121,7 @@ ngx_wsarecv_chain(ngx_connection_t *c, n } else if (bytes == size) { if (ngx_socket_nread(c->fd, &rev->available) == -1) { + rev->ready = 0; rev->error = 1; ngx_connection_error(c, ngx_socket_errno, ngx_socket_nread_n " failed"); @@ -138,6 +139,7 @@ ngx_wsarecv_chain(ngx_connection_t *c, n } if (bytes == 0) { + rev->ready = 0; rev->eof = 1; } From arut at nginx.com Tue Dec 6 15:26:39 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 06 Dec 2022 15:26:39 +0000 Subject: [nginx] SSL: fixed ngx_ssl_recv() to reset c->read->ready after errors. Message-ID: details: https://hg.nginx.org/nginx/rev/06c7d84cafdb branches: changeset: 8110:06c7d84cafdb user: Maxim Dounin date: Thu Dec 01 04:22:31 2022 +0300 description: SSL: fixed ngx_ssl_recv() to reset c->read->ready after errors. With this change, behaviour of ngx_ssl_recv() now matches ngx_unix_recv(), which used to always reset c->read->ready to 0 when returning errors. This fixes an infinite loop in unbuffered SSL proxying if writing to the client is blocked and an SSL error happens (ticket #2418). 
With this change, the fix for a similar issue in the stream module (6868:ee3645078759), which used a different approach of explicitly testing c->read->error instead, is no longer needed and was reverted. diffstat: src/event/ngx_event_openssl.c | 5 +++++ src/stream/ngx_stream_proxy_module.c | 5 ++--- 2 files changed, 7 insertions(+), 3 deletions(-) diffs (58 lines): diff -r 2ffefe2f892e -r 06c7d84cafdb src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Wed Nov 30 18:01:53 2022 +0300 +++ b/src/event/ngx_event_openssl.c Thu Dec 01 04:22:31 2022 +0300 @@ -2204,6 +2204,7 @@ ngx_ssl_recv(ngx_connection_t *c, u_char #endif if (c->ssl->last == NGX_ERROR) { + c->read->ready = 0; c->read->error = 1; return NGX_ERROR; } @@ -2270,6 +2271,7 @@ ngx_ssl_recv(ngx_connection_t *c, u_char #if (NGX_HAVE_FIONREAD) if (ngx_socket_nread(c->fd, &c->read->available) == -1) { + c->read->ready = 0; c->read->error = 1; ngx_connection_error(c, ngx_socket_errno, ngx_socket_nread_n " failed"); @@ -2306,6 +2308,7 @@ ngx_ssl_recv(ngx_connection_t *c, u_char return 0; case NGX_ERROR: + c->read->ready = 0; c->read->error = 1; /* fall through */ @@ -2326,6 +2329,7 @@ ngx_ssl_recv_early(ngx_connection_t *c, size_t readbytes; if (c->ssl->last == NGX_ERROR) { + c->read->ready = 0; c->read->error = 1; return NGX_ERROR; } @@ -2425,6 +2429,7 @@ ngx_ssl_recv_early(ngx_connection_t *c, return 0; case NGX_ERROR: + c->read->ready = 0; c->read->error = 1; /* fall through */ diff -r 2ffefe2f892e -r 06c7d84cafdb src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c Wed Nov 30 18:01:53 2022 +0300 +++ b/src/stream/ngx_stream_proxy_module.c Thu Dec 01 04:22:31 2022 +0300 @@ -1675,9 +1675,8 @@ ngx_stream_proxy_process(ngx_stream_sess size = b->end - b->last; - if (size && src->read->ready && !src->read->delayed - && !src->read->error) - { + if (size && src->read->ready && !src->read->delayed) { + if (limit_rate) { limit = (off_t) limit_rate * (ngx_time() - u->start_sec + 1) - 
*received; From v.zhestikov at f5.com Wed Dec 7 02:50:34 2022 From: v.zhestikov at f5.com (Vadim Zhestikov) Date: Wed, 07 Dec 2022 02:50:34 +0000 Subject: [njs] Improved performance of conditional jumps. Message-ID: details: https://hg.nginx.org/njs/rev/ef1fd66c094e branches: changeset: 2008:ef1fd66c094e user: Vadim Zhestikov date: Tue Dec 06 18:47:53 2022 -0800 description: Improved performance of conditional jumps. diffstat: src/njs_vmcode.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (23 lines): diff -r c51adee54dfe -r ef1fd66c094e src/njs_vmcode.c --- a/src/njs_vmcode.c Fri Nov 18 14:10:25 2022 -0800 +++ b/src/njs_vmcode.c Tue Dec 06 18:47:53 2022 -0800 @@ -1355,8 +1355,8 @@ NEXT_LBL; CASE (NJS_VMCODE_IF_TRUE_JUMP): njs_vmcode_debug_opcode(); + njs_vmcode_operand(vm, vmcode->operand2, value1); value2 = (njs_value_t *) vmcode->operand1; - njs_vmcode_operand(vm, vmcode->operand2, value1); ret = njs_is_true(value1); @@ -1368,8 +1368,8 @@ NEXT_LBL; CASE (NJS_VMCODE_IF_FALSE_JUMP): njs_vmcode_debug_opcode(); + njs_vmcode_operand(vm, vmcode->operand2, value1); value2 = (njs_value_t *) vmcode->operand1; - njs_vmcode_operand(vm, vmcode->operand2, value1); ret = njs_is_true(value1); From sangdeuk.kwon at quantil.com Wed Dec 7 05:26:16 2022 From: sangdeuk.kwon at quantil.com (Sangdeuk Kwon) Date: Wed, 7 Dec 2022 14:26:16 +0900 Subject: [PATCH] proxy_cache_max_range_offset could affect the background updating Message-ID: # HG changeset patch # User Sangdeuk Kwon # Date 1670390583 -32400 # Wed Dec 07 14:23:03 2022 +0900 # Node ID a1069fbf10ffd806b7c8d6deb3f6546edc7b0427 # Parent 0b360747c74e3fa7e439e0684a8cf1da2d14d8f6 proxy_cache_max_range_offset could affect the background updating proxy_cache_max_range_offset doesn't care about the upstream of background updating. So, nginx drops the new cache file after background updating. This behavior is strange because background updating is just to fetch new content after serving a stale cache, not to serve it. 
I think the background updating should not be affected by proxy_cache_max_range_offset. Related directives: proxy_cache_max_range_offset 10; proxy_cache_use_stale updating; proxy_cache_background_update on; diff -r 0b360747c74e -r a1069fbf10ff src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Thu Nov 24 23:08:30 2022 +0400 +++ b/src/http/ngx_http_upstream.c Wed Dec 07 14:23:03 2022 +0900 @@ -986,7 +986,9 @@ return rc; } - if (ngx_http_upstream_cache_check_range(r, u) == NGX_DECLINED) { + if (!r->background + && ngx_http_upstream_cache_check_range(r, u) == NGX_DECLINED) + { u->cacheable = 0; } -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Wed Dec 7 15:01:52 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 7 Dec 2022 19:01:52 +0400 Subject: QUIC: reworked congestion control mechanism. In-Reply-To: References: Message-ID: <20221207150152.icsf4uu7lv57zeag@N00W24XTQX> Hi, Thanks for the patch. On Tue, Dec 06, 2022 at 02:35:37PM +0000, 朱宇 wrote: > Hi, > > # HG changeset patch > # User Yu Zhu > # Date 1670326031 -28800 > # Tue Dec 06 19:27:11 2022 +0800 > # Branch quic > # Node ID 9a47ff1223bb32c8ddb146d731b395af89769a97 > # Parent 1a320805265db14904ca9deaae8330f4979619ce > QUIC: reworked congestion control mechanism. > > 1. move rtt measurement and congestion control to struct ngx_quic_path_t > because RTT and congestion control are properities of the path. I think this part should be moved out to a separate patch. > 2. introduced struct "ngx_quic_congestion_ops_t" to wrap callback functions > of congestion control and extract the reno algorithm from ngx_event_quic_ack.c. The biggest question about this part is how extensible this approach is. We are planning to implement more congestion control algorithms in the future and need a framework that would allow us to do that. Even CUBIC needs more data fields than we have now, and BBR will probably need much more than that. 
Not sure how we'll add those data fields considering the proposed modular design. Also, we need to make sure the API is enough for future algorithms. I suggest that we finish the first part which moves congestion control to the path object. Then, until we have at least one other congestion control algorithm supported, it's hard to come up with a good API for it. I think we can postpone the second part until then. Also, I think CUBIC can be hardcoded into Reno without modular redesign of the code. > No functional changes. [..] > diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/congestion/ngx_quic_reno.c > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/src/event/quic/congestion/ngx_quic_reno.c Tue Dec 06 19:27:11 2022 +0800 > @@ -0,0 +1,133 @@ > + > +/* > + * Copyright (C) Nginx, Inc. > + */ > + > + > +#include > +#include > +#include > +#include > + > + > +static void ngx_quic_reno_on_init(ngx_connection_t *c, ngx_quic_congestion_t *cg); > +static ngx_int_t ngx_quic_reno_on_ack(ngx_connection_t *c, ngx_quic_frame_t *f); > +static ngx_int_t ngx_quic_reno_on_lost(ngx_connection_t *c, ngx_quic_frame_t *f); > + > + > +ngx_quic_congestion_ops_t ngx_quic_reno = { > + ngx_string("reno"), > + ngx_quic_reno_on_init, > + ngx_quic_reno_on_ack, > + ngx_quic_reno_on_lost > +}; > + > + > +static void > +ngx_quic_reno_on_init(ngx_connection_t *c, ngx_quic_congestion_t *cg) > +{ > + ngx_quic_connection_t *qc; > + > + qc = ngx_quic_get_connection(c); > + > + cg->window = ngx_min(10 * qc->tp.max_udp_payload_size, > + ngx_max(2 * qc->tp.max_udp_payload_size, > + 14720)); > + cg->ssthresh = (size_t) -1; > + cg->recovery_start = ngx_current_msec; > +} > + > + > +static ngx_int_t > +ngx_quic_reno_on_ack(ngx_connection_t *c, ngx_quic_frame_t *f) > +{ > + ngx_msec_t timer; > + ngx_quic_path_t *path; > + ngx_quic_connection_t *qc; > + ngx_quic_congestion_t *cg; > + > + qc = ngx_quic_get_connection(c); > + path = qc->path; What if the packet was sent on a different path? 
> + > + cg = &path->congestion; > + > + cg->in_flight -= f->plen; > + > + timer = f->last - cg->recovery_start; > + > + if ((ngx_msec_int_t) timer <= 0) { > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion ack recovery win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + return NGX_DONE; > + } > + > + if (cg->window < cg->ssthresh) { > + cg->window += f->plen; > + > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion slow start win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + } else { > + cg->window += qc->tp.max_udp_payload_size * f->plen / cg->window; > + > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion avoidance win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + } > + > + /* prevent recovery_start from wrapping */ > + > + timer = cg->recovery_start - ngx_current_msec + qc->tp.max_idle_timeout * 2; > + > + if ((ngx_msec_int_t) timer < 0) { > + cg->recovery_start = ngx_current_msec - qc->tp.max_idle_timeout * 2; > + } > + > + return NGX_OK; > +} > + > + > +static ngx_int_t > +ngx_quic_reno_on_lost(ngx_connection_t *c, ngx_quic_frame_t *f) > +{ > + ngx_msec_t timer; > + ngx_quic_path_t *path; > + ngx_quic_connection_t *qc; > + ngx_quic_congestion_t *cg; > + > + qc = ngx_quic_get_connection(c); > + path = qc->path; Same here. 
> + > + cg = &path->congestion; > + > + cg->in_flight -= f->plen; > + f->plen = 0; > + > + timer = f->last - cg->recovery_start; > + > + if ((ngx_msec_int_t) timer <= 0) { > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion lost recovery win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + return NGX_DONE; > + } > + > + cg->recovery_start = ngx_current_msec; > + cg->window /= 2; > + > + if (cg->window < qc->tp.max_udp_payload_size * 2) { > + cg->window = qc->tp.max_udp_payload_size * 2; > + } > + > + cg->ssthresh = cg->window; > + > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion lost win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + return NGX_OK; > +} [..] -- Roman Arutyunyan From xeioex at nginx.com Thu Dec 8 02:13:52 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 08 Dec 2022 02:13:52 +0000 Subject: [njs] Allowing to declare exotic slot for the external objects. Message-ID: details: https://hg.nginx.org/njs/rev/a2a24c4b2541 branches: changeset: 2009:a2a24c4b2541 user: Dmitry Volyntsev date: Wed Dec 07 18:11:54 2022 -0800 description: Allowing to declare exotic slot for the external objects. diffstat: src/njs.h | 8 ++++ src/njs_extern.c | 69 +++++++++++++++++++++++++++++++----------- src/test/njs_externals_test.c | 33 ++++++++++++++++++++ src/test/njs_unit_test.c | 3 + 4 files changed, 94 insertions(+), 19 deletions(-) diffs (198 lines): diff -r ef1fd66c094e -r a2a24c4b2541 src/njs.h --- a/src/njs.h Tue Dec 06 18:47:53 2022 -0800 +++ b/src/njs.h Wed Dec 07 18:11:54 2022 -0800 @@ -132,9 +132,17 @@ typedef enum { typedef enum { + /* + * Extern property type. + */ NJS_EXTERN_PROPERTY = 0, NJS_EXTERN_METHOD = 1, NJS_EXTERN_OBJECT = 2, + NJS_EXTERN_SELF = 3, +#define NJS_EXTERN_TYPE_MASK 3 + /* + * Extern property flags. 
+ */ NJS_EXTERN_SYMBOL = 4, } njs_extern_flag_t; diff -r ef1fd66c094e -r a2a24c4b2541 src/njs_extern.c --- a/src/njs_extern.c Tue Dec 06 18:47:53 2022 -0800 +++ b/src/njs_extern.c Wed Dec 07 18:11:54 2022 -0800 @@ -17,15 +17,16 @@ static njs_int_t njs_external_add(njs_vm_t *vm, njs_arr_t *protos, const njs_external_t *external, njs_uint_t n) { - size_t size; - ssize_t length; - njs_int_t ret; - njs_lvlhsh_t *hash; - const u_char *start; - njs_function_t *function; - njs_object_prop_t *prop; - njs_lvlhsh_query_t lhq; - njs_exotic_slots_t *slot, *next; + size_t size; + ssize_t length; + njs_int_t ret; + njs_lvlhsh_t *hash; + const u_char *start; + njs_function_t *function; + njs_object_prop_t *prop; + njs_lvlhsh_query_t lhq; + njs_exotic_slots_t *slot, *next; + const njs_external_t *end; slot = njs_arr_add(protos); njs_memzero(slot, sizeof(njs_exotic_slots_t)); @@ -37,7 +38,22 @@ njs_external_add(njs_vm_t *vm, njs_arr_t lhq.proto = &njs_object_hash_proto; lhq.pool = vm->mem_pool; - while (n != 0) { + end = external + n; + + while (external < end) { + + if ((external->flags & NJS_EXTERN_TYPE_MASK) == NJS_EXTERN_SELF) { + slot->writable = external->u.object.writable; + slot->configurable = external->u.object.configurable; + slot->enumerable = external->u.object.enumerable; + slot->prop_handler = external->u.object.prop_handler; + slot->magic32 = external->u.object.magic32; + slot->keys = external->u.object.keys; + + external++; + continue; + } + prop = njs_object_prop_alloc(vm, &njs_string_empty, &njs_value_invalid, 1); if (njs_slow_path(prop == NULL)) { @@ -48,7 +64,7 @@ njs_external_add(njs_vm_t *vm, njs_arr_t prop->configurable = external->configurable; prop->enumerable = external->enumerable; - if (external->flags & 4) { + if (external->flags & NJS_EXTERN_SYMBOL) { njs_set_symbol(&prop->name, external->name.symbol, NULL); lhq.key_hash = external->name.symbol; @@ -66,7 +82,7 @@ njs_external_add(njs_vm_t *vm, njs_arr_t lhq.value = prop; - switch (external->flags & 3) 
{ + switch (external->flags & NJS_EXTERN_TYPE_MASK) { case NJS_EXTERN_METHOD: function = njs_mp_zalloc(vm->mem_pool, sizeof(njs_function_t)); if (njs_slow_path(function == NULL)) { @@ -128,12 +144,28 @@ njs_external_add(njs_vm_t *vm, njs_arr_t njs_prop_magic32(prop) = lhq.key_hash; njs_prop_handler(prop) = njs_external_prop_handler; - next->writable = external->u.object.writable; - next->configurable = external->u.object.configurable; - next->enumerable = external->u.object.enumerable; - next->prop_handler = external->u.object.prop_handler; - next->magic32 = external->u.object.magic32; - next->keys = external->u.object.keys; + if (external->u.object.prop_handler) { + if (next->prop_handler) { + njs_internal_error(vm, "overwritten self prop_handler"); + return NJS_ERROR; + } + + next->writable = external->u.object.writable; + next->configurable = external->u.object.configurable; + next->enumerable = external->u.object.enumerable; + + next->prop_handler = external->u.object.prop_handler; + next->magic32 = external->u.object.magic32; + } + + if (external->u.object.keys) { + if (next->keys) { + njs_internal_error(vm, "overwritten self keys"); + return NJS_ERROR; + } + + next->keys = external->u.object.keys; + } break; } @@ -144,7 +176,6 @@ njs_external_add(njs_vm_t *vm, njs_arr_t return NJS_ERROR; } - n--; external++; } diff -r ef1fd66c094e -r a2a24c4b2541 src/test/njs_externals_test.c --- a/src/test/njs_externals_test.c Tue Dec 06 18:47:53 2022 -0800 +++ b/src/test/njs_externals_test.c Wed Dec 07 18:11:54 2022 -0800 @@ -585,6 +585,28 @@ static njs_external_t njs_unit_test_r_h }; +static njs_external_t njs_unit_test_r_header_props2[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "Header2", + } + }, + + { + .flags = NJS_EXTERN_SELF, + .u.object = { + .enumerable = 1, + .prop_handler = njs_unit_test_r_header, + .keys = njs_unit_test_r_header_keys, + } + }, + +}; + + static 
njs_external_t njs_unit_test_r_external[] = { { @@ -645,6 +667,17 @@ static njs_external_t njs_unit_test_r_e }, { + .flags = NJS_EXTERN_OBJECT, + .name.string = njs_str("header2"), + .writable = 1, + .configurable = 1, + .u.object = { + .properties = njs_unit_test_r_header_props2, + .nproperties = njs_nitems(njs_unit_test_r_header_props2), + } + }, + + { .flags = NJS_EXTERN_PROPERTY, .name.string = njs_str("host"), .enumerable = 1, diff -r ef1fd66c094e -r a2a24c4b2541 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Tue Dec 06 18:47:53 2022 -0800 +++ b/src/test/njs_unit_test.c Wed Dec 07 18:11:54 2022 -0800 @@ -21712,6 +21712,9 @@ static njs_unit_test_t njs_externals_te { njs_str("njs.dump($r.header)"), njs_str("Header {01:'01|АБВ',02:'02|АБВ',03:'03|АБВ'}") }, + { njs_str("njs.dump($r.header2)"), + njs_str("Header2 {01:'01|АБВ',02:'02|АБВ',03:'03|АБВ'}") }, + { njs_str("var o = {b:$r.props.b}; o.b"), njs_str("42") }, From xeioex at nginx.com Thu Dec 8 02:13:54 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 08 Dec 2022 02:13:54 +0000 Subject: [njs] Added njs_value_function_set(). Message-ID: details: https://hg.nginx.org/njs/rev/f2ab76784741 branches: changeset: 2010:f2ab76784741 user: Dmitry Volyntsev date: Wed Dec 07 18:11:55 2022 -0800 description: Added njs_value_function_set(). 
diffstat: src/njs.h | 2 ++ src/njs_value.c | 7 +++++++ 2 files changed, 9 insertions(+), 0 deletions(-) diffs (29 lines): diff -r a2a24c4b2541 -r f2ab76784741 src/njs.h --- a/src/njs.h Wed Dec 07 18:11:54 2022 -0800 +++ b/src/njs.h Wed Dec 07 18:11:55 2022 -0800 @@ -457,6 +457,8 @@ NJS_EXPORT void njs_value_null_set(njs_v NJS_EXPORT void njs_value_invalid_set(njs_value_t *value); NJS_EXPORT void njs_value_boolean_set(njs_value_t *value, int yn); NJS_EXPORT void njs_value_number_set(njs_value_t *value, double num); +NJS_EXPORT void njs_value_function_set(njs_value_t *value, + njs_function_t *function); NJS_EXPORT uint8_t njs_value_bool(const njs_value_t *value); NJS_EXPORT double njs_value_number(const njs_value_t *value); diff -r a2a24c4b2541 -r f2ab76784741 src/njs_value.c --- a/src/njs_value.c Wed Dec 07 18:11:54 2022 -0800 +++ b/src/njs_value.c Wed Dec 07 18:11:55 2022 -0800 @@ -404,6 +404,13 @@ njs_value_number_set(njs_value_t *value, } +void +njs_value_function_set(njs_value_t *value, njs_function_t *function) +{ + njs_set_function(value, function); +} + + uint8_t njs_value_bool(const njs_value_t *value) { From xeioex at nginx.com Thu Dec 8 02:13:56 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 08 Dec 2022 02:13:56 +0000 Subject: [njs] Extended njs_vm_function_alloc(). Message-ID: details: https://hg.nginx.org/njs/rev/68b28e924908 branches: changeset: 2011:68b28e924908 user: Dmitry Volyntsev date: Wed Dec 07 18:11:56 2022 -0800 description: Extended njs_vm_function_alloc(). 
diffstat: external/njs_webcrypto_module.c | 2 +- nginx/ngx_http_js_module.c | 3 +- nginx/ngx_js_fetch.c | 4 +- src/njs.h | 2 +- src/njs_function.c | 8 ++++++- src/test/njs_externals_test.c | 44 ++++++++++++++++++++++++++++++++++++++++- src/test/njs_unit_test.c | 5 +++- 7 files changed, 60 insertions(+), 8 deletions(-) diffs (184 lines): diff -r f2ab76784741 -r 68b28e924908 external/njs_webcrypto_module.c --- a/external/njs_webcrypto_module.c Wed Dec 07 18:11:55 2022 -0800 +++ b/external/njs_webcrypto_module.c Wed Dec 07 18:11:56 2022 -0800 @@ -2779,7 +2779,7 @@ njs_webcrypto_result(njs_vm_t *vm, njs_v goto error; } - callback = njs_vm_function_alloc(vm, njs_promise_trampoline); + callback = njs_vm_function_alloc(vm, njs_promise_trampoline, 0, 0); if (callback == NULL) { goto error; } diff -r f2ab76784741 -r 68b28e924908 nginx/ngx_http_js_module.c --- a/nginx/ngx_http_js_module.c Wed Dec 07 18:11:55 2022 -0800 +++ b/nginx/ngx_http_js_module.c Wed Dec 07 18:11:56 2022 -0800 @@ -3191,7 +3191,8 @@ ngx_http_js_ext_subrequest(njs_vm_t *vm, } if (!detached && callback == NULL) { - callback = njs_vm_function_alloc(vm, ngx_http_js_promise_trampoline); + callback = njs_vm_function_alloc(vm, ngx_http_js_promise_trampoline, 0, + 0); if (callback == NULL) { goto memory_error; } diff -r f2ab76784741 -r 68b28e924908 nginx/ngx_js_fetch.c --- a/nginx/ngx_js_fetch.c Wed Dec 07 18:11:55 2022 -0800 +++ b/nginx/ngx_js_fetch.c Wed Dec 07 18:11:56 2022 -0800 @@ -636,7 +636,7 @@ ngx_js_http_alloc(njs_vm_t *vm, ngx_pool goto failed; } - callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline); + callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline, 0, 0); if (callback == NULL) { goto failed; } @@ -804,7 +804,7 @@ ngx_js_fetch_promissified_result(njs_vm_ goto error; } - callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline); + callback = njs_vm_function_alloc(vm, ngx_js_http_promise_trampoline, 0, 0); if (callback == NULL) { goto error; } diff -r 
f2ab76784741 -r 68b28e924908 src/njs.h --- a/src/njs.h Wed Dec 07 18:11:55 2022 -0800 +++ b/src/njs.h Wed Dec 07 18:11:56 2022 -0800 @@ -384,7 +384,7 @@ NJS_EXPORT njs_int_t njs_external_proper NJS_EXPORT uintptr_t njs_vm_meta(njs_vm_t *vm, njs_uint_t index); NJS_EXPORT njs_function_t *njs_vm_function_alloc(njs_vm_t *vm, - njs_function_native_t native); + njs_function_native_t native, njs_bool_t shared, njs_bool_t ctor); NJS_EXPORT void njs_disassembler(njs_vm_t *vm); diff -r f2ab76784741 -r 68b28e924908 src/njs_function.c --- a/src/njs_function.c Wed Dec 07 18:11:55 2022 -0800 +++ b/src/njs_function.c Wed Dec 07 18:11:56 2022 -0800 @@ -65,7 +65,8 @@ fail: njs_function_t * -njs_vm_function_alloc(njs_vm_t *vm, njs_function_native_t native) +njs_vm_function_alloc(njs_vm_t *vm, njs_function_native_t native, + njs_bool_t shared, njs_bool_t ctor) { njs_function_t *function; @@ -76,7 +77,12 @@ njs_vm_function_alloc(njs_vm_t *vm, njs_ } function->native = 1; + function->ctor = ctor; + function->object.shared = shared; function->u.native = native; + function->object.shared_hash = vm->shared->function_instance_hash; + function->object.__proto__ = &vm->prototypes[NJS_OBJ_TYPE_FUNCTION].object; + function->object.type = NJS_FUNCTION; return function; } diff -r f2ab76784741 -r 68b28e924908 src/test/njs_externals_test.c --- a/src/test/njs_externals_test.c Wed Dec 07 18:11:55 2022 -0800 +++ b/src/test/njs_externals_test.c Wed Dec 07 18:11:56 2022 -0800 @@ -417,7 +417,8 @@ njs_unit_test_r_subrequest(njs_vm_t *vm, return NJS_ERROR; } - callback = njs_vm_function_alloc(vm, njs_unit_test_promise_trampoline); + callback = njs_vm_function_alloc(vm, njs_unit_test_promise_trampoline, 0, + 0); if (callback == NULL) { return NJS_ERROR; } @@ -524,6 +525,29 @@ njs_unit_test_r_bind(njs_vm_t *vm, njs_v } +static njs_int_t +njs_unit_test_constructor(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, + njs_index_t unused) +{ + njs_unit_test_req_t *sr; + + sr = njs_mp_zalloc(vm->mem_pool, 
sizeof(njs_unit_test_req_t)); + if (sr == NULL) { + njs_memory_error(vm); + return NJS_ERROR; + } + + if (njs_vm_value_to_bytes(vm, &sr->uri, njs_arg(args, nargs, 1)) + != NJS_OK) + { + return NJS_ERROR; + } + + return njs_vm_external_create(vm, &vm->retval, njs_external_r_proto_id, + sr, 0); +} + + static njs_external_t njs_unit_test_r_c[] = { { @@ -831,9 +855,13 @@ njs_externals_init_internal(njs_vm_t *vm { njs_int_t ret; njs_uint_t i, j; + njs_function_t *f; + njs_opaque_value_t value; njs_unit_test_req_t *requests; njs_unit_test_prop_t *prop; + static const njs_str_t external_ctor = njs_str("ExternalConstructor"); + if (shared) { njs_external_r_proto_id = njs_vm_external_prototype(vm, njs_unit_test_r_external, @@ -842,6 +870,20 @@ njs_externals_init_internal(njs_vm_t *vm njs_printf("njs_vm_external_prototype() failed\n"); return NJS_ERROR; } + + f = njs_vm_function_alloc(vm, njs_unit_test_constructor, 1, 1); + if (f == NULL) { + njs_printf("njs_vm_function_alloc() failed\n"); + return NJS_ERROR; + } + + njs_value_function_set(njs_value_arg(&value), f); + + ret = njs_vm_bind(vm, &external_ctor, njs_value_arg(&value), 1); + if (njs_slow_path(ret != NJS_OK)) { + njs_printf("njs_vm_bind() failed\n"); + return NJS_ERROR; + } } requests = njs_mp_zalloc(vm->mem_pool, n * sizeof(njs_unit_test_req_t)); diff -r f2ab76784741 -r 68b28e924908 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Dec 07 18:11:55 2022 -0800 +++ b/src/test/njs_unit_test.c Wed Dec 07 18:11:56 2022 -0800 @@ -21652,6 +21652,9 @@ static njs_unit_test_t njs_externals_te { njs_str("var sr = $r.create('XXX'); sr.vars.p = 'a'; sr.vars.p"), njs_str("a") }, + { njs_str("var r = new ExternalConstructor('XXX'); r.uri"), + njs_str("XXX") }, + { njs_str("var p; for (p in $r.method);"), njs_str("undefined") }, @@ -21734,7 +21737,7 @@ static njs_unit_test_t njs_externals_te #endif { njs_str("Object.keys(this).sort()"), - njs_str(N262 "$r,$r2,$r3,$shared," NCRYPTO "global,njs,process") }, + njs_str(N262 
"$r,$r2,$r3,$shared,ExternalConstructor," NCRYPTO "global,njs,process") }, { njs_str("Object.getOwnPropertySymbols($r2)[0] == Symbol.toStringTag"), njs_str("true") }, From xeioex at nginx.com Thu Dec 8 02:13:57 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 08 Dec 2022 02:13:57 +0000 Subject: [njs] Added njs_vm_external_ptr(). Message-ID: details: https://hg.nginx.org/njs/rev/61357fb10f4a branches: changeset: 2012:61357fb10f4a user: Dmitry Volyntsev date: Wed Dec 07 18:11:56 2022 -0800 description: Added njs_vm_external_ptr(). diffstat: src/njs.h | 1 + src/njs_vm.c | 7 +++++++ 2 files changed, 8 insertions(+), 0 deletions(-) diffs (28 lines): diff -r 68b28e924908 -r 61357fb10f4a src/njs.h --- a/src/njs.h Wed Dec 07 18:11:56 2022 -0800 +++ b/src/njs.h Wed Dec 07 18:11:56 2022 -0800 @@ -397,6 +397,7 @@ NJS_EXPORT njs_function_t *njs_vm_functi NJS_EXPORT njs_value_t *njs_vm_retval(njs_vm_t *vm); NJS_EXPORT void njs_vm_retval_set(njs_vm_t *vm, const njs_value_t *value); NJS_EXPORT njs_mp_t *njs_vm_memory_pool(njs_vm_t *vm); +NJS_EXPORT njs_external_ptr_t njs_vm_external_ptr(njs_vm_t *vm); /* Gets string value, no copy. */ NJS_EXPORT void njs_value_string_get(njs_value_t *value, njs_str_t *dst); diff -r 68b28e924908 -r 61357fb10f4a src/njs_vm.c --- a/src/njs_vm.c Wed Dec 07 18:11:56 2022 -0800 +++ b/src/njs_vm.c Wed Dec 07 18:11:56 2022 -0800 @@ -663,6 +663,13 @@ njs_vm_memory_pool(njs_vm_t *vm) } +njs_external_ptr_t +njs_vm_external_ptr(njs_vm_t *vm) +{ + return vm->external; +} + + uintptr_t njs_vm_meta(njs_vm_t *vm, njs_uint_t index) { From xeioex at nginx.com Thu Dec 8 02:13:59 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 08 Dec 2022 02:13:59 +0000 Subject: [njs] Added njs_vm_string_compare(). Message-ID: details: https://hg.nginx.org/njs/rev/23607989a28b branches: changeset: 2013:23607989a28b user: Dmitry Volyntsev date: Wed Dec 07 18:11:57 2022 -0800 description: Added njs_vm_string_compare(). 
diffstat: src/njs.h | 2 ++ src/njs_string.c | 3 +++ src/njs_vm.c | 7 +++++++ 3 files changed, 12 insertions(+), 0 deletions(-) diffs (42 lines): diff -r 61357fb10f4a -r 23607989a28b src/njs.h --- a/src/njs.h Wed Dec 07 18:11:56 2022 -0800 +++ b/src/njs.h Wed Dec 07 18:11:57 2022 -0800 @@ -411,6 +411,8 @@ NJS_EXPORT u_char *njs_vm_value_string_a uint32_t size); NJS_EXPORT njs_int_t njs_vm_value_string_copy(njs_vm_t *vm, njs_str_t *retval, njs_value_t *value, uintptr_t *next); +NJS_EXPORT njs_int_t njs_vm_string_compare(const njs_value_t *v1, + const njs_value_t *v2); NJS_EXPORT njs_int_t njs_vm_value_array_buffer_set(njs_vm_t *vm, njs_value_t *value, const u_char *start, uint32_t size); diff -r 61357fb10f4a -r 23607989a28b src/njs_string.c --- a/src/njs_string.c Wed Dec 07 18:11:56 2022 -0800 +++ b/src/njs_string.c Wed Dec 07 18:11:57 2022 -0800 @@ -728,6 +728,9 @@ njs_string_cmp(const njs_value_t *v1, co njs_int_t ret; const u_char *start1, *start2; + njs_assert(njs_is_string(v1)); + njs_assert(njs_is_string(v2)); + size1 = v1->short_string.size; if (size1 != NJS_STRING_LONG) { diff -r 61357fb10f4a -r 23607989a28b src/njs_vm.c --- a/src/njs_vm.c Wed Dec 07 18:11:56 2022 -0800 +++ b/src/njs_vm.c Wed Dec 07 18:11:57 2022 -0800 @@ -1308,6 +1308,13 @@ njs_vm_value_to_bytes(njs_vm_t *vm, njs_ njs_int_t +njs_vm_string_compare(const njs_value_t *v1, const njs_value_t *v2) +{ + return njs_string_cmp(v1, v2); +} + + +njs_int_t njs_vm_value_string_copy(njs_vm_t *vm, njs_str_t *retval, njs_value_t *value, uintptr_t *next) { From arut at nginx.com Fri Dec 9 09:38:46 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:46 +0000 Subject: [PATCH 0 of 6] QUIC packet routing improvements Message-ID: The patchset addresses the packet routing problem in the current QUIC implementation. The problem is how to route a QUIC datagram to the right nginx worker. 
The simplest and currently available solution is to use the SO_REUSEPORT (SO_REUSEPORT_LB in FreeBSD) socket option, which allows routing a packet based on the client/server address pair. The downside of this solution is that it does not handle nginx reloads/restarts properly. Also, basic eBPF routing is currently implemented which supports client migration, but it also breaks on reloads and restarts. Details are in [1], which is my previous attempt to address the issue. Two solutions are implemented. Patch #5 improves the QUIC eBPF module to properly handle nginx reloads and restarts. Each QUIC listening allocates several worker sockets which handle existing worker connections. The main QUIC UDP sockets only handle new connections and are inherited as usual. A reuseport eBPF program routes a packet to a worker socket based on its DCIDs or to a main socket based on the address hash. Patch #6 adds experimental support for UDP client sockets. This is a direct analogy with the TCP listen/accept model. [1] https://mailman.nginx.org/archives/list/nginx-devel at nginx.org/thread/AAEKCDMXWNORDYQ5I2I36DADLF5MCWT4/ -- Roman Arutyunyan From arut at nginx.com Fri Dec 9 09:38:47 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:47 +0000 Subject: [PATCH 1 of 6] QUIC: ignore server address while looking up a connection In-Reply-To: References: Message-ID: <1038d7300c29eea02b47.1670578727@ip-10-1-18-114.eu-central-1.compute.internal> # HG changeset patch # User Roman Arutyunyan # Date 1670322119 0 # Tue Dec 06 10:21:59 2022 +0000 # Branch quic # Node ID 1038d7300c29eea02b47eac3f205e293b1e55f5b # Parent b87a0dbc1150f415def5bc1e1f00d02b33519026 QUIC: ignore server address while looking up a connection. The server connection check was copied from the common UDP code in c2f5d79cde64. In QUIC it does not make much sense though. Technically, a client is not allowed to migrate to a different server address. 
However, migrating within a single wildcard listening does not seem to affect anything. diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -13,7 +13,7 @@ static void ngx_quic_close_accepted_connection(ngx_connection_t *c); static ngx_connection_t *ngx_quic_lookup_connection(ngx_listening_t *ls, - ngx_str_t *key, struct sockaddr *local_sockaddr, socklen_t local_socklen); + ngx_str_t *key); void @@ -156,7 +156,7 @@ ngx_quic_recvmsg(ngx_event_t *ev) goto next; } - c = ngx_quic_lookup_connection(ls, &key, local_sockaddr, local_socklen); + c = ngx_quic_lookup_connection(ls, &key); if (c) { @@ -370,7 +370,6 @@ ngx_quic_rbtree_insert_value(ngx_rbtree_ ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel) { ngx_int_t rc; - ngx_connection_t *c, *ct; ngx_rbtree_node_t **p; ngx_quic_socket_t *qsock, *qsockt; @@ -387,19 +386,11 @@ ngx_quic_rbtree_insert_value(ngx_rbtree_ } else { /* node->key == temp->key */ qsock = (ngx_quic_socket_t *) node; - c = qsock->udp.connection; - qsockt = (ngx_quic_socket_t *) temp; - ct = qsockt->udp.connection; rc = ngx_memn2cmp(qsock->sid.id, qsockt->sid.id, qsock->sid.len, qsockt->sid.len); - if (rc == 0 && c->listening->wildcard) { - rc = ngx_cmp_sockaddr(c->local_sockaddr, c->local_socklen, - ct->local_sockaddr, ct->local_socklen, 1); - } - p = (rc < 0) ? 
&temp->left : &temp->right; } @@ -419,8 +410,7 @@ ngx_quic_rbtree_insert_value(ngx_rbtree_ static ngx_connection_t * -ngx_quic_lookup_connection(ngx_listening_t *ls, ngx_str_t *key, - struct sockaddr *local_sockaddr, socklen_t local_socklen) +ngx_quic_lookup_connection(ngx_listening_t *ls, ngx_str_t *key) { uint32_t hash; ngx_int_t rc; @@ -454,14 +444,8 @@ ngx_quic_lookup_connection(ngx_listening rc = ngx_memn2cmp(key->data, qsock->sid.id, key->len, qsock->sid.len); - c = qsock->udp.connection; - - if (rc == 0 && ls->wildcard) { - rc = ngx_cmp_sockaddr(local_sockaddr, local_socklen, - c->local_sockaddr, c->local_socklen, 1); - } - if (rc == 0) { + c = qsock->udp.connection; c->udp = &qsock->udp; return c; } From arut at nginx.com Fri Dec 9 09:38:48 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:48 +0000 Subject: [PATCH 2 of 6] QUIC: handle datagrams directly in ngx_quic_recvmsg() In-Reply-To: References: Message-ID: <8a7f2c71db202141d169.1670578728@ip-10-1-18-114.eu-central-1.compute.internal> # HG changeset patch # User Roman Arutyunyan # Date 1670428292 0 # Wed Dec 07 15:51:32 2022 +0000 # Branch quic # Node ID 8a7f2c71db202141d169f3ab292027bfc16ff8ec # Parent 1038d7300c29eea02b47eac3f205e293b1e55f5b QUIC: handle datagrams directly in ngx_quic_recvmsg(). Previously, ngx_quic_recvmsg() called client connection's read event handler to emulate normal event processing. Further, the read event handler handled the datagram by calling ngx_quic_handle_datagram(). Now ngx_quic_handle_datagram() is called directly from ngx_quic_recvmsg(), which simplifies the code. 
diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c +++ b/src/event/quic/ngx_event_quic.c @@ -17,8 +17,6 @@ static ngx_int_t ngx_quic_handle_statele static void ngx_quic_input_handler(ngx_event_t *rev); static void ngx_quic_close_handler(ngx_event_t *ev); -static ngx_int_t ngx_quic_handle_datagram(ngx_connection_t *c, ngx_buf_t *b, - ngx_quic_conf_t *conf); static ngx_int_t ngx_quic_handle_packet(ngx_connection_t *c, ngx_quic_conf_t *conf, ngx_quic_header_t *pkt); static ngx_int_t ngx_quic_handle_payload(ngx_connection_t *c, @@ -397,8 +395,6 @@ ngx_quic_handle_stateless_reset(ngx_conn static void ngx_quic_input_handler(ngx_event_t *rev) { - ngx_int_t rc; - ngx_buf_t *b; ngx_connection_t *c; ngx_quic_connection_t *qc; @@ -432,29 +428,6 @@ ngx_quic_input_handler(ngx_event_t *rev) return; } - - b = c->udp->buffer; - if (b == NULL) { - return; - } - - rc = ngx_quic_handle_datagram(c, b, NULL); - - if (rc == NGX_ERROR) { - ngx_quic_close_connection(c, NGX_ERROR); - return; - } - - if (rc == NGX_DONE) { - return; - } - - /* rc == NGX_OK */ - - qc->send_timer_set = 0; - ngx_add_timer(rev, qc->tp.max_idle_timeout); - - ngx_quic_connstate_dbg(c); } @@ -654,7 +627,7 @@ ngx_quic_close_handler(ngx_event_t *ev) } -static ngx_int_t +ngx_int_t ngx_quic_handle_datagram(ngx_connection_t *c, ngx_buf_t *b, ngx_quic_conf_t *conf) { @@ -753,6 +726,11 @@ ngx_quic_handle_datagram(ngx_connection_ qc->error_reason = "QUIC flood detected"; return NGX_ERROR; } + + qc->send_timer_set = 0; + ngx_add_timer(c->read, qc->tp.max_idle_timeout); + + ngx_quic_connstate_dbg(c); } return NGX_OK; diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -260,6 +260,8 @@ struct ngx_quic_connection_s { }; +ngx_int_t ngx_quic_handle_datagram(ngx_connection_t *c, ngx_buf_t *b, + ngx_quic_conf_t *conf); 
ngx_int_t ngx_quic_apply_transport_params(ngx_connection_t *c, ngx_quic_tp_t *ctp); void ngx_quic_discard_ctx(ngx_connection_t *c, diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -264,12 +264,6 @@ ngx_quic_set_path(ngx_connection_t *c, n len = pkt->raw->last - pkt->raw->start; - if (c->udp->buffer == NULL) { - /* first ever packet in connection, path already exists */ - path = qc->path; - goto update; - } - probe = NULL; for (q = ngx_queue_head(&qc->paths); diff --git a/src/event/quic/ngx_event_quic_socket.c b/src/event/quic/ngx_event_quic_socket.c --- a/src/event/quic/ngx_event_quic_socket.c +++ b/src/event/quic/ngx_event_quic_socket.c @@ -61,6 +61,9 @@ ngx_quic_open_sockets(ngx_connection_t * } ngx_memcpy(qc->tp.initial_scid.data, qsock->sid.id, qsock->sid.len); + ngx_memcpy(&qsock->sockaddr.sockaddr, c->sockaddr, c->socklen); + qsock->socklen = c->socklen; + /* for all packets except first, this is set at udp layer */ c->udp = &qsock->udp; diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -186,21 +186,11 @@ ngx_quic_recvmsg(ngx_event_t *ev) ngx_memcpy(&qsock->sockaddr.sockaddr, sockaddr, socklen); qsock->socklen = socklen; - c->udp->buffer = &buf; - - rev = c->read; - rev->ready = 1; - rev->active = 0; - - rev->handler(rev); - - if (c->udp) { - c->udp->buffer = NULL; + if (ngx_quic_handle_datagram(c, &buf, NULL) == NGX_ERROR) { + ngx_quic_close_connection(c, NGX_ERROR); + return; } - rev->ready = 0; - rev->active = 1; - goto next; } From arut at nginx.com Fri Dec 9 09:38:49 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:49 +0000 Subject: [PATCH 3 of 6] QUIC: eliminated timeout handling in listen connection read event In-Reply-To: References: 
Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1670428974 0 # Wed Dec 07 16:02:54 2022 +0000 # Branch quic # Node ID b5c30f16ec8ba3ace2f58d77d294d9b355bf3267 # Parent 8a7f2c71db202141d169f3ab292027bfc16ff8ec QUIC: eliminated timeout handling in listen connection read event. The timeout is never set for QUIC. diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -40,14 +40,6 @@ ngx_quic_recvmsg(ngx_event_t *ev) u_char msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))]; #endif - if (ev->timedout) { - if (ngx_enable_accept_events((ngx_cycle_t *) ngx_cycle) != NGX_OK) { - return; - } - - ev->timedout = 0; - } - ecf = ngx_event_get_conf(ngx_cycle->conf_ctx, ngx_event_core_module); if (!(ngx_event_flags & NGX_USE_KQUEUE_EVENT)) { From arut at nginx.com Fri Dec 9 09:38:50 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:50 +0000 Subject: [PATCH 4 of 6] QUIC: never disable QUIC socket events In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1670256830 0 # Mon Dec 05 16:13:50 2022 +0000 # Branch quic # Node ID de8bcaea559d151f5945d0a2e06c61b56a26a52b # Parent b5c30f16ec8ba3ace2f58d77d294d9b355bf3267 QUIC: never disable QUIC socket events. Unlike TCP accept(), current QUIC implementation does not require new file descriptors for new clients. Also, it does not work with accept mutex since it normally requires reuseport option. 
diff --git a/src/event/ngx_event_accept.c b/src/event/ngx_event_accept.c --- a/src/event/ngx_event_accept.c +++ b/src/event/ngx_event_accept.c @@ -416,6 +416,12 @@ ngx_disable_accept_events(ngx_cycle_t *c #endif +#if (NGX_QUIC) + if (ls[i].quic) { + continue; + } +#endif + if (ngx_del_event(c->read, NGX_READ_EVENT, NGX_DISABLE_EVENT) == NGX_ERROR) { From arut at nginx.com Fri Dec 9 09:38:51 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:51 +0000 Subject: [PATCH 5 of 6] QUIC: eBPF worker sockets In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1670494771 0 # Thu Dec 08 10:19:31 2022 +0000 # Branch quic # Node ID afbac4ba4c75023e10e68bae39df5b1a0fdbd17b # Parent de8bcaea559d151f5945d0a2e06c61b56a26a52b QUIC: eBPF worker sockets. For each nginx worker, a worker socket is created. Worker sockets process existing QUIC connections bound to the worker, while listen sockets handle new connections. When shutting down a worker, listen sockets are closed as usual, while worker sockets keep handling existing connections. Reuseport eBPF program looks up a worker socket by packet DCID and, if not found, chooses a listen socket based on packet address hash. The mode is enabled by "quic_bpf on" directive and is only available on Linux. Reuseport listen parameter is required to enable the feature on a QUIC listen. diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -1363,11 +1363,11 @@ if [ $USE_OPENSSL_QUIC = YES ]; then .
auto/module - if [ $QUIC_BPF = YES -a $SO_COOKIE_FOUND = YES ]; then + if [ $QUIC_BPF = YES ]; then ngx_module_type=CORE ngx_module_name=ngx_quic_bpf_module ngx_module_incs= - ngx_module_deps= + ngx_module_deps=src/event/quic/ngx_event_quic_bpf.h ngx_module_srcs="src/event/quic/ngx_event_quic_bpf.c \ src/event/quic/ngx_event_quic_bpf_code.c" ngx_module_libs= diff --git a/auto/options b/auto/options --- a/auto/options +++ b/auto/options @@ -171,8 +171,6 @@ USE_GEOIP=NO NGX_GOOGLE_PERFTOOLS=NO NGX_CPP_TEST=NO -SO_COOKIE_FOUND=NO - NGX_LIBATOMIC=NO NGX_CPU_CACHE_LINE= diff --git a/auto/os/linux b/auto/os/linux --- a/auto/os/linux +++ b/auto/os/linux @@ -234,7 +234,7 @@ ngx_include="sys/vfs.h"; . auto/incl # BPF sockhash -ngx_feature="BPF sockhash" +ngx_feature="BPF maps" ngx_feature_name="NGX_HAVE_BPF" ngx_feature_run=no ngx_feature_incs="#include @@ -245,7 +245,14 @@ ngx_feature_test="union bpf_attr attr = attr.map_flags = 0; attr.map_type = BPF_MAP_TYPE_SOCKHASH; + syscall(__NR_bpf, 0, &attr, 0); + attr.map_flags = 0; + attr.map_type = BPF_MAP_TYPE_SOCKMAP; + syscall(__NR_bpf, 0, &attr, 0); + + attr.map_flags = 0; + attr.map_type = BPF_MAP_TYPE_ARRAY; syscall(__NR_bpf, 0, &attr, 0);" . auto/feature @@ -259,23 +266,6 @@ if [ $ngx_found = yes ]; then fi -ngx_feature="SO_COOKIE" -ngx_feature_name="NGX_HAVE_SO_COOKIE" -ngx_feature_run=no -ngx_feature_incs="#include - #include " -ngx_feature_path= -ngx_feature_libs= -ngx_feature_test="socklen_t optlen = sizeof(uint64_t); - uint64_t cookie; - getsockopt(0, SOL_SOCKET, SO_COOKIE, &cookie, &optlen)" -. 
auto/feature - -if [ $ngx_found = yes ]; then - SO_COOKIE_FOUND=YES -fi - - # UDP segmentation offloading ngx_feature="UDP_SEGMENT" diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -1033,12 +1033,6 @@ ngx_close_listening_sockets(ngx_cycle_t ls = cycle->listening.elts; for (i = 0; i < cycle->listening.nelts; i++) { -#if (NGX_QUIC) - if (ls[i].quic) { - continue; - } -#endif - c = ls[i].connection; if (c) { diff --git a/src/event/ngx_event.h b/src/event/ngx_event.h --- a/src/event/ngx_event.h +++ b/src/event/ngx_event.h @@ -73,6 +73,7 @@ struct ngx_event_s { /* to test on worker exit */ unsigned channel:1; unsigned resolver:1; + unsigned quic:1; unsigned cancelable:1; diff --git a/src/event/quic/bpf/makefile b/src/event/quic/bpf/makefile --- a/src/event/quic/bpf/makefile +++ b/src/event/quic/bpf/makefile @@ -1,4 +1,4 @@ -CFLAGS=-O2 -Wall +CFLAGS=-O2 -Wall $(MAKE_CFLAGS) LICENSE=BSD diff --git a/src/event/quic/bpf/ngx_quic_reuseport_helper.c b/src/event/quic/bpf/ngx_quic_reuseport_helper.c --- a/src/event/quic/bpf/ngx_quic_reuseport_helper.c +++ b/src/event/quic/bpf/ngx_quic_reuseport_helper.c @@ -44,97 +44,93 @@ char _license[] SEC("license") = LICENSE #define NGX_QUIC_SERVER_CID_LEN 20 -#define advance_data(nbytes) \ - offset += nbytes; \ - if (start + offset > end) { \ - debugmsg("cannot read %ld bytes at offset %ld", nbytes, offset); \ - goto failed; \ - } \ - data = start + offset - 1; - - -#define ngx_quic_parse_uint64(p) \ - (((__u64)(p)[0] << 56) | \ - ((__u64)(p)[1] << 48) | \ - ((__u64)(p)[2] << 40) | \ - ((__u64)(p)[3] << 32) | \ - ((__u64)(p)[4] << 24) | \ - ((__u64)(p)[5] << 16) | \ - ((__u64)(p)[6] << 8) | \ - ((__u64)(p)[7])) - -/* - * actual map object is created by the "bpf" system call, - * all pointers to this variable are replaced by the bpf loader - */ -struct bpf_map_def SEC("maps") ngx_quic_sockmap; +struct bpf_map_def SEC("maps") ngx_quic_listen; +struct 
bpf_map_def SEC("maps") ngx_quic_worker; +struct bpf_map_def SEC("maps") ngx_quic_nlisten; SEC(PROGNAME) -int ngx_quic_select_socket_by_dcid(struct sk_reuseport_md *ctx) +int ngx_quic_select_socket_by_dcid(struct sk_reuseport_md *ctx) \ { - int rc; - __u64 key; + int rc, flags; + __u32 zero, *nsockets, ns; size_t len, offset; - unsigned char *start, *end, *data, *dcid; + unsigned char *start, *end, dcid[NGX_QUIC_SERVER_CID_LEN]; start = ctx->data; - end = (unsigned char *) ctx->data_end; - offset = 0; + end = ctx->data_end; - advance_data(sizeof(struct udphdr)); /* data at UDP header */ - advance_data(1); /* data at QUIC flags */ - - if (data[0] & NGX_QUIC_PKT_LONG) { + offset = sizeof(struct udphdr) + 1; /* UDP header + QUIC flags */ + if (start + offset > end) { + goto bad_dgram; + } - advance_data(4); /* data at QUIC version */ - advance_data(1); /* data at DCID len */ + flags = start[offset - 1]; + if (flags & NGX_QUIC_PKT_LONG) { - len = data[0]; /* read DCID length */ - - if (len < 8) { - /* it's useless to search for key in such short DCID */ - return SK_PASS; + offset += 5; /* QUIC version + DCID len */ + if (start + offset > end) { + goto bad_dgram; } - } else { - len = NGX_QUIC_SERVER_CID_LEN; + len = start[offset - 1]; + if (len != NGX_QUIC_SERVER_CID_LEN) { + goto new_conn; + } + } + + if (start + offset + NGX_QUIC_SERVER_CID_LEN > end) { + goto bad_dgram; + } + + memcpy(dcid, start + offset, NGX_QUIC_SERVER_CID_LEN); + + rc = bpf_sk_select_reuseport(ctx, &ngx_quic_worker, dcid, 0); + + if (rc == 0) { + debugmsg("nginx quic socket selected by dcid"); + return SK_PASS; } - dcid = &data[1]; - advance_data(len); /* we expect the packet to have full DCID */ + if (rc != -ENOENT) { + debugmsg("nginx quic bpf_sk_select_reuseport() failed:%d", rc); + return SK_DROP; + } + +new_conn: - /* make verifier happy */ - if (dcid + sizeof(__u64) > end) { - goto failed; + zero = 0; + + nsockets = bpf_map_lookup_elem(&ngx_quic_nlisten, &zero); + + if (nsockets == NULL) { 
+ debugmsg("nginx quic nsockets undefined"); + return SK_DROP; } - key = ngx_quic_parse_uint64(dcid); - - rc = bpf_sk_select_reuseport(ctx, &ngx_quic_sockmap, &key, 0); + ns = ctx->hash % *nsockets; - switch (rc) { - case 0: - debugmsg("nginx quic socket selected by key 0x%llx", key); - return SK_PASS; + rc = bpf_sk_select_reuseport(ctx, &ngx_quic_listen, &ns, 0); - /* kernel returns positive error numbers, errno.h defines positive */ - case -ENOENT: - debugmsg("nginx quic default route for key 0x%llx", key); - /* let the default reuseport logic decide which socket to choose */ + if (rc == 0) { + debugmsg("nginx quic socket selected by hash:%d", (int) ns); return SK_PASS; - - default: - debugmsg("nginx quic bpf_sk_select_reuseport err: %d key 0x%llx", - rc, key); - goto failed; } -failed: - /* - * SK_DROP will generate ICMP, but we may want to process "invalid" packet - * in userspace quic to investigate further and finally react properly - * (maybe ignore, maybe send something in response or close connection) - */ - return SK_PASS; + if (rc != -ENOENT) { + debugmsg("nginx quic bpf_sk_select_reuseport() failed:%d", rc); + return SK_DROP; + } + + (void) bpf_map_update_elem(&ngx_quic_nlisten, &zero, &ns, BPF_ANY); + + debugmsg("nginx quic cut sockets array:%d", (int) ns); + + return SK_DROP; + +bad_dgram: + + debugmsg("nginx quic bad datagram"); + + return SK_DROP; } diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c +++ b/src/event/quic/ngx_event_quic.c @@ -315,6 +315,10 @@ ngx_quic_new_connection(ngx_connection_t qc->congestion.ssthresh = (size_t) -1; qc->congestion.recovery_start = ngx_current_msec; + if (c->fd == c->listening->fd) { + qc->listen_bound = 1; + } + if (pkt->validated && pkt->retried) { qc->tp.retry_scid.len = pkt->dcid.len; qc->tp.retry_scid.data = ngx_pstrdup(c->pool, &pkt->dcid); @@ -422,6 +426,15 @@ ngx_quic_input_handler(ngx_event_t *rev) return; } + if (qc->listen_bound) { + 
c->fd = (ngx_socket_t) -1; + + qc->error = NGX_QUIC_ERR_NO_ERROR; + qc->error_reason = "graceful shutdown"; + ngx_quic_close_connection(c, NGX_ERROR); + return; + } + if (!qc->closing && qc->conf->shutdown) { qc->conf->shutdown(c); } @@ -894,14 +907,6 @@ ngx_quic_handle_packet(ngx_connection_t pkt->odcid = pkt->dcid; } - if (ngx_terminate || ngx_exiting) { - if (conf->retry) { - return ngx_quic_send_retry(c, conf, pkt); - } - - return NGX_ERROR; - } - c->log->action = "creating quic connection"; qc = ngx_quic_new_connection(c, conf, pkt); diff --git a/src/event/quic/ngx_event_quic_bpf.c b/src/event/quic/ngx_event_quic_bpf.c --- a/src/event/quic/ngx_event_quic_bpf.c +++ b/src/event/quic/ngx_event_quic_bpf.c @@ -6,6 +6,8 @@ #include #include +#include +#include #define NGX_QUIC_BPF_VARNAME "NGINX_BPF_MAPS" @@ -26,39 +28,57 @@ typedef struct { ngx_queue_t queue; - int map_fd; + + int listen_map; + int worker_map; + int nlisten_map; struct sockaddr *sockaddr; socklen_t socklen; - ngx_uint_t unused; /* unsigned unused:1; */ -} ngx_quic_sock_group_t; + + ngx_array_t listening; + + ngx_uint_t nlisten; + ngx_uint_t old_nlisten; +} ngx_quic_bpf_group_t; + + +typedef struct { + ngx_socket_t fd; + ngx_listening_t *listening; + ngx_connection_t *connection; +} ngx_quic_bpf_listening_t; typedef struct { ngx_flag_t enabled; - ngx_uint_t map_size; - ngx_queue_t groups; /* of ngx_quic_sock_group_t */ + ngx_uint_t max_connection_ids; + ngx_uint_t max_workers; + ngx_queue_t groups; } ngx_quic_bpf_conf_t; static void *ngx_quic_bpf_create_conf(ngx_cycle_t *cycle); +static char *ngx_quic_bpf_init_conf(ngx_cycle_t *cycle, void *conf); static ngx_int_t ngx_quic_bpf_module_init(ngx_cycle_t *cycle); static void ngx_quic_bpf_cleanup(void *data); static ngx_inline void ngx_quic_bpf_close(ngx_log_t *log, int fd, const char *name); -static ngx_quic_sock_group_t *ngx_quic_bpf_find_group(ngx_quic_bpf_conf_t *bcf, +static ngx_quic_bpf_group_t *ngx_quic_bpf_find_group(ngx_cycle_t *cycle, + 
ngx_listening_t *ls); +static ngx_quic_bpf_group_t *ngx_quic_bpf_alloc_group(ngx_cycle_t *cycle, ngx_listening_t *ls); -static ngx_quic_sock_group_t *ngx_quic_bpf_alloc_group(ngx_cycle_t *cycle, - struct sockaddr *sa, socklen_t socklen); -static ngx_quic_sock_group_t *ngx_quic_bpf_create_group(ngx_cycle_t *cycle, +static ngx_quic_bpf_group_t *ngx_quic_bpf_create_group(ngx_cycle_t *cycle, ngx_listening_t *ls); -static ngx_quic_sock_group_t *ngx_quic_bpf_get_group(ngx_cycle_t *cycle, +static ngx_int_t ngx_quic_bpf_inherit_fd(ngx_cycle_t *cycle, int fd); +static ngx_quic_bpf_group_t *ngx_quic_bpf_get_group(ngx_cycle_t *cycle, ngx_listening_t *ls); static ngx_int_t ngx_quic_bpf_group_add_socket(ngx_cycle_t *cycle, ngx_listening_t *ls); -static uint64_t ngx_quic_bpf_socket_key(ngx_fd_t fd, ngx_log_t *log); +static ngx_int_t ngx_quic_bpf_add_worker_socket(ngx_cycle_t *cycle, + ngx_quic_bpf_group_t *grp, ngx_listening_t *ls); static ngx_int_t ngx_quic_bpf_export_maps(ngx_cycle_t *cycle); static ngx_int_t ngx_quic_bpf_import_maps(ngx_cycle_t *cycle); @@ -82,7 +102,7 @@ static ngx_command_t ngx_quic_bpf_comma static ngx_core_module_t ngx_quic_bpf_module_ctx = { ngx_string("quic_bpf"), ngx_quic_bpf_create_conf, - NULL + ngx_quic_bpf_init_conf }; @@ -113,7 +133,6 @@ ngx_quic_bpf_create_conf(ngx_cycle_t *cy } bcf->enabled = NGX_CONF_UNSET; - bcf->map_size = NGX_CONF_UNSET_UINT; ngx_queue_init(&bcf->groups); @@ -121,12 +140,41 @@ ngx_quic_bpf_create_conf(ngx_cycle_t *cy } +static char * +ngx_quic_bpf_init_conf(ngx_cycle_t *cycle, void *conf) +{ + ngx_quic_bpf_conf_t *bcf = conf; + + ngx_quic_bpf_conf_t *obcf; + + ngx_conf_init_value(bcf->enabled, 0); + + if (cycle->old_cycle->conf_ctx == NULL) { + return NGX_CONF_OK; + } + + obcf = ngx_quic_bpf_get_conf(cycle->old_cycle); + if (obcf == NULL) { + return NGX_CONF_OK; + } + + if (obcf->enabled != bcf->enabled) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, + "cannot change \"quic_bpf\" after reload, ignoring"); + bcf->enabled = 
obcf->enabled; + } + + return NGX_CONF_OK; +} + + static ngx_int_t ngx_quic_bpf_module_init(ngx_cycle_t *cycle) { ngx_uint_t i; ngx_listening_t *ls; ngx_core_conf_t *ccf; + ngx_event_conf_t *ecf; ngx_pool_cleanup_t *cln; ngx_quic_bpf_conf_t *bcf; @@ -138,12 +186,16 @@ ngx_quic_bpf_module_init(ngx_cycle_t *cy return NGX_OK; } - ccf = ngx_core_get_conf(cycle); bcf = ngx_quic_bpf_get_conf(cycle); + if (!bcf->enabled) { + return NGX_OK; + } - ngx_conf_init_value(bcf->enabled, 0); + ccf = ngx_core_get_conf(cycle); + ecf = ngx_event_get_conf(cycle->conf_ctx, ngx_event_core_module); - bcf->map_size = ccf->worker_processes * 4; + bcf->max_connection_ids = ecf->connections * NGX_QUIC_MAX_SERVER_IDS; + bcf->max_workers = ccf->worker_processes * 4; cln = ngx_pool_cleanup_add(cycle->pool, 0); if (cln == NULL) { @@ -153,6 +205,8 @@ ngx_quic_bpf_module_init(ngx_cycle_t *cy cln->data = bcf; cln->handler = ngx_quic_bpf_cleanup; + ls = cycle->listening.elts; + if (ngx_inherited && ngx_is_init_cycle(cycle->old_cycle)) { if (ngx_quic_bpf_import_maps(cycle) != NGX_OK) { goto failed; @@ -208,16 +262,32 @@ ngx_quic_bpf_cleanup(void *data) { ngx_quic_bpf_conf_t *bcf = (ngx_quic_bpf_conf_t *) data; - ngx_queue_t *q; - ngx_quic_sock_group_t *grp; + ngx_uint_t i; + ngx_queue_t *q; + ngx_quic_bpf_group_t *grp; + ngx_quic_bpf_listening_t *bls; for (q = ngx_queue_head(&bcf->groups); q != ngx_queue_sentinel(&bcf->groups); q = ngx_queue_next(q)) { - grp = ngx_queue_data(q, ngx_quic_sock_group_t, queue); + grp = ngx_queue_data(q, ngx_quic_bpf_group_t, queue); + + ngx_quic_bpf_close(ngx_cycle->log, grp->listen_map, "listen"); + ngx_quic_bpf_close(ngx_cycle->log, grp->worker_map, "worker"); + ngx_quic_bpf_close(ngx_cycle->log, grp->nlisten_map, "nlisten"); + + bls = grp->listening.elts; - ngx_quic_bpf_close(ngx_cycle->log, grp->map_fd, "map"); + for (i = 0; i < grp->listening.nelts; i++) { + if (bls[i].fd != (ngx_socket_t) -1) { + if (ngx_close_socket(bls[i].fd) == -1) { + 
ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, + ngx_socket_errno, + ngx_close_socket_n " failed"); + } + } + } } } @@ -230,25 +300,32 @@ ngx_quic_bpf_close(ngx_log_t *log, int f } ngx_log_error(NGX_LOG_EMERG, log, ngx_errno, - "quic bpf close %s fd:%d failed", name, fd); + "QUIC BPF close %s map fd:%d failed", name, fd); } -static ngx_quic_sock_group_t * -ngx_quic_bpf_find_group(ngx_quic_bpf_conf_t *bcf, ngx_listening_t *ls) +static ngx_quic_bpf_group_t * +ngx_quic_bpf_find_group(ngx_cycle_t *cycle, ngx_listening_t *ls) { - ngx_queue_t *q; - ngx_quic_sock_group_t *grp; + ngx_queue_t *q; + ngx_quic_bpf_conf_t *bcf; + ngx_quic_bpf_group_t *grp; + + bcf = ngx_quic_bpf_get_conf(cycle); + + if (!bcf->enabled || !ls->quic || !ls->reuseport) { + return NULL; + } for (q = ngx_queue_head(&bcf->groups); q != ngx_queue_sentinel(&bcf->groups); q = ngx_queue_next(q)) { - grp = ngx_queue_data(q, ngx_quic_sock_group_t, queue); + grp = ngx_queue_data(q, ngx_quic_bpf_group_t, queue); if (ngx_cmp_sockaddr(ls->sockaddr, ls->socklen, grp->sockaddr, grp->socklen, 1) - == NGX_OK) + == 0) { return grp; } @@ -258,26 +335,32 @@ ngx_quic_bpf_find_group(ngx_quic_bpf_con } -static ngx_quic_sock_group_t * -ngx_quic_bpf_alloc_group(ngx_cycle_t *cycle, struct sockaddr *sa, - socklen_t socklen) +static ngx_quic_bpf_group_t * +ngx_quic_bpf_alloc_group(ngx_cycle_t *cycle, ngx_listening_t *ls) { ngx_quic_bpf_conf_t *bcf; - ngx_quic_sock_group_t *grp; + ngx_quic_bpf_group_t *grp; bcf = ngx_quic_bpf_get_conf(cycle); - grp = ngx_pcalloc(cycle->pool, sizeof(ngx_quic_sock_group_t)); + grp = ngx_pcalloc(cycle->pool, sizeof(ngx_quic_bpf_group_t)); if (grp == NULL) { return NULL; } - grp->socklen = socklen; - grp->sockaddr = ngx_palloc(cycle->pool, socklen); - if (grp->sockaddr == NULL) { + grp->listen_map = -1; + grp->worker_map = -1; + grp->nlisten_map = -1; + + grp->sockaddr = ls->sockaddr; + grp->socklen = ls->socklen; + + if (ngx_array_init(&grp->listening, cycle->pool, 1, + 
sizeof(ngx_quic_bpf_listening_t)) + != NGX_OK) + { return NULL; } - ngx_memcpy(grp->sockaddr, sa, socklen); ngx_queue_insert_tail(&bcf->groups, &grp->queue); @@ -285,50 +368,72 @@ ngx_quic_bpf_alloc_group(ngx_cycle_t *cy } -static ngx_quic_sock_group_t * +static ngx_quic_bpf_group_t * ngx_quic_bpf_create_group(ngx_cycle_t *cycle, ngx_listening_t *ls) { - int progfd, failed, flags, rc; - ngx_quic_bpf_conf_t *bcf; - ngx_quic_sock_group_t *grp; + int progfd, failed; + ngx_quic_bpf_conf_t *bcf; + ngx_quic_bpf_group_t *grp; bcf = ngx_quic_bpf_get_conf(cycle); - if (!bcf->enabled) { - return NULL; - } - - grp = ngx_quic_bpf_alloc_group(cycle, ls->sockaddr, ls->socklen); + grp = ngx_quic_bpf_alloc_group(cycle, ls); if (grp == NULL) { return NULL; } - grp->map_fd = ngx_bpf_map_create(cycle->log, BPF_MAP_TYPE_SOCKHASH, - sizeof(uint64_t), sizeof(uint64_t), - bcf->map_size, 0); - if (grp->map_fd == -1) { + grp->listen_map = ngx_bpf_map_create(cycle->log, BPF_MAP_TYPE_SOCKMAP, + sizeof(uint32_t), sizeof(uint64_t), + bcf->max_workers, 0); + if (grp->listen_map == -1) { goto failed; } - flags = fcntl(grp->map_fd, F_GETFD); - if (flags == -1) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, errno, - "quic bpf getfd failed"); - goto failed; - } - - /* need to inherit map during binary upgrade after exec */ - flags &= ~FD_CLOEXEC; - - rc = fcntl(grp->map_fd, F_SETFD, flags); - if (rc == -1) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, errno, - "quic bpf setfd failed"); + if (ngx_quic_bpf_inherit_fd(cycle, grp->listen_map) != NGX_OK) { goto failed; } ngx_bpf_program_link(&ngx_quic_reuseport_helper, - "ngx_quic_sockmap", grp->map_fd); + "ngx_quic_listen", grp->listen_map); + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, + "quic bpf listen map created fd:%d", grp->listen_map); + + + grp->worker_map = ngx_bpf_map_create(cycle->log, BPF_MAP_TYPE_SOCKHASH, + NGX_QUIC_SERVER_CID_LEN, sizeof(uint64_t), + bcf->max_connection_ids, 0); + if (grp->worker_map == -1) { + goto failed; + } + + 
if (ngx_quic_bpf_inherit_fd(cycle, grp->worker_map) != NGX_OK) { + goto failed; + } + + ngx_bpf_program_link(&ngx_quic_reuseport_helper, + "ngx_quic_worker", grp->worker_map); + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, + "quic bpf worker map created fd:%d", grp->worker_map); + + + grp->nlisten_map = ngx_bpf_map_create(cycle->log, BPF_MAP_TYPE_ARRAY, + sizeof(uint32_t), sizeof(uint32_t), 1, 0); + if (grp->nlisten_map == -1) { + goto failed; + } + + if (ngx_quic_bpf_inherit_fd(cycle, grp->nlisten_map) != NGX_OK) { + goto failed; + } + + ngx_bpf_program_link(&ngx_quic_reuseport_helper, + "ngx_quic_nlisten", grp->nlisten_map); + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, + "quic bpf nlisten map created fd:%d", grp->nlisten_map); + progfd = ngx_bpf_load_program(cycle->log, &ngx_quic_reuseport_helper); if (progfd < 0) { @@ -352,14 +457,116 @@ ngx_quic_bpf_create_group(ngx_cycle_t *c goto failed; } - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0, - "quic bpf sockmap created fd:%d", grp->map_fd); return grp; failed: - if (grp->map_fd != -1) { - ngx_quic_bpf_close(cycle->log, grp->map_fd, "map"); + if (grp->listen_map != -1) { + ngx_quic_bpf_close(cycle->log, grp->listen_map, "listen"); + } + + if (grp->worker_map != -1) { + ngx_quic_bpf_close(cycle->log, grp->worker_map, "worker"); + } + + if (grp->nlisten_map != -1) { + ngx_quic_bpf_close(cycle->log, grp->nlisten_map, "nlisten"); + } + + ngx_queue_remove(&grp->queue); + + return NULL; +} + + +static ngx_int_t +ngx_quic_bpf_inherit_fd(ngx_cycle_t *cycle, int fd) +{ + int flags; + + flags = fcntl(fd, F_GETFD); + if (flags == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "fcntl(F_GETFD) failed"); + return NGX_ERROR; + } + + flags &= ~FD_CLOEXEC; + + if (fcntl(fd, F_SETFD, flags) == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "fcntl(F_SETFD) failed"); + return NGX_ERROR; + } + + return NGX_OK; +} + + +static ngx_quic_bpf_group_t * +ngx_quic_bpf_get_group(ngx_cycle_t 
*cycle, ngx_listening_t *ls) +{ + ngx_quic_bpf_conf_t *old_bcf; + ngx_quic_bpf_group_t *grp, *ogrp; + + grp = ngx_quic_bpf_find_group(cycle, ls); + if (grp) { + return grp; + } + + old_bcf = ngx_quic_bpf_get_old_conf(cycle); + if (old_bcf == NULL) { + return ngx_quic_bpf_create_group(cycle, ls); + } + + ogrp = ngx_quic_bpf_find_group(cycle->old_cycle, ls); + if (ogrp == NULL) { + return ngx_quic_bpf_create_group(cycle, ls); + } + + grp = ngx_quic_bpf_alloc_group(cycle, ls); + if (grp == NULL) { + return NULL; + } + + grp->old_nlisten = ogrp->nlisten; + + grp->listen_map = dup(ogrp->listen_map); + if (grp->listen_map == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "failed to duplicate QUIC BPF listen map"); + + goto failed; + } + + grp->worker_map = dup(ogrp->worker_map); + if (grp->worker_map == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "failed to duplicate QUIC BPF worker map"); + goto failed; + } + + grp->nlisten_map = dup(ogrp->nlisten_map); + if (grp->nlisten_map == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "failed to duplicate QUIC BPF nlisten map"); + goto failed; + } + + return grp; + +failed: + + if (grp->listen_map != -1) { + ngx_quic_bpf_close(cycle->log, grp->listen_map, "listen"); + } + + if (grp->worker_map != -1) { + ngx_quic_bpf_close(cycle->log, grp->worker_map, "worker"); + } + + if (grp->nlisten_map != -1) { + ngx_quic_bpf_close(cycle->log, grp->nlisten_map, "nlisten"); } ngx_queue_remove(&grp->queue); @@ -368,129 +575,148 @@ failed: } -static ngx_quic_sock_group_t * -ngx_quic_bpf_get_group(ngx_cycle_t *cycle, ngx_listening_t *ls) -{ - ngx_quic_bpf_conf_t *bcf, *old_bcf; - ngx_quic_sock_group_t *grp, *ogrp; - - bcf = ngx_quic_bpf_get_conf(cycle); - - grp = ngx_quic_bpf_find_group(bcf, ls); - if (grp) { - return grp; - } - - old_bcf = ngx_quic_bpf_get_old_conf(cycle); - - if (old_bcf == NULL) { - return ngx_quic_bpf_create_group(cycle, ls); - } - - ogrp = ngx_quic_bpf_find_group(old_bcf, ls); 
- if (ogrp == NULL) { - return ngx_quic_bpf_create_group(cycle, ls); - } - - grp = ngx_quic_bpf_alloc_group(cycle, ls->sockaddr, ls->socklen); - if (grp == NULL) { - return NULL; - } - - grp->map_fd = dup(ogrp->map_fd); - if (grp->map_fd == -1) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, - "quic bpf failed to duplicate bpf map descriptor"); - - ngx_queue_remove(&grp->queue); - - return NULL; - } - - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, cycle->log, 0, - "quic bpf sockmap fd duplicated old:%d new:%d", - ogrp->map_fd, grp->map_fd); - - return grp; -} - - static ngx_int_t ngx_quic_bpf_group_add_socket(ngx_cycle_t *cycle, ngx_listening_t *ls) { - uint64_t cookie; - ngx_quic_bpf_conf_t *bcf; - ngx_quic_sock_group_t *grp; - - bcf = ngx_quic_bpf_get_conf(cycle); + uint32_t zero, key; + ngx_quic_bpf_group_t *grp; grp = ngx_quic_bpf_get_group(cycle, ls); + if (grp == NULL) { + return NGX_ERROR; + } - if (grp == NULL) { - if (!bcf->enabled) { - return NGX_OK; - } - + if (ngx_quic_bpf_add_worker_socket(cycle, grp, ls) != NGX_OK) { return NGX_ERROR; } - grp->unused = 0; + key = ls->worker; - cookie = ngx_quic_bpf_socket_key(ls->fd, cycle->log); - if (cookie == (uint64_t) NGX_ERROR) { + if (ngx_bpf_map_update(grp->listen_map, &key, &ls->fd, BPF_ANY) == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "failed to update QUIC BPF listen map"); return NGX_ERROR; } - /* map[cookie] = socket; for use in kernel helper */ - if (ngx_bpf_map_update(grp->map_fd, &cookie, &ls->fd, BPF_ANY) == -1) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, - "quic bpf failed to update socket map key=%xL", cookie); - return NGX_ERROR; + if (grp->nlisten >= ls->worker + 1) { + return NGX_OK; } - ngx_log_debug4(NGX_LOG_DEBUG_EVENT, cycle->log, 0, - "quic bpf sockmap fd:%d add socket:%d cookie:0x%xL worker:%ui", - grp->map_fd, ls->fd, cookie, ls->worker); + grp->nlisten = ls->worker + 1; + + if (grp->nlisten <= grp->old_nlisten) { + return NGX_OK; + } - /* do not inherit this 
socket */ - ls->ignore = 1; + zero = 0; + key = grp->nlisten; + + if (ngx_bpf_map_update(grp->nlisten_map, &zero, &key, BPF_ANY) == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "failed to update QUIC BPF nlisten map"); + return NGX_ERROR; + } return NGX_OK; } -static uint64_t -ngx_quic_bpf_socket_key(ngx_fd_t fd, ngx_log_t *log) +static ngx_int_t +ngx_quic_bpf_add_worker_socket(ngx_cycle_t *cycle, ngx_quic_bpf_group_t *grp, + ngx_listening_t *ls) { - uint64_t cookie; - socklen_t optlen; + int value; + ngx_uint_t i, n; + ngx_socket_t s; + ngx_quic_bpf_listening_t *bls; + + s = ngx_socket(ls->sockaddr->sa_family, SOCK_DGRAM, 0); + if (s == (ngx_socket_t) -1) { + ngx_log_error(NGX_LOG_ERR, cycle->log, ngx_socket_errno, + ngx_socket_n " failed"); + return NGX_ERROR; + } - optlen = sizeof(cookie); + if (ngx_nonblocking(s) == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_socket_errno, + ngx_nonblocking_n " client socket failed"); + goto failed; + } + + value = 1; - if (getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &optlen) == -1) { - ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, - "quic bpf getsockopt(SO_COOKIE) failed"); - - return (ngx_uint_t) NGX_ERROR; + if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, + (const void *) &value, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_socket_errno, + "setsockopt(SO_REUSEADDR) client socket failed"); + goto failed; } - return cookie; + if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT, + (const void *) &value, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_socket_errno, + "setsockopt(SO_REUSEPORT) client socket failed"); + goto failed; + } + + if (bind(s, ls->sockaddr, ls->socklen) == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_socket_errno, + "bind() failed"); + goto failed; + } + + if (ls->worker >= grp->listening.nelts) { + n = ls->worker + 1 - grp->listening.nelts; + + bls = ngx_array_push_n(&grp->listening, n); + if (bls == NULL) { + goto failed; 
+ } + + ngx_memzero(bls, n * sizeof(ngx_quic_bpf_listening_t)); + + for (i = 0; i < n; i++) { + bls[i].fd = (ngx_socket_t) -1; + } + } + + bls = grp->listening.elts; + bls[ls->worker].fd = s; + bls[ls->worker].listening = ls; + + return NGX_OK; + +failed: + + if (ngx_close_socket(s) == -1) { + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_socket_errno, + ngx_close_socket_n " failed"); + } + + return NGX_ERROR; } - static ngx_int_t ngx_quic_bpf_export_maps(ngx_cycle_t *cycle) { - u_char *p, *buf; - size_t len; - ngx_str_t *var; - ngx_queue_t *q; - ngx_core_conf_t *ccf; - ngx_quic_bpf_conf_t *bcf; - ngx_quic_sock_group_t *grp; + u_char *p, *buf; + size_t len; + ngx_str_t *var; + ngx_queue_t *q; + ngx_core_conf_t *ccf; + ngx_quic_bpf_conf_t *bcf; + ngx_quic_bpf_group_t *grp; + + bcf = ngx_quic_bpf_get_conf(cycle); + if (!bcf->enabled) { + return NGX_OK; + } ccf = ngx_core_get_conf(cycle); - bcf = ngx_quic_bpf_get_conf(cycle); len = sizeof(NGX_QUIC_BPF_VARNAME) + 1; @@ -498,24 +724,26 @@ ngx_quic_bpf_export_maps(ngx_cycle_t *cy while (q != ngx_queue_sentinel(&bcf->groups)) { - grp = ngx_queue_data(q, ngx_quic_sock_group_t, queue); + grp = ngx_queue_data(q, ngx_quic_bpf_group_t, queue); q = ngx_queue_next(q); - if (grp->unused) { + if (grp->nlisten == 0) { /* * map was inherited, but it is not used in this configuration; * do not pass such map further and drop the group to prevent * interference with changes during reload */ - ngx_quic_bpf_close(cycle->log, grp->map_fd, "map"); + ngx_quic_bpf_close(cycle->log, grp->listen_map, "listen"); + ngx_quic_bpf_close(cycle->log, grp->worker_map, "worker"); + ngx_quic_bpf_close(cycle->log, grp->nlisten_map, "nlisten"); + ngx_queue_remove(&grp->queue); - continue; } - len += NGX_INT32_LEN + 1 + NGX_SOCKADDR_STRLEN + 1; + len += (NGX_INT32_LEN + 1) * 3 + NGX_SOCKADDR_STRLEN + 1; } len++; @@ -525,22 +753,23 @@ ngx_quic_bpf_export_maps(ngx_cycle_t *cy return NGX_ERROR; } - p = ngx_cpymem(buf, NGX_QUIC_BPF_VARNAME "=", - 
sizeof(NGX_QUIC_BPF_VARNAME)); + p = ngx_cpymem(buf, NGX_QUIC_BPF_VARNAME "=", sizeof(NGX_QUIC_BPF_VARNAME)); for (q = ngx_queue_head(&bcf->groups); q != ngx_queue_sentinel(&bcf->groups); q = ngx_queue_next(q)) { - grp = ngx_queue_data(q, ngx_quic_sock_group_t, queue); + grp = ngx_queue_data(q, ngx_quic_bpf_group_t, queue); - p = ngx_sprintf(p, "%ud", grp->map_fd); - + p = ngx_sprintf(p, "%ud", grp->listen_map); + *p++ = NGX_QUIC_BPF_ADDRSEP; + p = ngx_sprintf(p, "%ud", grp->worker_map); + *p++ = NGX_QUIC_BPF_ADDRSEP; + p = ngx_sprintf(p, "%ud", grp->nlisten_map); *p++ = NGX_QUIC_BPF_ADDRSEP; p += ngx_sock_ntop(grp->sockaddr, grp->socklen, p, NGX_SOCKADDR_STRLEN, 1); - *p++ = NGX_QUIC_BPF_VARSEP; } @@ -561,12 +790,14 @@ ngx_quic_bpf_export_maps(ngx_cycle_t *cy static ngx_int_t ngx_quic_bpf_import_maps(ngx_cycle_t *cycle) { - int s; - u_char *inherited, *p, *v; - ngx_uint_t in_fd; - ngx_addr_t tmp; - ngx_quic_bpf_conf_t *bcf; - ngx_quic_sock_group_t *grp; + int fds[3]; + u_char *inherited, *p, *v; + uint32_t zero, nlisten; + ngx_int_t fd; + ngx_uint_t nfd; + ngx_addr_t tmp; + ngx_quic_bpf_conf_t *bcf; + ngx_quic_bpf_group_t *grp; inherited = (u_char *) getenv(NGX_QUIC_BPF_VARNAME); @@ -574,13 +805,13 @@ ngx_quic_bpf_import_maps(ngx_cycle_t *cy return NGX_OK; } + ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, + "using inherited QUIC BPF maps from \"%s\"", inherited); + bcf = ngx_quic_bpf_get_conf(cycle); -#if (NGX_SUPPRESS_WARN) - s = -1; -#endif - - in_fd = 1; + zero = 0; + nfd = 0; for (p = inherited, v = p; *p; p++) { @@ -588,63 +819,69 @@ ngx_quic_bpf_import_maps(ngx_cycle_t *cy case NGX_QUIC_BPF_ADDRSEP: - if (!in_fd) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, - "quic bpf failed to parse inherited env"); - return NGX_ERROR; - } - in_fd = 0; - - s = ngx_atoi(v, p - v); - if (s == NGX_ERROR) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, - "quic bpf failed to parse inherited map fd"); - return NGX_ERROR; + if (nfd > 2) { + goto failed; } + fd = ngx_atoi(v, p - 
v); + if (fd == NGX_ERROR) { + goto failed; + } + + fds[nfd++] = fd; v = p + 1; break; case NGX_QUIC_BPF_VARSEP: - if (in_fd) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, - "quic bpf failed to parse inherited env"); - return NGX_ERROR; + if (nfd != 3) { + goto failed; } - in_fd = 1; - grp = ngx_pcalloc(cycle->pool, - sizeof(ngx_quic_sock_group_t)); + grp = ngx_pcalloc(cycle->pool, sizeof(ngx_quic_bpf_group_t)); if (grp == NULL) { return NGX_ERROR; } - grp->map_fd = s; - - if (ngx_parse_addr_port(cycle->pool, &tmp, v, p - v) + if (ngx_array_init(&grp->listening, cycle->pool, 1, + sizeof(ngx_quic_bpf_listening_t)) != NGX_OK) { - ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, - "quic bpf failed to parse inherited" - " address '%*s'", p - v , v); + return NGX_ERROR; + } + + grp->listen_map = fds[0]; + grp->worker_map = fds[1]; + grp->nlisten_map = fds[2]; - ngx_quic_bpf_close(cycle->log, s, "inherited map"); + if (ngx_parse_addr_port(cycle->pool, &tmp, v, p - v) != NGX_OK) { + goto failed; + } + grp->sockaddr = ngx_pcalloc(cycle->pool, tmp.socklen); + if (grp->sockaddr == NULL) { return NGX_ERROR; } - grp->sockaddr = tmp.sockaddr; + ngx_memcpy(grp->sockaddr, tmp.sockaddr, tmp.socklen); grp->socklen = tmp.socklen; - grp->unused = 1; + if (ngx_bpf_map_lookup(fds[2], &zero, &nlisten) == -1) { + ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno, + "failed to lookup QUIC BPF listen number"); + return NGX_ERROR; + } + + grp->old_nlisten = nlisten; ngx_queue_insert_tail(&bcf->groups, &grp->queue); - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, cycle->log, 0, + ngx_log_debug5(NGX_LOG_DEBUG_EVENT, cycle->log, 0, "quic bpf sockmap inherited with " - "fd:%d address:%*s", - grp->map_fd, p - v, v); + "fds:%d/%d/%d address:%*s", + fds[0], fds[1], fds[2], p - v, v); + + nfd = 0; v = p + 1; break; @@ -654,4 +891,127 @@ ngx_quic_bpf_import_maps(ngx_cycle_t *cy } return NGX_OK; + +failed: + + ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, + "failed to parse inherited QUIC BPF variable"); + + 
return NGX_ERROR; } + + +ngx_int_t +ngx_quic_bpf_get_client_connection(ngx_connection_t *lc, ngx_connection_t **pc) +{ + ngx_event_t *rev; + ngx_connection_t *c; + ngx_quic_bpf_group_t *grp; + ngx_quic_bpf_listening_t *bpf_listening, *bls; + + grp = ngx_quic_bpf_find_group((ngx_cycle_t *) ngx_cycle, lc->listening); + + if (grp == NULL || ngx_worker >= grp->listening.nelts) { + return NGX_OK; + } + + bpf_listening = grp->listening.elts; + bls = &bpf_listening[ngx_worker]; + + if (bls->fd == (ngx_socket_t) -1) { + return NGX_OK; + } + + if (bls->connection == NULL) { + c = ngx_get_connection(bls->fd, lc->log); + if (c == NULL) { + return NGX_ERROR; + } + + c->type = SOCK_DGRAM; + c->log = lc->log; + c->listening = bls->listening; + + rev = c->read; + rev->quic = 1; + rev->log = c->log; + rev->handler = ngx_quic_recvmsg; + + if (ngx_add_event(rev, NGX_READ_EVENT, 0) == NGX_ERROR) { + ngx_free_connection(c); + return NGX_ERROR; + } + + bls->connection = c; + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, lc->log, 0, + "quic bpf worker socket connection fd:%d", bls->fd); + + } + + *pc = ngx_get_connection(bls->fd, lc->log); + if (*pc == NULL) { + return NGX_ERROR; + } + + (*pc)->shared = 1; + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, lc->log, 0, + "quic bpf client connection fd:%d", bls->fd); + + return NGX_OK; +} + + +ngx_int_t +ngx_quic_bpf_insert(ngx_connection_t *c, ngx_quic_connection_t *qc, + ngx_quic_socket_t *qsock) +{ + ngx_quic_bpf_group_t *grp; + + if (qsock->sid.len != NGX_QUIC_SERVER_CID_LEN) { + /* route by address */ + return NGX_OK; + } + + grp = ngx_quic_bpf_find_group((ngx_cycle_t *) ngx_cycle, c->listening); + if (grp == NULL) { + return NGX_OK; + } + + if (ngx_bpf_map_update(grp->worker_map, qsock->sid.id, &c->fd, BPF_ANY) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, c->log, ngx_errno, + "failed to update QUIC BPF worker map"); + return NGX_ERROR; + } + + return NGX_OK; +} + + +ngx_int_t +ngx_quic_bpf_delete(ngx_connection_t *c, ngx_quic_connection_t *qc, + 
ngx_quic_socket_t *qsock) +{ + ngx_quic_bpf_group_t *grp; + + if (qsock->sid.len != NGX_QUIC_SERVER_CID_LEN) { + /* route by address */ + return NGX_OK; + } + + grp = ngx_quic_bpf_find_group((ngx_cycle_t *) ngx_cycle, c->listening); + if (grp == NULL) { + return NGX_OK; + } + + if (ngx_bpf_map_delete(grp->worker_map, qsock->sid.id) == -1) { + ngx_log_error(NGX_LOG_EMERG, c->log, ngx_errno, + "failed to delete QUIC BPF worker map"); + return NGX_ERROR; + } + + return NGX_OK; +} diff --git a/src/event/quic/ngx_event_quic_bpf.h b/src/event/quic/ngx_event_quic_bpf.h new file mode 100644 --- /dev/null +++ b/src/event/quic/ngx_event_quic_bpf.h @@ -0,0 +1,23 @@ + +/* + * Copyright (C) Nginx, Inc. + */ + + +#ifndef _NGX_EVENT_QUIC_BPF_H_INCLUDED_ +#define _NGX_EVENT_QUIC_BPF_H_INCLUDED_ + + +#include <ngx_config.h> +#include <ngx_core.h> + + +ngx_int_t ngx_quic_bpf_get_client_connection(ngx_connection_t *lc, + ngx_connection_t **pc); +ngx_int_t ngx_quic_bpf_insert(ngx_connection_t *c, ngx_quic_connection_t *qc, + ngx_quic_socket_t *qsock); +ngx_int_t ngx_quic_bpf_delete(ngx_connection_t *c, ngx_quic_connection_t *qc, + ngx_quic_socket_t *qsock); + + +#endif /* _NGX_EVENT_QUIC_BPF_H_INCLUDED_ */ diff --git a/src/event/quic/ngx_event_quic_bpf_code.c b/src/event/quic/ngx_event_quic_bpf_code.c --- a/src/event/quic/ngx_event_quic_bpf_code.c +++ b/src/event/quic/ngx_event_quic_bpf_code.c @@ -7,71 +7,146 @@ static ngx_bpf_reloc_t bpf_reloc_prog_ngx_quic_reuseport_helper[] = { - { "ngx_quic_sockmap", 55 }, + { "ngx_quic_worker", 82 }, + { "ngx_quic_nlisten", 99 }, + { "ngx_quic_listen", 110 }, + { "ngx_quic_nlisten", 127 }, }; static struct bpf_insn bpf_insn_prog_ngx_quic_reuseport_helper[] = { /* opcode dst src offset imm */ - { 0x79, BPF_REG_4, BPF_REG_1, (int16_t) 0, 0x0 }, - { 0x79, BPF_REG_3, BPF_REG_1, (int16_t) 8, 0x0 }, - { 0xbf, BPF_REG_2, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x7, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x8 }, - { 0x2d, BPF_REG_2, BPF_REG_3, (int16_t) 54, 0x0 }, - { 0xbf, BPF_REG_5,
BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x7, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x9 }, - { 0x2d, BPF_REG_5, BPF_REG_3, (int16_t) 51, 0x0 }, - { 0xb7, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x14 }, - { 0xb7, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x9 }, - { 0x71, BPF_REG_6, BPF_REG_2, (int16_t) 0, 0x0 }, - { 0x67, BPF_REG_6, BPF_REG_0, (int16_t) 0, 0x38 }, - { 0xc7, BPF_REG_6, BPF_REG_0, (int16_t) 0, 0x38 }, - { 0x65, BPF_REG_6, BPF_REG_0, (int16_t) 10, 0xffffffff }, - { 0xbf, BPF_REG_2, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x7, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0xd }, - { 0x2d, BPF_REG_2, BPF_REG_3, (int16_t) 42, 0x0 }, - { 0xbf, BPF_REG_5, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x7, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0xe }, - { 0x2d, BPF_REG_5, BPF_REG_3, (int16_t) 39, 0x0 }, - { 0xb7, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0xe }, - { 0x71, BPF_REG_5, BPF_REG_2, (int16_t) 0, 0x0 }, - { 0xb7, BPF_REG_6, BPF_REG_0, (int16_t) 0, 0x8 }, - { 0x2d, BPF_REG_6, BPF_REG_5, (int16_t) 35, 0x0 }, - { 0xf, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x0 }, - { 0xf, BPF_REG_4, BPF_REG_5, (int16_t) 0, 0x0 }, - { 0x2d, BPF_REG_4, BPF_REG_3, (int16_t) 32, 0x0 }, - { 0xbf, BPF_REG_4, BPF_REG_2, (int16_t) 0, 0x0 }, - { 0x7, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x9 }, - { 0x2d, BPF_REG_4, BPF_REG_3, (int16_t) 29, 0x0 }, - { 0x71, BPF_REG_4, BPF_REG_2, (int16_t) 1, 0x0 }, + { 0xbf, BPF_REG_6, BPF_REG_1, (int16_t) 0, 0x0 }, + { 0xb7, BPF_REG_7, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x79, BPF_REG_2, BPF_REG_6, (int16_t) 8, 0x0 }, + { 0x79, BPF_REG_1, BPF_REG_6, (int16_t) 0, 0x0 }, + { 0xbf, BPF_REG_3, BPF_REG_1, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x9 }, + { 0x2d, BPF_REG_3, BPF_REG_2, (int16_t) 124, 0x0 }, + { 0xb7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x9 }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 8, 0x0 }, { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x38 }, - { 0x71, BPF_REG_3, BPF_REG_2, (int16_t) 2, 0x0 }, - { 0x67, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x30 }, - { 0x4f, BPF_REG_3, BPF_REG_4, 
(int16_t) 0, 0x0 }, - { 0x71, BPF_REG_4, BPF_REG_2, (int16_t) 3, 0x0 }, - { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x28 }, - { 0x4f, BPF_REG_3, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x71, BPF_REG_4, BPF_REG_2, (int16_t) 4, 0x0 }, - { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x20 }, - { 0x4f, BPF_REG_3, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x71, BPF_REG_4, BPF_REG_2, (int16_t) 5, 0x0 }, - { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x18 }, + { 0xc7, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x38 }, + { 0x65, BPF_REG_4, BPF_REG_0, (int16_t) 6, 0xffffffff }, + { 0xbf, BPF_REG_3, BPF_REG_1, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0xe }, + { 0x2d, BPF_REG_3, BPF_REG_2, (int16_t) 116, 0x0 }, + { 0xb7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0xe }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 13, 0x0 }, + { 0x55, BPF_REG_4, BPF_REG_0, (int16_t) 77, 0x14 }, + { 0xf, BPF_REG_1, BPF_REG_3, (int16_t) 0, 0x0 }, + { 0xbf, BPF_REG_3, BPF_REG_1, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x14 }, + { 0x2d, BPF_REG_3, BPF_REG_2, (int16_t) 109, 0x0 }, + { 0x71, BPF_REG_3, BPF_REG_1, (int16_t) 13, 0x0 }, + { 0x67, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_2, BPF_REG_1, (int16_t) 12, 0x0 }, + { 0x4f, BPF_REG_3, BPF_REG_2, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_2, BPF_REG_1, (int16_t) 15, 0x0 }, + { 0x67, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 14, 0x0 }, + { 0x4f, BPF_REG_2, BPF_REG_4, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_5, BPF_REG_1, (int16_t) 9, 0x0 }, + { 0x67, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 8, 0x0 }, + { 0x4f, BPF_REG_5, BPF_REG_4, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 11, 0x0 }, + { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_0, BPF_REG_1, (int16_t) 10, 0x0 }, + { 0x4f, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x10 }, + { 0x4f, BPF_REG_4, BPF_REG_5, 
(int16_t) 0, 0x0 }, + { 0x67, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x10 }, + { 0x4f, BPF_REG_2, BPF_REG_3, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_3, BPF_REG_1, (int16_t) 17, 0x0 }, + { 0x67, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_5, BPF_REG_1, (int16_t) 16, 0x0 }, + { 0x4f, BPF_REG_3, BPF_REG_5, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_5, BPF_REG_1, (int16_t) 19, 0x0 }, + { 0x67, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_0, BPF_REG_1, (int16_t) 18, 0x0 }, + { 0x4f, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x67, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x20 }, + { 0x4f, BPF_REG_2, BPF_REG_4, (int16_t) 0, 0x0 }, + { 0x67, BPF_REG_5, BPF_REG_0, (int16_t) 0, 0x10 }, + { 0x4f, BPF_REG_5, BPF_REG_3, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 1, 0x0 }, + { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_3, BPF_REG_1, (int16_t) 0, 0x0 }, + { 0x4f, BPF_REG_4, BPF_REG_3, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_3, BPF_REG_1, (int16_t) 3, 0x0 }, + { 0x67, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_0, BPF_REG_1, (int16_t) 2, 0x0 }, + { 0x4f, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x63, BPF_REG_10, BPF_REG_5, (int16_t) 65520, 0x0 }, + { 0x7b, BPF_REG_10, BPF_REG_2, (int16_t) 65512, 0x0 }, + { 0x67, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0x10 }, { 0x4f, BPF_REG_3, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x71, BPF_REG_4, BPF_REG_2, (int16_t) 6, 0x0 }, - { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x10 }, - { 0x4f, BPF_REG_3, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x71, BPF_REG_4, BPF_REG_2, (int16_t) 7, 0x0 }, - { 0x67, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x8 }, - { 0x4f, BPF_REG_3, BPF_REG_4, (int16_t) 0, 0x0 }, - { 0x71, BPF_REG_2, BPF_REG_2, (int16_t) 8, 0x0 }, - { 0x4f, BPF_REG_3, BPF_REG_2, (int16_t) 0, 0x0 }, - { 0x7b, BPF_REG_10, BPF_REG_3, (int16_t) 65528, 0x0 }, + { 0x71, BPF_REG_2, BPF_REG_1, (int16_t) 5, 0x0 }, + { 0x67, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x71, BPF_REG_4, BPF_REG_1, 
(int16_t) 4, 0x0 }, + { 0x4f, BPF_REG_2, BPF_REG_4, (int16_t) 0, 0x0 }, + { 0x71, BPF_REG_4, BPF_REG_1, (int16_t) 6, 0x0 }, + { 0x71, BPF_REG_1, BPF_REG_1, (int16_t) 7, 0x0 }, + { 0x67, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0x8 }, + { 0x4f, BPF_REG_1, BPF_REG_4, (int16_t) 0, 0x0 }, + { 0x67, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0x10 }, + { 0x4f, BPF_REG_1, BPF_REG_2, (int16_t) 0, 0x0 }, + { 0x67, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0x20 }, + { 0x4f, BPF_REG_1, BPF_REG_3, (int16_t) 0, 0x0 }, + { 0x7b, BPF_REG_10, BPF_REG_1, (int16_t) 65504, 0x0 }, { 0xbf, BPF_REG_3, BPF_REG_10, (int16_t) 0, 0x0 }, - { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0xfffffff8 }, + { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0xffffffe0 }, + { 0xbf, BPF_REG_1, BPF_REG_6, (int16_t) 0, 0x0 }, { 0x18, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x0 }, { 0x0, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, { 0xb7, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x0 }, { 0x85, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x52 }, - { 0xb7, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x1 }, + { 0xb7, BPF_REG_7, BPF_REG_0, (int16_t) 0, 0x1 }, + { 0x67, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x20 }, + { 0x77, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x20 }, + { 0x15, BPF_REG_0, BPF_REG_0, (int16_t) 41, 0x0 }, + { 0x18, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0xfffffffe }, + { 0x0, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x1d, BPF_REG_0, BPF_REG_1, (int16_t) 2, 0x0 }, + { 0xb7, BPF_REG_7, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x5, BPF_REG_0, BPF_REG_0, (int16_t) 36, 0x0 }, + { 0xb7, BPF_REG_7, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x63, BPF_REG_10, BPF_REG_7, (int16_t) 65532, 0x0 }, + { 0xbf, BPF_REG_2, BPF_REG_10, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0xfffffffc }, + { 0x18, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x0, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x85, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x1 }, + { 0x15, BPF_REG_0, BPF_REG_0, (int16_t) 28, 0x0 }, + { 0x61, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x61, BPF_REG_2, 
BPF_REG_6, (int16_t) 32, 0x0 }, + { 0x9f, BPF_REG_2, BPF_REG_1, (int16_t) 0, 0x0 }, + { 0x63, BPF_REG_10, BPF_REG_2, (int16_t) 65528, 0x0 }, + { 0xbf, BPF_REG_3, BPF_REG_10, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0xfffffff8 }, + { 0xbf, BPF_REG_1, BPF_REG_6, (int16_t) 0, 0x0 }, + { 0x18, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x0, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0xb7, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x85, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x52 }, + { 0xb7, BPF_REG_7, BPF_REG_0, (int16_t) 0, 0x1 }, + { 0x67, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x20 }, + { 0x77, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x20 }, + { 0x15, BPF_REG_0, BPF_REG_0, (int16_t) 13, 0x0 }, + { 0x18, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0xfffffffe }, + { 0x0, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x1d, BPF_REG_0, BPF_REG_1, (int16_t) 1, 0x0 }, + { 0x5, BPF_REG_0, BPF_REG_0, (int16_t) 65507, 0x0 }, + { 0xbf, BPF_REG_2, BPF_REG_10, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_2, BPF_REG_0, (int16_t) 0, 0xfffffffc }, + { 0xbf, BPF_REG_3, BPF_REG_10, (int16_t) 0, 0x0 }, + { 0x7, BPF_REG_3, BPF_REG_0, (int16_t) 0, 0xfffffff8 }, + { 0xb7, BPF_REG_7, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x18, BPF_REG_1, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x0, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0xb7, BPF_REG_4, BPF_REG_0, (int16_t) 0, 0x0 }, + { 0x85, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x2 }, + { 0xbf, BPF_REG_0, BPF_REG_7, (int16_t) 0, 0x0 }, { 0x95, BPF_REG_0, BPF_REG_0, (int16_t) 0, 0x0 }, }; @@ -86,3 +161,4 @@ ngx_bpf_program_t ngx_quic_reuseport_hel .license = "BSD", .type = BPF_PROG_TYPE_SK_REUSEPORT, }; + diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -36,6 +36,9 @@ typedef struct ngx_quic_keys_s ng #include #include #include +#if (NGX_QUIC_BPF) +#include +#endif /* RFC 9002, 6.2.2. 
Handshakes and New Paths: kInitialRtt */ @@ -45,6 +48,8 @@ typedef struct ngx_quic_keys_s ng #define NGX_QUIC_SEND_CTX_LAST (NGX_QUIC_ENCRYPTION_LAST - 1) +#define NGX_QUIC_MAX_SERVER_IDS 8 + /* 0-RTT and 1-RTT data exist in the same packet number space, * so we have 3 packet number spaces: * @@ -257,6 +262,7 @@ struct ngx_quic_connection_s { unsigned key_phase:1; unsigned validated:1; unsigned client_tp_done:1; + unsigned listen_bound:1; }; diff --git a/src/event/quic/ngx_event_quic_connid.c b/src/event/quic/ngx_event_quic_connid.c --- a/src/event/quic/ngx_event_quic_connid.c +++ b/src/event/quic/ngx_event_quic_connid.c @@ -9,12 +9,7 @@ #include #include -#define NGX_QUIC_MAX_SERVER_IDS 8 - -#if (NGX_QUIC_BPF) -static ngx_int_t ngx_quic_bpf_attach_id(ngx_connection_t *c, u_char *id); -#endif static ngx_int_t ngx_quic_retire_client_id(ngx_connection_t *c, ngx_quic_client_id_t *cid); static ngx_quic_client_id_t *ngx_quic_alloc_client_id(ngx_connection_t *c, @@ -30,46 +25,10 @@ ngx_quic_create_server_id(ngx_connection return NGX_ERROR; } -#if (NGX_QUIC_BPF) - if (ngx_quic_bpf_attach_id(c, id) != NGX_OK) { - ngx_log_error(NGX_LOG_ERR, c->log, 0, - "quic bpf failed to generate socket key"); - /* ignore error, things still may work */ - } -#endif - return NGX_OK; } -#if (NGX_QUIC_BPF) - -static ngx_int_t -ngx_quic_bpf_attach_id(ngx_connection_t *c, u_char *id) -{ - int fd; - uint64_t cookie; - socklen_t optlen; - - fd = c->listening->fd; - - optlen = sizeof(cookie); - - if (getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &optlen) == -1) { - ngx_log_error(NGX_LOG_ERR, c->log, ngx_socket_errno, - "quic getsockopt(SO_COOKIE) failed"); - - return NGX_ERROR; - } - - ngx_quic_dcid_encode_key(id, cookie); - - return NGX_OK; -} - -#endif - - ngx_int_t ngx_quic_handle_new_connection_id_frame(ngx_connection_t *c, ngx_quic_new_conn_id_frame_t *f) diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c 
+++ b/src/event/quic/ngx_event_quic_output.c @@ -84,6 +84,10 @@ ngx_quic_output(ngx_connection_t *c) ngx_quic_congestion_t *cg; ngx_quic_connection_t *qc; + if (c->fd == (ngx_socket_t) -1) { + return NGX_ERROR; + } + c->log->action = "sending frames"; qc = ngx_quic_get_connection(c); @@ -1031,7 +1035,6 @@ ngx_quic_send_retry(ngx_connection_t *c, pkt.odcid = inpkt->dcid; pkt.dcid = inpkt->scid; - /* TODO: generate routable dcid */ if (RAND_bytes(dcid, NGX_QUIC_SERVER_CID_LEN) != 1) { return NGX_ERROR; } diff --git a/src/event/quic/ngx_event_quic_socket.c b/src/event/quic/ngx_event_quic_socket.c --- a/src/event/quic/ngx_event_quic_socket.c +++ b/src/event/quic/ngx_event_quic_socket.c @@ -109,6 +109,13 @@ ngx_quic_open_sockets(ngx_connection_t * failed: ngx_rbtree_delete(&c->listening->rbtree, &qsock->udp.node); + +#if (NGX_QUIC_BPF) + if (ngx_quic_bpf_delete(c, qc, qsock) != NGX_OK) { + return NGX_ERROR; + } +#endif + c->udp = NULL; return NGX_ERROR; @@ -160,6 +167,13 @@ ngx_quic_close_socket(ngx_connection_t * ngx_queue_insert_head(&qc->free_sockets, &qsock->queue); ngx_rbtree_delete(&c->listening->rbtree, &qsock->udp.node); + +#if (NGX_QUIC_BPF) + if (ngx_quic_bpf_delete(c, qc, qsock) != NGX_OK) { + return; + } +#endif + qc->nsockets--; ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, @@ -185,6 +199,12 @@ ngx_quic_listen(ngx_connection_t *c, ngx ngx_rbtree_insert(&c->listening->rbtree, &qsock->udp.node); +#if (NGX_QUIC_BPF) + if (ngx_quic_bpf_insert(c, qc, qsock) != NGX_OK) { + return NGX_ERROR; + } +#endif + ngx_queue_insert_tail(&qc->sockets, &qsock->queue); qc->nsockets++; diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -160,7 +160,7 @@ ngx_quic_recvmsg(ngx_event_t *ev) c->log->handler = NULL; ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic recvmsg: fd:%d n:%z", c->fd, n); + "quic recvmsg: fd:%d n:%z", lc->fd, n); c->log->handler = 
handler; } @@ -193,12 +193,23 @@ ngx_quic_recvmsg(ngx_event_t *ev) ngx_accept_disabled = ngx_cycle->connection_n / 8 - ngx_cycle->free_connection_n; - c = ngx_get_connection(lc->fd, ev->log); - if (c == NULL) { + c = NULL; + +#if (NGX_QUIC_BPF) + if (ngx_quic_bpf_get_client_connection(lc, &c) != NGX_OK) { return; } +#endif - c->shared = 1; + if (c == NULL) { + c = ngx_get_connection(lc->fd, ev->log); + if (c == NULL) { + return; + } + + c->shared = 1; + } + c->type = SOCK_DGRAM; c->socklen = socklen; @@ -309,7 +320,7 @@ ngx_quic_recvmsg(ngx_event_t *ev) ngx_log_debug4(NGX_LOG_DEBUG_EVENT, log, 0, "*%uA quic recvmsg: %V fd:%d n:%z", - c->number, &addr, c->fd, n); + c->number, &addr, lc->fd, n); } } diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c --- a/src/os/unix/ngx_process_cycle.c +++ b/src/os/unix/ngx_process_cycle.c @@ -964,7 +964,8 @@ ngx_worker_process_exit(ngx_cycle_t *cyc && c[i].read && !c[i].read->accept && !c[i].read->channel - && !c[i].read->resolver) + && !c[i].read->resolver + && !c[i].read->quic) { ngx_log_error(NGX_LOG_ALERT, cycle->log, 0, "*%uA open socket #%d left in connection %ui", From arut at nginx.com Fri Dec 9 09:38:52 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Fri, 09 Dec 2022 09:38:52 +0000 Subject: [PATCH 6 of 6] QUIC: client sockets In-Reply-To: References: Message-ID: <779035cf3e1c7886d06e.1670578732@ip-10-1-18-114.eu-central-1.compute.internal> # HG changeset patch # User Roman Arutyunyan # Date 1670509141 0 # Thu Dec 08 14:19:01 2022 +0000 # Branch quic # Node ID 779035cf3e1c7886d06e5d5295943f425f4aac60 # Parent afbac4ba4c75023e10e68bae39df5b1a0fdbd17b QUIC: client sockets. In client sockets mode, a new socket is created for each client. This socket is bound to server address and connected to client address. This allows for seamless configuration reload and binary upgrade. This mode is enabled by default and can be disabled by "--without-quic_client_sockets" configure option. 
With this approach, it is possible that some new connection packets will arrive not to the listen socket, but to a client socket due to a race between bind() and connect(). Such packets initiate new connections in the workers they end up in. diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -1378,6 +1378,10 @@ if [ $USE_OPENSSL_QUIC = YES ]; then have=NGX_QUIC_BPF . auto/have fi + + if [ $QUIC_CLIENT_SOCKETS = YES ]; then + have=NGX_QUIC_CLIENT_SOCKETS . auto/have + fi fi diff --git a/auto/options b/auto/options --- a/auto/options +++ b/auto/options @@ -45,6 +45,7 @@ USE_THREADS=NO NGX_FILE_AIO=NO +QUIC_CLIENT_SOCKETS=YES QUIC_BPF=NO HTTP=YES @@ -216,6 +217,7 @@ do --with-file-aio) NGX_FILE_AIO=YES ;; + --without-quic_client_sockets) QUIC_CLIENT_SOCKETS=NONE ;; --without-quic_bpf_module) QUIC_BPF=NONE ;; --with-ipv6) @@ -452,6 +454,7 @@ cat << END --with-file-aio enable file AIO support + --without-quic_client_sockets disable QUIC client sockets --without-quic_bpf_module disable ngx_quic_bpf_module --with-http_ssl_module enable ngx_http_ssl_module diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -513,6 +513,33 @@ ngx_open_listening_sockets(ngx_cycle_t * #if (NGX_HAVE_REUSEPORT) +#if (NGX_QUIC_CLIENT_SOCKETS && defined SO_REUSEPORT_LB) + + if (ls[i].quic) { + int reuseport; + + reuseport = 1; + + if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT, + (const void *) &reuseport, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, + "setsockopt(SO_REUSEPORT) %V failed", + &ls[i].addr_text); + + if (ngx_close_socket(s) == -1) { + ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno, + ngx_close_socket_n " %V failed", + &ls[i].addr_text); + } + + return NGX_ERROR; + } + } + +#endif + if (ls[i].reuseport && !ngx_test_config) { int reuseport; diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c --- 
a/src/event/quic/ngx_event_quic.c +++ b/src/event/quic/ngx_event_quic.c @@ -10,11 +10,15 @@ #include +#define NGX_QUIC_LINGERING_TIMEOUT 50 + + static ngx_quic_connection_t *ngx_quic_new_connection(ngx_connection_t *c, ngx_quic_conf_t *conf, ngx_quic_header_t *pkt); static ngx_int_t ngx_quic_handle_stateless_reset(ngx_connection_t *c, ngx_quic_header_t *pkt); static void ngx_quic_input_handler(ngx_event_t *rev); +static void ngx_quic_lingering_handler(ngx_event_t *rev); static void ngx_quic_close_handler(ngx_event_t *ev); static ngx_int_t ngx_quic_handle_packet(ngx_connection_t *c, @@ -218,6 +222,13 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu c->read->handler = ngx_quic_input_handler; + if (!c->shared) { + if (ngx_add_event(c->read, NGX_READ_EVENT, 0) == NGX_ERROR) { + ngx_quic_close_connection(c, NGX_ERROR); + return; + } + } + return; } @@ -441,6 +452,10 @@ ngx_quic_input_handler(ngx_event_t *rev) return; } + + if (!c->shared) { + ngx_quic_recvmsg(rev); + } } @@ -583,6 +598,15 @@ quic_done: c->destroyed = 1; + if (c->read->ready) { + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic lingering"); + + c->read->handler = ngx_quic_lingering_handler; + ngx_add_timer(c->read, NGX_QUIC_LINGERING_TIMEOUT); + + return; + } + pool = c->pool; ngx_close_connection(c); @@ -591,6 +615,31 @@ quic_done: } +static void +ngx_quic_lingering_handler(ngx_event_t *rev) +{ + ngx_pool_t *pool; + ngx_connection_t *c; + + c = rev->data; + + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic lingering handler"); + + if (rev->ready && !rev->timedout) { + ngx_quic_recvmsg(rev); + } + + if (!rev->ready || rev->timedout) { + pool = c->pool; + + ngx_close_connection(c); + ngx_destroy_pool(pool); + + return; + } +} + + void ngx_quic_finalize_connection(ngx_connection_t *c, ngx_uint_t err, const char *reason) diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c +++ b/src/event/quic/ngx_event_quic_output.c @@ 
-437,8 +437,10 @@ ngx_quic_send_segments(ngx_connection_t msg.msg_iov = &iov; msg.msg_iovlen = 1; - msg.msg_name = sockaddr; - msg.msg_namelen = socklen; + if (c->shared) { + msg.msg_name = sockaddr; + msg.msg_namelen = socklen; + } msg.msg_control = msg_control; msg.msg_controllen = sizeof(msg_control); @@ -455,7 +457,7 @@ ngx_quic_send_segments(ngx_connection_t *valp = segment; #if (NGX_HAVE_ADDRINFO_CMSG) - if (c->listening && c->listening->wildcard && c->local_sockaddr) { + if (c->shared && c->listening->wildcard) { cmsg = CMSG_NXTHDR(&msg, cmsg); clen += ngx_set_srcaddr_cmsg(cmsg, c->local_sockaddr); } @@ -747,11 +749,13 @@ ngx_quic_send(ngx_connection_t *c, u_cha msg.msg_iov = &iov; msg.msg_iovlen = 1; - msg.msg_name = sockaddr; - msg.msg_namelen = socklen; + if (c->shared) { + msg.msg_name = sockaddr; + msg.msg_namelen = socklen; + } #if (NGX_HAVE_ADDRINFO_CMSG) - if (c->listening && c->listening->wildcard && c->local_sockaddr) { + if (c->shared && c->listening->wildcard) { msg.msg_control = msg_control; msg.msg_controllen = sizeof(msg_control); diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -11,6 +11,11 @@ #include +#if (NGX_QUIC_CLIENT_SOCKETS) +static ngx_int_t ngx_quic_create_client_socket_connection(ngx_connection_t *lc, + ngx_connection_t **pc, struct sockaddr *sockaddr, socklen_t socklen, + struct sockaddr *local_sockaddr, socklen_t local_socklen); +#endif static void ngx_quic_close_accepted_connection(ngx_connection_t *c); static ngx_connection_t *ngx_quic_lookup_connection(ngx_listening_t *ls, ngx_str_t *key); @@ -48,7 +53,6 @@ ngx_quic_recvmsg(ngx_event_t *ev) lc = ev->data; ls = lc->listening; - ev->ready = 0; ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0, "quic recvmsg on %V, ready: %d", @@ -65,8 +69,17 @@ ngx_quic_recvmsg(ngx_event_t *ev) msg.msg_iov = iov; msg.msg_iovlen = 1; + if (lc->local_sockaddr) { + 
local_sockaddr = lc->local_sockaddr; + local_socklen = lc->local_socklen; + + } else { + local_sockaddr = ls->sockaddr; + local_socklen = ls->socklen; + } + #if (NGX_HAVE_ADDRINFO_CMSG) - if (ls->wildcard) { + if (ngx_inet_wildcard(local_sockaddr)) { msg.msg_control = &msg_control; msg.msg_controllen = sizeof(msg_control); @@ -82,6 +95,7 @@ ngx_quic_recvmsg(ngx_event_t *ev) if (err == NGX_EAGAIN) { ngx_log_debug0(NGX_LOG_DEBUG_EVENT, ev->log, err, "quic recvmsg() not ready"); + ev->ready = 0; return; } @@ -121,12 +135,9 @@ ngx_quic_recvmsg(ngx_event_t *ev) #endif - local_sockaddr = ls->sockaddr; - local_socklen = ls->socklen; - #if (NGX_HAVE_ADDRINFO_CMSG) - if (ls->wildcard) { + if (ngx_inet_wildcard(local_sockaddr)) { struct cmsghdr *cmsg; ngx_memcpy(&lsa, local_sockaddr, local_socklen); @@ -201,6 +212,17 @@ ngx_quic_recvmsg(ngx_event_t *ev) } #endif +#if (NGX_QUIC_CLIENT_SOCKETS) + if (c == NULL) { + if (ngx_quic_create_client_socket_connection(lc, &c, + sockaddr, socklen, local_sockaddr, local_socklen) + != NGX_OK) + { + return; + } + } +#endif + if (c == NULL) { c = ngx_get_connection(lc->fd, ev->log); if (c == NULL) { @@ -243,17 +265,13 @@ ngx_quic_recvmsg(ngx_event_t *ev) c->pool->log = log; c->listening = ls; - if (local_sockaddr == &lsa.sockaddr) { - local_sockaddr = ngx_palloc(c->pool, local_socklen); - if (local_sockaddr == NULL) { - ngx_quic_close_accepted_connection(c); - return; - } - - ngx_memcpy(local_sockaddr, &lsa, local_socklen); + c->local_sockaddr = ngx_palloc(c->pool, local_socklen); + if (c->local_sockaddr == NULL) { + ngx_quic_close_accepted_connection(c); + return; } - c->local_sockaddr = local_sockaddr; + ngx_memcpy(c->local_sockaddr, local_sockaddr, local_socklen); c->local_socklen = local_socklen; c->buffer = ngx_create_temp_buf(c->pool, n); @@ -267,7 +285,7 @@ ngx_quic_recvmsg(ngx_event_t *ev) rev = c->read; wev = c->write; - rev->active = 1; + rev->ready = (c->shared ? 
0 : 1); wev->ready = 1; rev->log = log; @@ -341,6 +359,94 @@ ngx_quic_recvmsg(ngx_event_t *ev) } +#if (NGX_QUIC_CLIENT_SOCKETS) + +static ngx_int_t +ngx_quic_create_client_socket_connection(ngx_connection_t *lc, + ngx_connection_t **pc, struct sockaddr *sockaddr, socklen_t socklen, + struct sockaddr *local_sockaddr, socklen_t local_socklen) +{ + int value; + ngx_socket_t s; + + s = ngx_socket(sockaddr->sa_family, SOCK_DGRAM, 0); + if (s == (ngx_socket_t) -1) { + ngx_log_error(NGX_LOG_ERR, lc->log, ngx_socket_errno, + ngx_socket_n " failed"); + return NGX_ERROR; + } + + if (ngx_nonblocking(s) == -1) { + ngx_log_error(NGX_LOG_EMERG, lc->log, ngx_socket_errno, + ngx_nonblocking_n " client socket failed"); + goto failed; + } + + value = 1; + +#if (NGX_HAVE_REUSEPORT && !NGX_LINUX) + + if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT, + (const void *) &value, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, lc->log, ngx_socket_errno, + "setsockopt(SO_REUSEPORT) client socket failed"); + goto failed; + } + +#else + + if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, + (const void *) &value, sizeof(int)) + == -1) + { + ngx_log_error(NGX_LOG_EMERG, lc->log, ngx_socket_errno, + "setsockopt(SO_REUSEADDR) client socket failed"); + goto failed; + } + +#endif + + if (bind(s, local_sockaddr, local_socklen) == -1) { + ngx_log_error(NGX_LOG_EMERG, lc->log, ngx_socket_errno, + "bind() for client socket failed"); + goto failed; + } + + if (connect(s, sockaddr, socklen) == -1) { + ngx_log_error(NGX_LOG_EMERG, lc->log, ngx_socket_errno, + "connect() to client failed"); + goto failed; + } + + *pc = ngx_get_connection(s, lc->log); + if (*pc == NULL) { + goto failed; + } + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, lc->log, 0, + "quic client socket connection fd:%d r:%d", + s, lc->listening->connection != lc); + + ngx_accept_disabled = ngx_cycle->connection_n / 8 + - ngx_cycle->free_connection_n; + + return NGX_OK; + +failed: + + if (ngx_close_socket(s) == -1) { + ngx_log_error(NGX_LOG_EMERG, 
lc->log, ngx_socket_errno, + ngx_close_socket_n " client failed"); + } + + return NGX_ERROR; +} + +#endif + + static void ngx_quic_close_accepted_connection(ngx_connection_t *c) { From pl080516 at gmail.com Mon Dec 12 08:41:04 2022 From: pl080516 at gmail.com (Yu Zhu) Date: Mon, 12 Dec 2022 08:41:04 +0000 Subject: QUIC: reworked congestion control mechanism. In-Reply-To: <20221207150152.icsf4uu7lv57zeag@N00W24XTQX> References: <20221207150152.icsf4uu7lv57zeag@N00W24XTQX> Message-ID: Hi, Thanks for the reply. I agree that a better framework is needed to implement different congestion control algorithms. The current implementation also has a small problem: it would be better to pass acknowledgement events (covering the acks of multiple packets as well as the losses) to the congestion control algorithm, instead of passing the info of each packet individually. As you suggested, "moved rtt and congestion control variables to ngx_quic_path_t structure" is now a separate patch. This patch is a prerequisite for multipath QUIC. # HG changeset patch # User Yu Zhu # Date 1670831223 -28800 # Mon Dec 12 15:47:03 2022 +0800 # Branch quic # Node ID 8723d4282f6d6a5b67e271652f46d79ee24dfb39 # Parent b87a0dbc1150f415def5bc1e1f00d02b33519026 QUIC: moved rtt and congestion control variables to ngx_quic_path_t structure. As RFC 9002, Section 6 (Loss Detection), says: Loss detection is separate per packet number space, unlike RTT measurement and congestion control, because RTT and congestion control are properties of the path, whereas loss detection also relies upon key availability. No functional changes.
diff -r b87a0dbc1150 -r 8723d4282f6d src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c Tue Oct 25 12:52:09 2022 +0400 +++ b/src/event/quic/ngx_event_quic.c Mon Dec 12 15:47:03 2022 +0800 @@ -263,15 +263,6 @@ ngx_queue_init(&qc->free_frames); - qc->avg_rtt = NGX_QUIC_INITIAL_RTT; - qc->rttvar = NGX_QUIC_INITIAL_RTT / 2; - qc->min_rtt = NGX_TIMER_INFINITE; - qc->first_rtt = NGX_TIMER_INFINITE; - - /* - * qc->latest_rtt = 0 - */ - qc->pto.log = c->log; qc->pto.data = c; qc->pto.handler = ngx_quic_pto_handler; @@ -311,12 +302,6 @@ qc->streams.client_max_streams_uni = qc->tp.initial_max_streams_uni; qc->streams.client_max_streams_bidi = qc->tp.initial_max_streams_bidi; - qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, - ngx_max(2 * qc->tp.max_udp_payload_size, - 14720)); - qc->congestion.ssthresh = (size_t) -1; - qc->congestion.recovery_start = ngx_current_msec; - if (pkt->validated && pkt->retried) { qc->tp.retry_scid.len = pkt->dcid.len; qc->tp.retry_scid.data = ngx_pstrdup(c->pool, &pkt->dcid); diff -r b87a0dbc1150 -r 8723d4282f6d src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c Tue Oct 25 12:52:09 2022 +0400 +++ b/src/event/quic/ngx_event_quic_ack.c Mon Dec 12 15:47:03 2022 +0800 @@ -29,7 +29,7 @@ } ngx_quic_ack_stat_t; -static ngx_inline ngx_msec_t ngx_quic_lost_threshold(ngx_quic_connection_t *qc); +static ngx_inline ngx_msec_t ngx_quic_lost_threshold(ngx_quic_path_t *path); static void ngx_quic_rtt_sample(ngx_connection_t *c, ngx_quic_ack_frame_t *ack, enum ssl_encryption_level_t level, ngx_msec_t send_time); static ngx_int_t ngx_quic_handle_ack_frame_range(ngx_connection_t *c, @@ -48,11 +48,11 @@ /* RFC 9002, 6.1.2. 
Time Threshold: kTimeThreshold, kGranularity */ static ngx_inline ngx_msec_t -ngx_quic_lost_threshold(ngx_quic_connection_t *qc) +ngx_quic_lost_threshold(ngx_quic_path_t *path) { ngx_msec_t thr; - thr = ngx_max(qc->latest_rtt, qc->avg_rtt); + thr = ngx_max(path->latest_rtt, path->avg_rtt); thr += thr >> 3; return ngx_max(thr, NGX_QUIC_TIME_GRANULARITY); @@ -179,21 +179,23 @@ enum ssl_encryption_level_t level, ngx_msec_t send_time) { ngx_msec_t latest_rtt, ack_delay, adjusted_rtt, rttvar_sample; + ngx_quic_path_t *path; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; latest_rtt = ngx_current_msec - send_time; - qc->latest_rtt = latest_rtt; + path->latest_rtt = latest_rtt; - if (qc->min_rtt == NGX_TIMER_INFINITE) { - qc->min_rtt = latest_rtt; - qc->avg_rtt = latest_rtt; - qc->rttvar = latest_rtt / 2; - qc->first_rtt = ngx_current_msec; + if (path->min_rtt == NGX_TIMER_INFINITE) { + path->min_rtt = latest_rtt; + path->avg_rtt = latest_rtt; + path->rttvar = latest_rtt / 2; + path->first_rtt = ngx_current_msec; } else { - qc->min_rtt = ngx_min(qc->min_rtt, latest_rtt); + path->min_rtt = ngx_min(path->min_rtt, latest_rtt); ack_delay = (ack->delay << qc->ctp.ack_delay_exponent) / 1000; @@ -203,18 +205,18 @@ adjusted_rtt = latest_rtt; - if (qc->min_rtt + ack_delay < latest_rtt) { + if (path->min_rtt + ack_delay < latest_rtt) { adjusted_rtt -= ack_delay; } - qc->avg_rtt += (adjusted_rtt >> 3) - (qc->avg_rtt >> 3); - rttvar_sample = ngx_abs((ngx_msec_int_t) (qc->avg_rtt - adjusted_rtt)); - qc->rttvar += (rttvar_sample >> 2) - (qc->rttvar >> 2); + path->avg_rtt += (adjusted_rtt >> 3) - (path->avg_rtt >> 3); + rttvar_sample = ngx_abs((ngx_msec_int_t) (path->avg_rtt - adjusted_rtt)); + path->rttvar += (rttvar_sample >> 2) - (path->rttvar >> 2); } ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic rtt sample latest:%M min:%M avg:%M var:%M", - latest_rtt, qc->min_rtt, qc->avg_rtt, qc->rttvar); + latest_rtt, path->min_rtt, path->avg_rtt, 
path->rttvar); } @@ -317,7 +319,7 @@ } qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; blocked = (cg->in_flight >= cg->window) ? 1 : 0; @@ -428,13 +430,15 @@ ngx_uint_t i, nlost; ngx_msec_t now, wait, thr, oldest, newest; ngx_queue_t *q; + ngx_quic_path_t *path; ngx_quic_frame_t *start; ngx_quic_send_ctx_t *ctx; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; now = ngx_current_msec; - thr = ngx_quic_lost_threshold(qc); + thr = ngx_quic_lost_threshold(path); /* send time of lost packets across all send contexts */ oldest = NGX_TIMER_INFINITE; @@ -471,7 +475,7 @@ break; } - if (start->last > qc->first_rtt) { + if (start->last > path->first_rtt) { if (oldest == NGX_TIMER_INFINITE || start->last < oldest) { oldest = start->last; @@ -519,8 +523,8 @@ qc = ngx_quic_get_connection(c); - duration = qc->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); + duration = qc->path->avg_rtt; + duration += ngx_max(4 * qc->path->rttvar, NGX_QUIC_TIME_GRANULARITY); duration += qc->ctp.max_ack_delay; duration *= NGX_QUIC_PERSISTENT_CONGESTION_THR; @@ -535,7 +539,7 @@ ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; cg->recovery_start = ngx_current_msec; cg->window = qc->tp.max_udp_payload_size * 2; @@ -656,7 +660,7 @@ } qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; blocked = (cg->in_flight >= cg->window) ? 
1 : 0; @@ -721,7 +725,7 @@ if (ctx->largest_ack != NGX_QUIC_UNSET_PN) { q = ngx_queue_head(&ctx->sent); f = ngx_queue_data(q, ngx_quic_frame_t, queue); - w = (ngx_msec_int_t) (f->last + ngx_quic_lost_threshold(qc) - now); + w = (ngx_msec_int_t) (f->last + ngx_quic_lost_threshold(qc->path) - now); if (f->pnum <= ctx->largest_ack) { if (w < 0 || ctx->largest_ack - f->pnum >= NGX_QUIC_PKT_THR) { @@ -777,17 +781,19 @@ ngx_quic_pto(ngx_connection_t *c, ngx_quic_send_ctx_t *ctx) { ngx_msec_t duration; + ngx_quic_path_t *path; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; /* RFC 9002, Appendix A.8. Setting the Loss Detection Timer */ - duration = qc->avg_rtt; + duration = path->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); + duration += ngx_max(4 * path->rttvar, NGX_QUIC_TIME_GRANULARITY); duration <<= qc->pto_count; - if (qc->congestion.in_flight == 0) { /* no in-flight packets */ + if (path->congestion.in_flight == 0) { /* no in-flight packets */ return duration; } diff -r b87a0dbc1150 -r 8723d4282f6d src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h Tue Oct 25 12:52:09 2022 +0400 +++ b/src/event/quic/ngx_event_quic_connection.h Mon Dec 12 15:47:03 2022 +0800 @@ -80,6 +80,14 @@ }; +typedef struct { + size_t in_flight; + size_t window; + size_t ssthresh; + ngx_msec_t recovery_start; +} ngx_quic_congestion_t; + + struct ngx_quic_path_s { ngx_queue_t queue; struct sockaddr *sockaddr; @@ -96,6 +104,15 @@ uint64_t seqnum; ngx_str_t addr_text; u_char text[NGX_SOCKADDR_STRLEN]; + + ngx_msec_t first_rtt; + ngx_msec_t latest_rtt; + ngx_msec_t avg_rtt; + ngx_msec_t min_rtt; + ngx_msec_t rttvar; + + ngx_quic_congestion_t congestion; + unsigned validated:1; unsigned validating:1; unsigned limited:1; @@ -143,14 +160,6 @@ } ngx_quic_streams_t; -typedef struct { - size_t in_flight; - size_t window; - size_t ssthresh; - ngx_msec_t recovery_start; -} ngx_quic_congestion_t; - - /* * 
RFC 9000, 12.3. Packet Numbers * @@ -218,12 +227,6 @@ ngx_event_t path_validation; ngx_msec_t last_cc; - ngx_msec_t first_rtt; - ngx_msec_t latest_rtt; - ngx_msec_t avg_rtt; - ngx_msec_t min_rtt; - ngx_msec_t rttvar; - ngx_uint_t pto_count; ngx_queue_t free_frames; @@ -237,7 +240,6 @@ #endif ngx_quic_streams_t streams; - ngx_quic_congestion_t congestion; off_t received; diff -r b87a0dbc1150 -r 8723d4282f6d src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c Tue Oct 25 12:52:09 2022 +0400 +++ b/src/event/quic/ngx_event_quic_migration.c Mon Dec 12 15:47:03 2022 +0800 @@ -135,17 +135,26 @@ { /* address did not change */ rst = 0; + + path->avg_rtt = prev->avg_rtt; + path->rttvar = prev->rttvar; + path->min_rtt = prev->min_rtt; + path->first_rtt = prev->first_rtt; + path->latest_rtt = prev->latest_rtt; + + ngx_memcpy(&path->congestion, &prev->congestion, + sizeof(ngx_quic_congestion_t)); } } if (rst) { - ngx_memzero(&qc->congestion, sizeof(ngx_quic_congestion_t)); + ngx_memzero(&path->congestion, sizeof(ngx_quic_congestion_t)); - qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, + path->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, ngx_max(2 * qc->tp.max_udp_payload_size, 14720)); - qc->congestion.ssthresh = (size_t) -1; - qc->congestion.recovery_start = ngx_current_msec; + path->congestion.ssthresh = (size_t) -1; + path->congestion.recovery_start = ngx_current_msec; } /* @@ -217,6 +226,21 @@ path->addr_text.len = ngx_sock_ntop(sockaddr, socklen, path->text, NGX_SOCKADDR_STRLEN, 1); + path->avg_rtt = NGX_QUIC_INITIAL_RTT; + path->rttvar = NGX_QUIC_INITIAL_RTT / 2; + path->min_rtt = NGX_TIMER_INFINITE; + path->first_rtt = NGX_TIMER_INFINITE; + + /* + * qc->latest_rtt = 0 + */ + + path->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, + ngx_max(2 * qc->tp.max_udp_payload_size, + 14720)); + path->congestion.ssthresh = (size_t) -1; + path->congestion.recovery_start = ngx_current_msec; + 
ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic path seq:%uL created addr:%V", path->seqnum, &path->addr_text); diff -r b87a0dbc1150 -r 8723d4282f6d src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c Tue Oct 25 12:52:09 2022 +0400 +++ b/src/event/quic/ngx_event_quic_output.c Mon Dec 12 15:47:03 2022 +0800 @@ -87,7 +87,7 @@ c->log->action = "sending frames"; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; in_flight = cg->in_flight; @@ -135,8 +135,8 @@ static u_char dst[NGX_QUIC_MAX_UDP_PAYLOAD_SIZE]; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; path = qc->path; + cg = &path->congestion; while (cg->in_flight < cg->window) { @@ -223,7 +223,7 @@ qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; while (!ngx_queue_empty(&ctx->sending)) { @@ -336,8 +336,8 @@ static u_char dst[NGX_QUIC_MAX_UDP_SEGMENT_BUF]; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; path = qc->path; + cg = &path->congestion; ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); -- Yu Zhu From: Roman Arutyunyan Date: Wednesday, December 7, 2022 at 23:05 To: nginx-devel at nginx.org Subject: Re: QUIC: reworked congestion control mechanism. Hi, Thanks for the patch. On Tue, Dec 06, 2022 at 02:35:37PM +0000, 朱宇 wrote: > Hi, > > # HG changeset patch > # User Yu Zhu > # Date 1670326031 -28800 > # Tue Dec 06 19:27:11 2022 +0800 > # Branch quic > # Node ID 9a47ff1223bb32c8ddb146d731b395af89769a97 > # Parent 1a320805265db14904ca9deaae8330f4979619ce > QUIC: reworked congestion control mechanism. > > 1. move rtt measurement and congestion control to struct ngx_quic_path_t > because RTT and congestion control are properities of the path. I think this part should be moved out to a separate patch. > 2. introduced struct "ngx_quic_congestion_ops_t" to wrap callback functions > of congestion control and extract the reno algorithm from ngx_event_quic_ack.c.
The biggest question about this part is how extensible this approach is. We are planning to implement more congestion control algorithms in the future and need a framework that would allow us to do that. Even CUBIC needs more data fields than we have now, and BBR will probably need much more than that. Not sure how we'll add those data fields considering the proposed modular design. Also, we need to make sure the API is sufficient for future algorithms. I suggest that we finish the first part, which moves congestion control to the path object. Then, until we have at least one other congestion control algorithm supported, it's hard to come up with a good API for it. I think we can postpone the second part until then. Also, I think CUBIC can be hardcoded into Reno without a modular redesign of the code. > No functional changes. [..] > diff -r 1a320805265d -r 9a47ff1223bb src/event/quic/congestion/ngx_quic_reno.c > --- /dev/null Thu Jan 01 00:00:00 1970 +0000 > +++ b/src/event/quic/congestion/ngx_quic_reno.c Tue Dec 06 19:27:11 2022 +0800 > @@ -0,0 +1,133 @@ > + > +/* > + * Copyright (C) Nginx, Inc.
> + */ > + > + > +#include > +#include > +#include > +#include > + > + > +static void ngx_quic_reno_on_init(ngx_connection_t *c, ngx_quic_congestion_t *cg); > +static ngx_int_t ngx_quic_reno_on_ack(ngx_connection_t *c, ngx_quic_frame_t *f); > +static ngx_int_t ngx_quic_reno_on_lost(ngx_connection_t *c, ngx_quic_frame_t *f); > + > + > +ngx_quic_congestion_ops_t ngx_quic_reno = { > + ngx_string("reno"), > + ngx_quic_reno_on_init, > + ngx_quic_reno_on_ack, > + ngx_quic_reno_on_lost > +}; > + > + > +static void > +ngx_quic_reno_on_init(ngx_connection_t *c, ngx_quic_congestion_t *cg) > +{ > + ngx_quic_connection_t *qc; > + > + qc = ngx_quic_get_connection(c); > + > + cg->window = ngx_min(10 * qc->tp.max_udp_payload_size, > + ngx_max(2 * qc->tp.max_udp_payload_size, > + 14720)); > + cg->ssthresh = (size_t) -1; > + cg->recovery_start = ngx_current_msec; > +} > + > + > +static ngx_int_t > +ngx_quic_reno_on_ack(ngx_connection_t *c, ngx_quic_frame_t *f) > +{ > + ngx_msec_t timer; > + ngx_quic_path_t *path; > + ngx_quic_connection_t *qc; > + ngx_quic_congestion_t *cg; > + > + qc = ngx_quic_get_connection(c); > + path = qc->path; What if the packet was sent on a different path? 
> + > + cg = &path->congestion; > + > + cg->in_flight -= f->plen; > + > + timer = f->last - cg->recovery_start; > + > + if ((ngx_msec_int_t) timer <= 0) { > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion ack recovery win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + return NGX_DONE; > + } > + > + if (cg->window < cg->ssthresh) { > + cg->window += f->plen; > + > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion slow start win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + } else { > + cg->window += qc->tp.max_udp_payload_size * f->plen / cg->window; > + > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion avoidance win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + } > + > + /* prevent recovery_start from wrapping */ > + > + timer = cg->recovery_start - ngx_current_msec + qc->tp.max_idle_timeout * 2; > + > + if ((ngx_msec_int_t) timer < 0) { > + cg->recovery_start = ngx_current_msec - qc->tp.max_idle_timeout * 2; > + } > + > + return NGX_OK; > +} > + > + > +static ngx_int_t > +ngx_quic_reno_on_lost(ngx_connection_t *c, ngx_quic_frame_t *f) > +{ > + ngx_msec_t timer; > + ngx_quic_path_t *path; > + ngx_quic_connection_t *qc; > + ngx_quic_congestion_t *cg; > + > + qc = ngx_quic_get_connection(c); > + path = qc->path; Same here. 
> + > + cg = &path->congestion; > + > + cg->in_flight -= f->plen; > + f->plen = 0; > + > + timer = f->last - cg->recovery_start; > + > + if ((ngx_msec_int_t) timer <= 0) { > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion lost recovery win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + return NGX_DONE; > + } > + > + cg->recovery_start = ngx_current_msec; > + cg->window /= 2; > + > + if (cg->window < qc->tp.max_udp_payload_size * 2) { > + cg->window = qc->tp.max_udp_payload_size * 2; > + } > + > + cg->ssthresh = cg->window; > + > + ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic congestion lost win:%uz ss:%z if:%uz", > + cg->window, cg->ssthresh, cg->in_flight); > + > + return NGX_OK; > +} [..] -- Roman Arutyunyan _______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.solanellas at esade.edu Mon Dec 12 10:43:26 2022 From: xavier.solanellas at esade.edu (Solanellas Llobet, Xavier) Date: Mon, 12 Dec 2022 10:43:26 +0000 Subject: Special character # in url Message-ID: Hi everybody, I'm trying to redirect a URL like /text1/text2#/text3 to the URL /text1/text2/text3. The problem is that the # character cuts the URL in the browser, and nginx receives only /text1/text2. Does anybody have a workaround in nginx? The solution seems to be JavaScript, but how can I implement it? Thanks!!!!! Xavier -------------- next part -------------- An HTML attachment was scrubbed...
URL: From serg.brester at sebres.de Mon Dec 12 10:57:17 2022 From: serg.brester at sebres.de (Sergey Brester) Date: Mon, 12 Dec 2022 11:57:17 +0100 Subject: Special character # in url In-Reply-To: References: Message-ID: <022c5876959dedba44c9d9de277caea7@sebres.de> Hi, firstly, please don't use nginx-devel for such questions; this is a developer mailing list, reserved for development purposes only. Furthermore, it is not an nginx-related question at all. Browsers handle the #-char as an internal jump to the anchor identified by an element ID on the page, so nginx (or whatever web server) cannot and will not receive the part of the URI after the #-char from your agent. If you need such a character in the URI, you have to URL-encode it (# ==> %23). Regards, sebres 12.12.2022 11:43, Solanellas Llobet, Xavier wrote: > Hi everybody, > > I'm trying to redirect an url like /text1/text2#/text3 to url /text1/text2/text3 > > The problem seems that # character cut url in navigator, and nginx receives /text1/text2 > > Anybody have an workarround from nginx? > > The solution seems use javascript but how can implement it? > > Thanks!!!!! > Xavier > > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus at bodin.org Mon Dec 12 11:20:10 2022 From: magnus at bodin.org (=?UTF-8?Q?Magnus_Bodin_=E2=98=80?=) Date: Mon, 12 Dec 2022 12:20:10 +0100 Subject: Special character # in url In-Reply-To: References: Message-ID: Fragment identifiers are not sent to the server. On Mon, Dec 12, 2022 at 11:43 AM Solanellas Llobet, Xavier < xavier.solanellas at esade.edu> wrote: > Hi everybody, > > I'm trying to redirect an url like /text1/text2#/text3 to url > /text1/text2/text3 > > The problem seems that # character cut url in navigator, and nginx > receives /text1/text2 > > Anybody have an workarround from nginx?
> > The solution seems use javascript but how can implement it? > > Thanks!!!!! > Xavier > > > > > > > > > > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Mon Dec 12 15:10:43 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 12 Dec 2022 19:10:43 +0400 Subject: [PATCH] proxy_cache_max_range_offset could affect the background updating In-Reply-To: References: Message-ID: > On 7 Dec 2022, at 09:26, Sangdeuk Kwon wrote: > > # HG changeset patch > # User Sangdeuk Kwon > # Date 1670390583 -32400 > # Wed Dec 07 14:23:03 2022 +0900 > # Node ID a1069fbf10ffd806b7c8d6deb3f6546edc7b0427 > # Parent 0b360747c74e3fa7e439e0684a8cf1da2d14d8f6 > proxy_cache_max_range_offset could affect the background updating > > proxy_cache_max_range_offset doesn't care about the upstream of background updating. > So, nginx drops the new cache file after background updating. > This behavior is strange because background updating is just to fetch > new content after serving a stale cache, not to serve it. > > I think the background updating should be not affected by proxy_cache_max_range_offset. > Indeed, such configuration with far ranges results in useless background (range) subrequests, the response is discarded. While the behaviour matches the description of proxy_cache_max_range_offset, I believe the original intention was to avoid latencies in sending output to the client, imposed by obtaining a new full resource from upstream. As long as background subrequests don't delay sending output to the client, it should be ok to keep caching of (full) response enabled, regardless of whether the proxy_cache_max_range_offset limit is hit. 
> Related directives: > proxy_cache_max_range_offset 10; > proxy_cache_use_stale updating; > proxy_cache_background_update on; > > diff -r 0b360747c74e -r a1069fbf10ff src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Thu Nov 24 23:08:30 2022 +0400 > +++ b/src/http/ngx_http_upstream.c Wed Dec 07 14:23:03 2022 +0900 > @@ -986,7 +986,9 @@ > return rc; > } > > - if (ngx_http_upstream_cache_check_range(r, u) == NGX_DECLINED) { > + if (!r->background > + && ngx_http_upstream_cache_check_range(r, u) == NGX_DECLINED) > + { > u->cacheable = 0; > } > Given the above, I cannot imagine that testing far ranges is of any use for background subrequests in practice. As such, I think the condition can be moved inside the ngx_http_upstream_cache_check_range() function. -- Sergey Kandaurov From sangdeuk.kwon at quantil.com Tue Dec 13 04:44:43 2022 From: sangdeuk.kwon at quantil.com (Sangdeuk Kwon) Date: Tue, 13 Dec 2022 13:44:43 +0900 Subject: [PATCH] proxy_cache_max_range_offset could affect the background updating Message-ID: # HG changeset patch # User Sangdeuk Kwon # Date 1670906112 -32400 # Tue Dec 13 13:35:12 2022 +0900 # Node ID 8c273203bd8e7280d3ce1895a37c7ef5323eea2b # Parent 56819a9491fe2ee1dcfe4986bed913b894fc0360 proxy_cache_max_range_offset could affect the background updating proxy_cache_max_range_offset doesn't care about the upstream of background updating. So, nginx drops the new cache file after background updating. This behavior is strange because background updating is just to fetch new content after serving a stale cache, not to serve it. I think the background updating should not be affected by proxy_cache_max_range_offset.
Related directives: proxy_cache_max_range_offset 10; proxy_cache_use_stale updating; proxy_cache_background_update on; diff -r 56819a9491fe -r 8c273203bd8e src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Thu Dec 01 04:22:36 2022 +0300 +++ b/src/http/ngx_http_upstream.c Tue Dec 13 13:35:12 2022 +0900 @@ -1143,7 +1143,8 @@ if (h == NULL || !u->cacheable - || u->conf->cache_max_range_offset == NGX_MAX_OFF_T_VALUE) + || u->conf->cache_max_range_offset == NGX_MAX_OFF_T_VALUE + || r->background) { return NGX_OK; } I moved the condition into ngx_http_upstream_cache_check_range() function. >> On 7 Dec 2022, at 09:26, Sangdeuk Kwon wrote: >> >> # HG changeset patch >> # User Sangdeuk Kwon >> # Date 1670390583 -32400 >> # Wed Dec 07 14:23:03 2022 +0900 >> # Node ID a1069fbf10ffd806b7c8d6deb3f6546edc7b0427 >> # Parent 0b360747c74e3fa7e439e0684a8cf1da2d14d8f6 >> proxy_cache_max_range_offset could affect the background updating >> >> proxy_cache_max_range_offset doesn't care about the upstream of background updating. >> So, nginx drops the new cache file after background updating. >> This behavior is strange because background updating is just to fetch >> new content after serving a stale cache, not to serve it. >> >> I think the background updating should be not affected by proxy_cache_max_range_offset. >> > > Indeed, such configuration with far ranges results in useless background > (range) subrequests, the response is discarded. > > While the behaviour matches the description of proxy_cache_max_range_offset, > I believe the original intention was to avoid latencies in sending output > to the client, imposed by obtaining a new full resource from upstream. > As long as background subrequests don't delay sending output to the client, > it should be ok to keep caching of (full) response enabled, regardless of > whether the proxy_cache_max_range_offset limit is hit. 
> >> Related directives: >> proxy_cache_max_range_offset 10; >> proxy_cache_use_stale updating; >> proxy_cache_background_update on; >> >> diff -r 0b360747c74e -r a1069fbf10ff src/http/ngx_http_upstream.c >> --- a/src/http/ngx_http_upstream.c Thu Nov 24 23:08:30 2022 +0400 >> +++ b/src/http/ngx_http_upstream.c Wed Dec 07 14:23:03 2022 +0900 >> @@ -986,7 +986,9 @@ >> return rc; >> } >> >> - if (ngx_http_upstream_cache_check_range(r, u) == NGX_DECLINED) { >> + if (!r->background >> + && ngx_http_upstream_cache_check_range(r, u) == NGX_DECLINED) >> + { >> u->cacheable = 0; >> } >> > > Given the above, I cannot imagine that testing far ranges > is anyhow usable for background subrequests in practice. > As such, I think the condition can be moved inside of the > ngx_http_upstream_cache_check_range() function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 13 15:00:01 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Dec 2022 18:00:01 +0300 Subject: nginx 1.23.3 changes draft Message-ID: Hello! Changes with nginx 1.23.3 13 Dec 2022 *) Bugfix: an error might occur when reading PROXY protocol version 2 header with large number of TLVs. *) Bugfix: a segmentation fault might occur in a worker process if SSI was used to process subrequests created by other modules. Thanks to Ciel Zhao. *) Workaround: when a hostname used in the "listen" directive resolves to multiple addresses, nginx now ignores duplicates within these addresses. *) Bugfix: nginx might hog CPU during unbuffered proxying if SSL connections to backends were used. Изменения в nginx 1.23.3 13.12.2022 *) Исправление: при чтении заголовка протокола PROXY версии 2, содержащего большое количество TLV, могла возникать ошибка. *) Исправление: при использовании SSI для обработки подзапросов, созданных другими модулями, в рабочем процессе мог произойти segmentation fault. Спасибо Ciel Zhao. 
*) Изменение: теперь, если при преобразовании в адреса имени хоста, указанного в директив listen, возвращается несколько адресов, nginx игнорирует дубликаты среди этих адресов. *) Исправление: nginx мог нагружать процессор при небуферизированном проксировании, если использовались SSL-соединения с бэкендами. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 13 15:08:56 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Dec 2022 18:08:56 +0300 Subject: [PATCH] proxy_cache_max_range_offset could affect the background updating In-Reply-To: References: Message-ID: Hello! On Mon, Dec 12, 2022 at 07:10:43PM +0400, Sergey Kandaurov wrote: > > On 7 Dec 2022, at 09:26, Sangdeuk Kwon wrote: > > > > # HG changeset patch > > # User Sangdeuk Kwon > > # Date 1670390583 -32400 > > # Wed Dec 07 14:23:03 2022 +0900 > > # Node ID a1069fbf10ffd806b7c8d6deb3f6546edc7b0427 > > # Parent 0b360747c74e3fa7e439e0684a8cf1da2d14d8f6 > > proxy_cache_max_range_offset could affect the background updating > > > > proxy_cache_max_range_offset doesn't care about the upstream of background updating. > > So, nginx drops the new cache file after background updating. > > This behavior is strange because background updating is just to fetch > > new content after serving a stale cache, not to serve it. > > > > I think the background updating should be not affected by proxy_cache_max_range_offset. > > > > Indeed, such configuration with far ranges results in useless background > (range) subrequests, the response is discarded. > > While the behaviour matches the description of proxy_cache_max_range_offset, > I believe the original intention was to avoid latencies in sending output > to the client, imposed by obtaining a new full resource from upstream. > As long as background subrequests don't delay sending output to the client, > it should be ok to keep caching of (full) response enabled, regardless of > whether the proxy_cache_max_range_offset limit is hit. 
What is expected to happen if background requests are used for something other than cache updates? For example, such requests can be created by the mirror module. And I believe njs tried to use background requests as well. Also note that the "background subrequests don't delay sending output to the client" claim is not really correct. They do delay main request finalization, and therefore can delay further requests on the same connection. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Dec 13 15:10:49 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 13 Dec 2022 19:10:49 +0400 Subject: nginx 1.23.3 changes draft In-Reply-To: References: Message-ID: > On 13 Dec 2022, at 19:00, Maxim Dounin wrote: > > Hello! > > > Changes with nginx 1.23.3 13 Dec 2022 > > *) Bugfix: an error might occur when reading PROXY protocol version 2 > header with large number of TLVs. > > *) Bugfix: a segmentation fault might occur in a worker process if SSI > was used to process subrequests created by other modules. > Thanks to Ciel Zhao. > > *) Workaround: when a hostname used in the "listen" directive resolves > to multiple addresses, nginx now ignores duplicates within these > addresses. > > *) Bugfix: nginx might hog CPU during unbuffered proxying if SSL > connections to backends were used. > > > Изменения в nginx 1.23.3 13.12.2022 > > *) Исправление: при чтении заголовка протокола PROXY версии 2, > содержащего большое количество TLV, могла возникать ошибка. > > *) Исправление: при использовании SSI для обработки подзапросов, > созданных другими модулями, в рабочем процессе мог произойти > segmentation fault. > Спасибо Ciel Zhao. > > *) Изменение: теперь, если при преобразовании в адреса имени хоста, > указанного в директив listen, возвращается несколько адресов, nginx > игнорирует дубликаты среди этих адресов. > > *) Исправление: nginx мог нагружать процессор при небуферизированном > проксировании, если использовались SSL-соединения с бэкендами. 
> Looks good to me. -- Sergey Kandaurov From yar at nginx.com Tue Dec 13 15:13:47 2022 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Tue, 13 Dec 2022 15:13:47 +0000 Subject: nginx 1.23.3 changes draft In-Reply-To: References: Message-ID: <1594E7C4-357B-4AB0-9AF2-9CAB43C5745D@nginx.com> > On 13 Dec 2022, at 15:00, Maxim Dounin wrote: > > Hello! > > > Changes with nginx 1.23.3 13 Dec 2022 > > *) Bugfix: an error might occur when reading PROXY protocol version 2 > header with large number of TLVs. > > *) Bugfix: a segmentation fault might occur in a worker process if SSI > was used to process subrequests created by other modules. > Thanks to Ciel Zhao. > > *) Workaround: when a hostname used in the "listen" directive resolves > to multiple addresses, nginx now ignores duplicates within these > addresses. > > *) Bugfix: nginx might hog CPU during unbuffered proxying if SSL > connections to backends were used. > > > Изменения в nginx 1.23.3 13.12.2022 > > *) Исправление: при чтении заголовка протокола PROXY версии 2, > содержащего большое количество TLV, могла возникать ошибка. > > *) Исправление: при использовании SSI для обработки подзапросов, > созданных другими модулями, в рабочем процессе мог произойти > segmentation fault. > Спасибо Ciel Zhao. > > *) Изменение: теперь, если при преобразовании в адреса имени хоста, > указанного в директив listen, возвращается несколько адресов, nginx в директиве > игнорирует дубликаты среди этих адресов. > > *) Исправление: nginx мог нагружать процессор при небуферизированном > проксировании, если использовались SSL-соединения с бэкендами. 
> > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Tue Dec 13 15:15:35 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Dec 2022 18:15:35 +0300 Subject: nginx 1.23.3 changes draft In-Reply-To: <1594E7C4-357B-4AB0-9AF2-9CAB43C5745D@nginx.com> References: <1594E7C4-357B-4AB0-9AF2-9CAB43C5745D@nginx.com> Message-ID: Hello! On Tue, Dec 13, 2022 at 03:13:47PM +0000, Yaroslav Zhuravlev wrote: > > On 13 Dec 2022, at 15:00, Maxim Dounin wrote: > > > > Hello! > > > > > > Changes with nginx 1.23.3 13 Dec 2022 > > > > *) Bugfix: an error might occur when reading PROXY protocol version 2 > > header with large number of TLVs. > > > > *) Bugfix: a segmentation fault might occur in a worker process if SSI > > was used to process subrequests created by other modules. > > Thanks to Ciel Zhao. > > > > *) Workaround: when a hostname used in the "listen" directive resolves > > to multiple addresses, nginx now ignores duplicates within these > > addresses. > > > > *) Bugfix: nginx might hog CPU during unbuffered proxying if SSL > > connections to backends were used. > > > > > > Изменения в nginx 1.23.3 13.12.2022 > > > > *) Исправление: при чтении заголовка протокола PROXY версии 2, > > содержащего большое количество TLV, могла возникать ошибка. > > > > *) Исправление: при использовании SSI для обработки подзапросов, > > созданных другими модулями, в рабочем процессе мог произойти > > segmentation fault. > > Спасибо Ciel Zhao. > > > > *) Изменение: теперь, если при преобразовании в адреса имени хоста, > > указанного в директив listen, возвращается несколько адресов, nginx > > в директиве Fixed, thnx. 
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 13 16:06:35 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 13 Dec 2022 19:06:35 +0300 Subject: nginx 1.23.3 changes draft In-Reply-To: References: Message-ID: Hello! On Tue, Dec 13, 2022 at 06:00:01PM +0300, Maxim Dounin wrote: > Changes with nginx 1.23.3 13 Dec 2022 > > *) Bugfix: an error might occur when reading PROXY protocol version 2 > header with large number of TLVs. > > *) Bugfix: a segmentation fault might occur in a worker process if SSI > was used to process subrequests created by other modules. > Thanks to Ciel Zhao. > > *) Workaround: when a hostname used in the "listen" directive resolves > to multiple addresses, nginx now ignores duplicates within these > addresses. > > *) Bugfix: nginx might hog CPU during unbuffered proxying if SSL > connections to backends were used. Pushed to: http://mdounin.ru/hg/nginx http://mdounin.ru/hg/nginx.org Release builds: http://mdounin.ru/temp/nginx-1.23.3.tar.gz http://mdounin.ru/temp/nginx-1.23.3.tar.gz.asc http://mdounin.ru/temp/nginx-1.23.3.zip http://mdounin.ru/temp/nginx-1.23.3.zip.asc -- Maxim Dounin http://mdounin.ru/ From thresh at nginx.com Tue Dec 13 16:16:00 2022 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 13 Dec 2022 16:16:00 +0000 Subject: [nginx] Updated OpenSSL and zlib used for win32 builds. Message-ID: details: https://hg.nginx.org/nginx/rev/9ed5778f5d4a branches: changeset: 8112:9ed5778f5d4a user: Maxim Dounin date: Tue Dec 13 03:32:57 2022 +0300 description: Updated OpenSSL and zlib used for win32 builds. 
diffstat: misc/GNUmakefile | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines):

diff -r 56819a9491fe -r 9ed5778f5d4a misc/GNUmakefile
--- a/misc/GNUmakefile	Thu Dec 01 04:22:36 2022 +0300
+++ b/misc/GNUmakefile	Tue Dec 13 03:32:57 2022 +0300
@@ -6,8 +6,8 @@
 TEMP = tmp
 CC = cl
 OBJS = objs.msvc8
-OPENSSL = openssl-1.1.1q
-ZLIB = zlib-1.2.12
+OPENSSL = openssl-1.1.1s
+ZLIB = zlib-1.2.13
 PCRE = pcre2-10.39

From thresh at nginx.com Tue Dec 13 16:16:04 2022 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 13 Dec 2022 16:16:04 +0000 Subject: [nginx] nginx-1.23.3-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/ff3afd1ce6a6 branches: changeset: 8113:ff3afd1ce6a6 user: Maxim Dounin date: Tue Dec 13 18:53:53 2022 +0300 description: nginx-1.23.3-RELEASE diffstat: docs/xml/nginx/changes.xml | 55 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 55 insertions(+), 0 deletions(-) diffs (65 lines):

diff -r 9ed5778f5d4a -r ff3afd1ce6a6 docs/xml/nginx/changes.xml
--- a/docs/xml/nginx/changes.xml	Tue Dec 13 03:32:57 2022 +0300
+++ b/docs/xml/nginx/changes.xml	Tue Dec 13 18:53:53 2022 +0300
@@ -5,6 +5,61 @@
 
 
+<changes ver="1.23.3" date="2022-12-13">
+
+<change type="bugfix">
+<para lang="ru">
+при чтении заголовка протокола PROXY версии 2, содержащего
+большое количество TLV, могла возникать ошибка.
+</para>
+<para lang="en">
+an error might occur when reading PROXY protocol version 2 header
+with large number of TLVs.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+при использовании SSI для обработки подзапросов, созданных другими модулями,
+в рабочем процессе мог произойти segmentation fault.<br/>
+Спасибо Ciel Zhao.
+</para>
+<para lang="en">
+a segmentation fault might occur in a worker process
+if SSI was used to process subrequests created by other modules.<br/>
+Thanks to Ciel Zhao.
+</para>
+</change>
+
+<change type="workaround">
+<para lang="ru">
+теперь, если при преобразовании в адреса имени хоста,
+указанного в директиве listen, возвращается несколько адресов,
+nginx игнорирует дубликаты среди этих адресов.
+</para>
+<para lang="en">
+when a hostname used in the "listen" directive
+resolves to multiple addresses,
+nginx now ignores duplicates within these addresses.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+nginx мог нагружать процессор
+при небуферизированном проксировании,
+если использовались SSL-соединения с бэкендами.
+</para>
+<para lang="en">
+nginx might hog CPU
+during unbuffered proxying
+if SSL connections to backends were used.
+</para>
+</change>
+
+</changes>
+
+
 <changes ver="1.23.2" date="2022-10-19">

From thresh at nginx.com Tue Dec 13 16:16:07 2022 From: thresh at nginx.com (Konstantin Pavlov) Date: Tue, 13 Dec 2022 16:16:07 +0000 Subject: [nginx] release-1.23.3 tag Message-ID: details: https://hg.nginx.org/nginx/rev/c38588d8376b branches: changeset: 8114:c38588d8376b user: Maxim Dounin date: Tue Dec 13 18:53:53 2022 +0300 description: release-1.23.3 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines):

diff -r ff3afd1ce6a6 -r c38588d8376b .hgtags
--- a/.hgtags	Tue Dec 13 18:53:53 2022 +0300
+++ b/.hgtags	Tue Dec 13 18:53:53 2022 +0300
@@ -470,3 +470,4 @@ 714eb4b2c09e712fb2572a2164ce2bf67638ccac
 5da2c0902e8e2aa4534008a582a60c61c135960e release-1.23.0
 a63d0a70afea96813ba6667997bc7d68b5863f0d release-1.23.1
 aa901551a7ebad1e8b0f8c11cb44e3424ba29707 release-1.23.2
+ff3afd1ce6a6b65057741df442adfaa71a0e2588 release-1.23.3

From xeioex at nginx.com Tue Dec 13 17:13:17 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 13 Dec 2022 17:13:17 +0000 Subject: [njs] Modules: fixed nginx logger callback for calls in master. Message-ID: details: https://hg.nginx.org/njs/rev/f23c541c02ad branches: changeset: 2014:f23c541c02ad user: Dmitry Volyntsev date: Mon Dec 12 21:55:47 2022 -0800 description: Modules: fixed nginx logger callback for calls in master.
diffstat: nginx/ngx_js.c | 24 +++++++++++++++++++----- 1 files changed, 19 insertions(+), 5 deletions(-) diffs (39 lines):

diff -r 23607989a28b -r f23c541c02ad nginx/ngx_js.c
--- a/nginx/ngx_js.c	Wed Dec 07 18:11:57 2022 -0800
+++ b/nginx/ngx_js.c	Mon Dec 12 21:55:47 2022 -0800
@@ -376,16 +376,30 @@ void
 ngx_js_logger(njs_vm_t *vm, njs_external_ptr_t external,
     njs_log_level_t level, const u_char *start, size_t length)
 {
+    ngx_log_t           *log;
     ngx_connection_t    *c;
     ngx_log_handler_pt   handler;
 
-    c = ngx_external_connection(vm, external);
-    handler = c->log->handler;
-    c->log->handler = NULL;
+    handler = NULL;
+
+    if (external != NULL) {
+        c = ngx_external_connection(vm, external);
+        log = c->log;
+        handler = log->handler;
+        log->handler = NULL;
+
+    } else {
 
-    ngx_log_error((ngx_uint_t) level, c->log, 0, "js: %*s", length, start);
+        /* Logger was called during init phase. */
+
+        log = ngx_cycle->log;
+    }
 
-    c->log->handler = handler;
+    ngx_log_error((ngx_uint_t) level, log, 0, "js: %*s", length, start);
+
+    if (external != NULL) {
+        log->handler = handler;
+    }
 }

From xeioex at nginx.com Tue Dec 13 17:13:19 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 13 Dec 2022 17:13:19 +0000 Subject: [njs] Modules: added Request, Response and Headers ctors in Fetch API. Message-ID: details: https://hg.nginx.org/njs/rev/c43261bad627 branches: changeset: 2015:c43261bad627 user: Dmitry Volyntsev date: Mon Dec 12 22:00:23 2022 -0800 description: Modules: added Request, Response and Headers ctors in Fetch API. Added Headers method and properties: append(), delete(), get(), forEach(), has(), set(). Added Request method and properties: arrayBuffer(), bodyUsed, cache, credentials, json(), method, mode, text(), url. Added Headers, Request, Response constructors. This closes #425 issue on Github.
diffstat: nginx/ngx_js_fetch.c | 2258 ++++++++++++++++++++++++++++++++++++++++++------- 1 files changed, 1936 insertions(+), 322 deletions(-) diffs (truncated from 2786 to 1000 lines): diff -r f23c541c02ad -r c43261bad627 nginx/ngx_js_fetch.c --- a/nginx/ngx_js_fetch.c Mon Dec 12 21:55:47 2022 -0800 +++ b/nginx/ngx_js_fetch.c Mon Dec 12 22:00:23 2022 -0800 @@ -18,6 +18,12 @@ typedef struct ngx_js_http_s ngx_js_htt typedef struct { + njs_str_t name; + njs_int_t value; +} ngx_js_entry_t; + + +typedef struct { ngx_uint_t state; ngx_uint_t code; u_char *status_text; @@ -41,6 +47,59 @@ typedef struct { } ngx_js_http_chunk_parse_t; +typedef struct { + enum { + GUARD_NONE = 0, + GUARD_REQUEST, + GUARD_IMMUTABLE, + GUARD_RESPONSE, + } guard; + ngx_list_t header_list; +} ngx_js_headers_t; + + +typedef struct { + enum { + CACHE_MODE_DEFAULT = 0, + CACHE_MODE_NO_STORE, + CACHE_MODE_RELOAD, + CACHE_MODE_NO_CACHE, + CACHE_MODE_FORCE_CACHE, + CACHE_MODE_ONLY_IF_CACHED, + } cache_mode; + enum { + CREDENTIALS_SAME_ORIGIN = 0, + CREDENTIALS_INCLUDE, + CREDENTIALS_OMIT, + } credentials; + enum { + MODE_NO_CORS = 0, + MODE_SAME_ORIGIN, + MODE_CORS, + MODE_NAVIGATE, + MODE_WEBSOCKET, + } mode; + njs_str_t url; + njs_str_t method; + u_char m[8]; + uint8_t body_used; + njs_str_t body; + ngx_js_headers_t headers; + njs_opaque_value_t header_value; +} ngx_js_request_t; + + +typedef struct { + njs_str_t url; + ngx_int_t code; + njs_str_t status_text; + uint8_t body_used; + njs_chb_t chain; + ngx_js_headers_t headers; + njs_opaque_value_t header_value; +} ngx_js_response_t; + + struct ngx_js_http_s { ngx_log_t *log; ngx_pool_t *pool; @@ -63,9 +122,6 @@ struct ngx_js_http_s { ngx_int_t buffer_size; ngx_int_t max_response_body_size; - njs_str_t url; - ngx_array_t headers; - unsigned header_only; #if (NGX_SSL) @@ -78,26 +134,35 @@ struct ngx_js_http_s { ngx_buf_t *chunk; njs_chb_t chain; - njs_opaque_value_t reply; + ngx_js_response_t response; + njs_opaque_value_t response_value; + 
njs_opaque_value_t promise; njs_opaque_value_t promise_callbacks[2]; uint8_t done; - uint8_t body_used; ngx_js_http_parse_t http_parse; ngx_js_http_chunk_parse_t http_chunk_parse; ngx_int_t (*process)(ngx_js_http_t *http); }; + + #define ngx_js_http_error(http, err, fmt, ...) \ do { \ - njs_vm_value_error_set((http)->vm, njs_value_arg(&(http)->reply), \ + njs_vm_value_error_set((http)->vm, \ + njs_value_arg(&(http)->response_value), \ fmt, ##__VA_ARGS__); \ - ngx_js_http_fetch_done(http, &(http)->reply, NJS_ERROR); \ + ngx_js_http_fetch_done(http, &(http)->response_value, NJS_ERROR); \ } while (0) +static njs_int_t ngx_js_method_process(njs_vm_t *vm, ngx_js_request_t *r); +static njs_int_t ngx_js_headers_inherit(njs_vm_t *vm, ngx_js_headers_t *headers, + ngx_js_headers_t *orig); +static njs_int_t ngx_js_headers_fill(njs_vm_t *vm, ngx_js_headers_t *headers, + njs_value_t *init); static ngx_js_http_t *ngx_js_http_alloc(njs_vm_t *vm, ngx_pool_t *pool, ngx_log_t *log); static void njs_js_http_destructor(njs_external_ptr_t external, @@ -113,6 +178,14 @@ static void ngx_js_http_connect(ngx_js_h static void ngx_js_http_next(ngx_js_http_t *http); static void ngx_js_http_write_handler(ngx_event_t *wev); static void ngx_js_http_read_handler(ngx_event_t *rev); + +static njs_int_t ngx_js_request_constructor(njs_vm_t *vm, + ngx_js_request_t *request, ngx_url_t *u, njs_external_ptr_t external, + njs_value_t *args, njs_uint_t nargs); + +static njs_int_t ngx_js_headers_append(njs_vm_t *vm, ngx_js_headers_t *headers, + u_char *name, size_t len, u_char *value, size_t vlen); + static ngx_int_t ngx_js_http_process_status_line(ngx_js_http_t *http); static ngx_int_t ngx_js_http_process_headers(ngx_js_http_t *http); static ngx_int_t ngx_js_http_process_body(ngx_js_http_t *http); @@ -124,15 +197,39 @@ static ngx_int_t ngx_js_http_parse_chunk ngx_buf_t *b, njs_chb_t *chain); static void ngx_js_http_dummy_handler(ngx_event_t *ev); -static njs_int_t ngx_response_js_ext_headers_get(njs_vm_t 
*vm, - njs_value_t *args, njs_uint_t nargs, njs_index_t as_array); -static njs_int_t ngx_response_js_ext_headers_has(njs_vm_t *vm, - njs_value_t *args, njs_uint_t nargs, njs_index_t unused); -static njs_int_t ngx_response_js_ext_header(njs_vm_t *vm, +static njs_int_t ngx_headers_js_ext_append(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_headers_js_ext_delete(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_headers_js_ext_for_each(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t as_array); +static njs_int_t ngx_headers_js_ext_get(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t as_array); +static njs_int_t ngx_headers_js_ext_has(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_headers_js_ext_prop(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); -static njs_int_t ngx_response_js_ext_keys(njs_vm_t *vm, njs_value_t *value, +static njs_int_t ngx_headers_js_ext_keys(njs_vm_t *vm, njs_value_t *value, njs_value_t *keys); +static njs_int_t ngx_headers_js_ext_set(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_request_js_ext_body(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused); +static njs_int_t ngx_request_js_ext_body_used(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_request_js_ext_cache(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_request_js_ext_credentials(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); +static njs_int_t ngx_request_js_ext_headers(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); 
+static njs_int_t ngx_request_js_ext_mode(njs_vm_t *vm, njs_object_prop_t *prop, + njs_value_t *value, njs_value_t *setval, njs_value_t *retval); static njs_int_t ngx_response_js_ext_status(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); @@ -145,6 +242,9 @@ static njs_int_t ngx_response_js_ext_ok( static njs_int_t ngx_response_js_ext_body_used(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); +static njs_int_t ngx_response_js_ext_headers(njs_vm_t *vm, + njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, + njs_value_t *retval); static njs_int_t ngx_response_js_ext_type(njs_vm_t *vm, njs_object_prop_t *prop, njs_value_t *value, njs_value_t *setval, njs_value_t *retval); @@ -158,7 +258,44 @@ static void ngx_js_http_ssl_handshake(ng static njs_int_t ngx_js_http_ssl_name(ngx_js_http_t *http); #endif -static njs_external_t ngx_js_ext_http_response_headers[] = { +static void ngx_js_http_trim(u_char **value, size_t *len, + njs_bool_t trim_c0_control_or_space); +static njs_int_t ngx_fetch_flag(njs_vm_t *vm, const ngx_js_entry_t *entries, + njs_int_t value, njs_value_t *retval); +static njs_int_t ngx_fetch_flag_set(njs_vm_t *vm, const ngx_js_entry_t *entries, + njs_value_t *value, const char *type); + + +static const ngx_js_entry_t ngx_js_fetch_credentials[] = { + { njs_str("same-origin"), CREDENTIALS_SAME_ORIGIN }, + { njs_str("omit"), CREDENTIALS_OMIT }, + { njs_str("include"), CREDENTIALS_INCLUDE }, + { njs_null_str, 0 }, +}; + + +static const ngx_js_entry_t ngx_js_fetch_cache_modes[] = { + { njs_str("default"), CACHE_MODE_DEFAULT }, + { njs_str("no-store"), CACHE_MODE_NO_STORE }, + { njs_str("reload"), CACHE_MODE_RELOAD }, + { njs_str("no-cache"), CACHE_MODE_NO_CACHE }, + { njs_str("force-cache"), CACHE_MODE_FORCE_CACHE }, + { njs_str("only-if-cached"), CACHE_MODE_ONLY_IF_CACHED }, + { njs_null_str, 0 }, +}; + + +static const ngx_js_entry_t 
ngx_js_fetch_modes[] = { + { njs_str("no-cors"), MODE_NO_CORS }, + { njs_str("cors"), MODE_CORS }, + { njs_str("same-origin"), MODE_SAME_ORIGIN }, + { njs_str("navigate"), MODE_NAVIGATE }, + { njs_str("websocket"), MODE_WEBSOCKET }, + { njs_null_str, 0 }, +}; + + +static njs_external_t ngx_js_ext_http_headers[] = { { .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, @@ -169,13 +306,55 @@ static njs_external_t ngx_js_ext_http_r }, { + .flags = NJS_EXTERN_SELF, + .u.object = { + .enumerable = 1, + .prop_handler = ngx_headers_js_ext_prop, + .keys = ngx_headers_js_ext_keys, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("append"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_headers_js_ext_append, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("delete"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_headers_js_ext_delete, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("forEach"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_headers_js_ext_for_each, + } + }, + + { .flags = NJS_EXTERN_METHOD, .name.string = njs_str("get"), .writable = 1, .configurable = 1, .enumerable = 1, .u.method = { - .native = ngx_response_js_ext_headers_get, + .native = ngx_headers_js_ext_get, } }, @@ -186,7 +365,7 @@ static njs_external_t ngx_js_ext_http_r .configurable = 1, .enumerable = 1, .u.method = { - .native = ngx_response_js_ext_headers_get, + .native = ngx_headers_js_ext_get, .magic8 = 1 } }, @@ -198,7 +377,135 @@ static njs_external_t ngx_js_ext_http_r .configurable = 1, .enumerable = 1, .u.method = { - .native = ngx_response_js_ext_headers_has, + .native = ngx_headers_js_ext_has, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("set"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_headers_js_ext_set, + } + }, + +}; + + +static 
njs_external_t ngx_js_ext_http_request[] = { + + { + .flags = NJS_EXTERN_PROPERTY | NJS_EXTERN_SYMBOL, + .name.symbol = NJS_SYMBOL_TO_STRING_TAG, + .u.property = { + .value = "Request", + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("arrayBuffer"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_request_js_ext_body, +#define NGX_JS_BODY_ARRAY_BUFFER 0 +#define NGX_JS_BODY_JSON 1 +#define NGX_JS_BODY_TEXT 2 + .magic8 = NGX_JS_BODY_ARRAY_BUFFER + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("bodyUsed"), + .enumerable = 1, + .u.property = { + .handler = ngx_request_js_ext_body_used, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("cache"), + .enumerable = 1, + .u.property = { + .handler = ngx_request_js_ext_cache, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("credentials"), + .enumerable = 1, + .u.property = { + .handler = ngx_request_js_ext_credentials, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("json"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_request_js_ext_body, + .magic8 = NGX_JS_BODY_JSON + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("headers"), + .enumerable = 1, + .u.property = { + .handler = ngx_request_js_ext_headers, + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("method"), + .enumerable = 1, + .u.property = { + .handler = ngx_js_ext_string, + .magic32 = offsetof(ngx_js_request_t, method), + } + }, + + { + .flags = NJS_EXTERN_PROPERTY, + .name.string = njs_str("mode"), + .enumerable = 1, + .u.property = { + .handler = ngx_request_js_ext_mode, + } + }, + + { + .flags = NJS_EXTERN_METHOD, + .name.string = njs_str("text"), + .writable = 1, + .configurable = 1, + .enumerable = 1, + .u.method = { + .native = ngx_request_js_ext_body, + .magic8 = NGX_JS_BODY_TEXT + } + }, + + { + .flags = 
NJS_EXTERN_PROPERTY, + .name.string = njs_str("url"), + .enumerable = 1, + .u.property = { + .handler = ngx_js_ext_string, + .magic32 = offsetof(ngx_js_request_t, url), } }, @@ -223,9 +530,6 @@ static njs_external_t ngx_js_ext_http_r .enumerable = 1, .u.method = { .native = ngx_response_js_ext_body, -#define NGX_JS_BODY_ARRAY_BUFFER 0 -#define NGX_JS_BODY_JSON 1 -#define NGX_JS_BODY_TEXT 2 .magic8 = NGX_JS_BODY_ARRAY_BUFFER } }, @@ -240,15 +544,11 @@ static njs_external_t ngx_js_ext_http_r }, { - .flags = NJS_EXTERN_OBJECT, + .flags = NJS_EXTERN_PROPERTY, .name.string = njs_str("headers"), .enumerable = 1, - .u.object = { - .enumerable = 1, - .properties = ngx_js_ext_http_response_headers, - .nproperties = njs_nitems(ngx_js_ext_http_response_headers), - .prop_handler = ngx_response_js_ext_header, - .keys = ngx_response_js_ext_keys, + .u.property = { + .handler = ngx_response_js_ext_headers, } }, @@ -329,47 +629,42 @@ static njs_external_t ngx_js_ext_http_r .enumerable = 1, .u.property = { .handler = ngx_js_ext_string, - .magic32 = offsetof(ngx_js_http_t, url), + .magic32 = offsetof(ngx_js_response_t, url), } }, }; -static njs_int_t ngx_http_js_fetch_proto_id; +static njs_int_t ngx_http_js_fetch_request_proto_id; +static njs_int_t ngx_http_js_fetch_response_proto_id; +static njs_int_t ngx_http_js_fetch_headers_proto_id; njs_int_t ngx_js_ext_fetch(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused) { - int64_t i, length; njs_int_t ret; - njs_str_t method, body, name, header; ngx_url_t u; - njs_bool_t has_host; + ngx_uint_t i; ngx_pool_t *pool; - njs_value_t *init, *value, *headers, *keys; + njs_value_t *init, *value; ngx_js_http_t *http; + ngx_list_part_t *part; + ngx_table_elt_t *h; + ngx_js_request_t request; ngx_connection_t *c; ngx_resolver_ctx_t *ctx; njs_external_ptr_t external; - njs_opaque_value_t *start, lvalue, headers_value; - - static const njs_str_t body_key = njs_str("body"); - static const njs_str_t headers_key = njs_str("headers"); 
+ njs_opaque_value_t lvalue; + static const njs_str_t buffer_size_key = njs_str("buffer_size"); static const njs_str_t body_size_key = njs_str("max_response_body_size"); - static const njs_str_t method_key = njs_str("method"); #if (NGX_SSL) static const njs_str_t verify_key = njs_str("verify"); #endif - external = njs_vm_external(vm, NJS_PROTO_ID_ANY, njs_argument(args, 0)); - if (external == NULL) { - njs_vm_error(vm, "\"this\" is not an external"); - return NJS_ERROR; - } - + external = njs_vm_external_ptr(vm); c = ngx_external_connection(vm, external); pool = ngx_external_pool(vm, external); @@ -379,76 +674,29 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value } http->external = external; + http->event_handler = ngx_external_event_handler(vm, external); + + ret = ngx_js_request_constructor(vm, &request, &u, external, args, nargs); + if (ret != NJS_OK) { + goto fail; + } + + http->response.url = request.url; http->timeout = ngx_external_fetch_timeout(vm, external); - http->event_handler = ngx_external_event_handler(vm, external); http->buffer_size = ngx_external_buffer_size(vm, external); http->max_response_body_size = ngx_external_max_response_buffer_size(vm, external); - ret = ngx_js_string(vm, njs_arg(args, nargs, 1), &http->url); - if (ret != NJS_OK) { - njs_vm_error(vm, "failed to convert url arg"); - goto fail; - } - - ngx_memzero(&u, sizeof(ngx_url_t)); - - u.url.len = http->url.length; - u.url.data = http->url.start; - u.default_port = 80; - u.uri_part = 1; - u.no_resolve = 1; - - if (u.url.len > 7 - && ngx_strncasecmp(u.url.data, (u_char *) "http://", 7) == 0) - { - u.url.len -= 7; - u.url.data += 7; - #if (NGX_SSL) - } else if (u.url.len > 8 - && ngx_strncasecmp(u.url.data, (u_char *) "https://", 8) == 0) - { - u.url.len -= 8; - u.url.data += 8; - u.default_port = 443; + if (u.default_port == 443) { http->ssl = ngx_external_ssl(vm, external); http->ssl_verify = ngx_external_ssl_verify(vm, external); + } #endif - } else { - njs_vm_error(vm, "unsupported URL 
prefix"); - goto fail; - } - - if (ngx_parse_url(pool, &u) != NGX_OK) { - njs_vm_error(vm, "invalid url"); - goto fail; - } - init = njs_arg(args, nargs, 2); - method = njs_str_value("GET"); - body = njs_str_value(""); - headers = NULL; - if (njs_value_is_object(init)) { - value = njs_vm_object_prop(vm, init, &method_key, &lvalue); - if (value != NULL && ngx_js_string(vm, value, &method) != NGX_OK) { - goto fail; - } - - headers = njs_vm_object_prop(vm, init, &headers_key, &headers_value); - if (headers != NULL && !njs_value_is_object(headers)) { - njs_vm_error(vm, "headers is not an object"); - goto fail; - } - - value = njs_vm_object_prop(vm, init, &body_key, &lvalue); - if (value != NULL && ngx_js_string(vm, value, &body) != NGX_OK) { - goto fail; - } - value = njs_vm_object_prop(vm, init, &buffer_size_key, &lvalue); if (value != NULL && ngx_js_integer(vm, value, &http->buffer_size) @@ -473,11 +721,11 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value #endif } + http->header_only = njs_strstr_eq(&request.method, &njs_str_value("HEAD")); + njs_chb_init(&http->chain, njs_vm_memory_pool(vm)); - http->header_only = njs_strstr_case_eq(&method, &njs_str_value("HEAD")); - - njs_chb_append(&http->chain, method.start, method.length); + njs_chb_append(&http->chain, request.method.start, request.method.length); njs_chb_append_literal(&http->chain, " "); if (u.uri.len == 0 || u.uri.data[0] != '/') { @@ -486,59 +734,34 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value njs_chb_append(&http->chain, u.uri.data, u.uri.len); njs_chb_append_literal(&http->chain, " HTTP/1.1" CRLF); + + njs_chb_append_literal(&http->chain, "Host: "); + njs_chb_append(&http->chain, u.host.data, u.host.len); + njs_chb_append_literal(&http->chain, CRLF); njs_chb_append_literal(&http->chain, "Connection: close" CRLF); - has_host = 0; - - if (headers != NULL) { - keys = njs_vm_object_keys(vm, headers, njs_value_arg(&lvalue)); - if (keys == NULL) { - goto fail; - } - - start = (njs_opaque_value_t *) 
njs_vm_array_start(vm, keys); - if (start == NULL) { - goto fail; - } - - (void) njs_vm_array_length(vm, keys, &length); - - for (i = 0; i < length; i++) { - if (ngx_js_string(vm, njs_value_arg(start), &name) != NGX_OK) { - goto fail; + part = &request.headers.header_list.part; + h = part->elts; + + for (i = 0; /* void */; i++) { + + if (i >= part->nelts) { + if (part->next == NULL) { + break; } - start++; - - value = njs_vm_object_prop(vm, headers, &name, &lvalue); - if (value == NULL) { - goto fail; - } - - if (njs_value_is_null_or_undefined(value)) { - continue; - } - - if (ngx_js_string(vm, value, &header) != NGX_OK) { - goto fail; - } - - if (name.length == 4 - && ngx_strncasecmp(name.start, (u_char *) "Host", 4) == 0) - { - has_host = 1; - } - - njs_chb_append(&http->chain, name.start, name.length); - njs_chb_append_literal(&http->chain, ": "); - njs_chb_append(&http->chain, header.start, header.length); - njs_chb_append_literal(&http->chain, CRLF); + part = part->next; + h = part->elts; + i = 0; } - } - - if (!has_host) { - njs_chb_append_literal(&http->chain, "Host: "); - njs_chb_append(&http->chain, u.host.data, u.host.len); + + if (h[i].hash == 0) { + continue; + } + + njs_chb_append(&http->chain, h[i].key.data, h[i].key.len); + njs_chb_append_literal(&http->chain, ": "); + njs_chb_append(&http->chain, h[i].value.data, h[i].value.len); njs_chb_append_literal(&http->chain, CRLF); } @@ -547,10 +770,10 @@ ngx_js_ext_fetch(njs_vm_t *vm, njs_value http->tls_name.len = u.host.len; #endif - if (body.length != 0) { + if (request.body.length != 0) { njs_chb_sprintf(&http->chain, 32, "Content-Length: %uz" CRLF CRLF, - body.length); - njs_chb_append(&http->chain, body.start, body.length); + request.body.length); + njs_chb_append(&http->chain, request.body.start, request.body.length); } else { njs_chb_append_literal(&http->chain, CRLF); @@ -609,6 +832,377 @@ fail: } +static njs_int_t +ngx_js_ext_headers_constructor(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, 
njs_index_t unused) +{ + ngx_int_t rc; + njs_int_t ret; + njs_value_t *init; + ngx_pool_t *pool; + ngx_js_headers_t *headers; + + pool = ngx_external_pool(vm, njs_vm_external_ptr(vm)); + + headers = ngx_palloc(pool, sizeof(ngx_js_headers_t)); + if (headers == NULL) { + njs_vm_memory_error(vm); + return NJS_ERROR; + } + + rc = ngx_list_init(&headers->header_list, pool, 4, sizeof(ngx_table_elt_t)); + if (rc != NGX_OK) { + njs_vm_memory_error(vm); + return NJS_ERROR; + } + + init = njs_arg(args, nargs, 1); + + if (njs_value_is_object(init)) { + ret = ngx_js_headers_fill(vm, headers, init); + if (ret != NJS_OK) { + return NJS_ERROR; + } + } + + return njs_vm_external_create(vm, njs_vm_retval(vm), + ngx_http_js_fetch_headers_proto_id, headers, + 0); +} + + +static njs_int_t +ngx_js_ext_request_constructor(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused) +{ + njs_int_t ret; + ngx_url_t u; + ngx_js_request_t *request; + + request = njs_mp_alloc(njs_vm_memory_pool(vm), sizeof(ngx_js_request_t)); + if (request == NULL) { + njs_vm_memory_error(vm); + return NJS_ERROR; + } + + ret = ngx_js_request_constructor(vm, request, &u, njs_vm_external_ptr(vm), + args, nargs); + if (ret != NJS_OK) { + return NJS_ERROR; + } + + return njs_vm_external_create(vm, njs_vm_retval(vm), + ngx_http_js_fetch_request_proto_id, request, + 0); +} + + +static njs_int_t +ngx_js_ext_response_constructor(njs_vm_t *vm, njs_value_t *args, + njs_uint_t nargs, njs_index_t unused) +{ + u_char *p, *end; + ngx_int_t rc; + njs_int_t ret; + njs_str_t bd; + ngx_pool_t *pool; + njs_value_t *body, *init, *value; + ngx_js_response_t *response; + njs_opaque_value_t lvalue; + + static const njs_str_t headers = njs_str("headers"); + static const njs_str_t status = njs_str("status"); + static const njs_str_t status_text = njs_str("statusText"); + + response = njs_mp_zalloc(njs_vm_memory_pool(vm), sizeof(ngx_js_response_t)); + if (response == NULL) { + njs_vm_memory_error(vm); + return NJS_ERROR; 
+    }
+
+    /*
+     * set by njs_mp_zalloc():
+     *
+     *  request->url.length = 0;
+     *  request->status_text.length = 0;
+     */
+
+    response->code = 200;
+    response->headers.guard = GUARD_RESPONSE;
+
+    pool = ngx_external_pool(vm, njs_vm_external_ptr(vm));
+
+    rc = ngx_list_init(&response->headers.header_list, pool, 4,
+                       sizeof(ngx_table_elt_t));
+    if (rc != NGX_OK) {
+        njs_vm_memory_error(vm);
+        return NJS_ERROR;
+    }
+
+    init = njs_arg(args, nargs, 2);
+
+    if (njs_value_is_object(init)) {
+        value = njs_vm_object_prop(vm, init, &status, &lvalue);
+        if (value != NULL) {
+            if (ngx_js_integer(vm, value, &response->code) != NGX_OK) {
+                njs_vm_error(vm, "invalid Response status");
+                return NJS_ERROR;
+            }
+
+            if (response->code < 200 || response->code > 599) {
+                njs_vm_error(vm, "status provided (%i) is outside of "
+                             "[200, 599] range", response->code);
+                return NJS_ERROR;
+            }
+        }
+
+        value = njs_vm_object_prop(vm, init, &status_text, &lvalue);
+        if (value != NULL) {
+            if (ngx_js_string(vm, value, &response->status_text) != NGX_OK) {
+                njs_vm_error(vm, "invalid Response statusText");
+                return NJS_ERROR;
+            }
+
+            p = response->status_text.start;
+            end = p + response->status_text.length;
+
+            while (p < end) {
+                if (*p != '\t' && *p < ' ') {
+                    njs_vm_error(vm, "invalid Response statusText");
+                    return NJS_ERROR;
+                }
+
+                p++;
+            }
+        }
+
+        value = njs_vm_object_prop(vm, init, &headers, &lvalue);
+        if (value != NULL) {
+            if (!njs_value_is_object(value)) {
+                njs_vm_error(vm, "Headers is not an object");
+                return NJS_ERROR;
+            }
+
+            ret = ngx_js_headers_fill(vm, &response->headers, value);
+            if (ret != NJS_OK) {
+                return NJS_ERROR;
+            }
+        }
+    }
+
+    njs_chb_init(&response->chain, njs_vm_memory_pool(vm));
+
+    body = njs_arg(args, nargs, 1);
+
+    if (!njs_value_is_null_or_undefined(body)) {
+        if (ngx_js_string(vm, body, &bd) != NGX_OK) {
+            njs_vm_error(vm, "invalid Response body");
+            return NJS_ERROR;
+        }
+
+        njs_chb_append(&response->chain, bd.start, bd.length);
+
+        if (njs_value_is_string(body)) {
+            ret = ngx_js_headers_append(vm, &response->headers,
+                                        (u_char *) "Content-Type",
+                                        njs_length("Content-Type"),
+                                        (u_char *) "text/plain;charset=UTF-8",
+                                        njs_length("text/plain;charset=UTF-8"));
+            if (ret != NJS_OK) {
+                return NJS_ERROR;
+            }
+        }
+    }
+
+    return njs_vm_external_create(vm, njs_vm_retval(vm),
+                                  ngx_http_js_fetch_response_proto_id, response,
+                                  0);
+}
+
+
+static njs_int_t
+ngx_js_method_process(njs_vm_t *vm, ngx_js_request_t *request)
+{
+    u_char           *s, *p;
+    const njs_str_t  *m;
+
+    static const njs_str_t forbidden[] = {
+        njs_str("CONNECT"),
+        njs_str("TRACE"),
+        njs_str("TRACK"),
+        njs_null_str,
+    };
+
+    static const njs_str_t to_normalize[] = {
+        njs_str("DELETE"),
+        njs_str("GET"),
+        njs_str("HEAD"),
+        njs_str("OPTIONS"),
+        njs_str("POST"),
+        njs_str("PUT"),
+        njs_null_str,
+    };
+
+    for (m = &forbidden[0]; m->length != 0; m++) {
+        if (njs_strstr_case_eq(&request->method, m)) {
+            njs_vm_error(vm, "forbidden method: %V", m);
+            return NJS_ERROR;
+        }
+    }
+
+    for (m = &to_normalize[0]; m->length != 0; m++) {
+        if (njs_strstr_case_eq(&request->method, m)) {
+            s = &request->m[0];
+            p = m->start;
+
+            while (*p != '\0') {
+                *s++ = njs_upper_case(*p++);
+            }
+
+            request->method.start = &request->m[0];
+            request->method.length = m->length;
+            break;
+        }
+    }
+
+    return NJS_OK;
+}
+
+
+static njs_int_t
+ngx_js_headers_inherit(njs_vm_t *vm, ngx_js_headers_t *headers,
+    ngx_js_headers_t *orig)
+{
+    njs_int_t         ret;
+    ngx_uint_t        i;
+    ngx_list_part_t  *part;
+    ngx_table_elt_t  *h;
+
+    part = &orig->header_list.part;
+    h = part->elts;
+
+    for (i = 0; /* void */; i++) {
+
+        if (i >= part->nelts) {
+            if (part->next == NULL) {
+                break;
+            }
+
+            part = part->next;
+            h = part->elts;
+            i = 0;
+        }
+
+        if (h[i].hash == 0) {

From mdounin at mdounin.ru  Tue Dec 13 17:49:18 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 13 Dec 2022 20:49:18 +0300
Subject: [PATCH 1 of 6] QUIC: ignore server address while looking up a
 connection
In-Reply-To:
 <1038d7300c29eea02b47.1670578727@ip-10-1-18-114.eu-central-1.compute.internal>
References:
 <1038d7300c29eea02b47.1670578727@ip-10-1-18-114.eu-central-1.compute.internal>
Message-ID: 

Hello!

On Fri, Dec 09, 2022 at 09:38:47AM +0000, Roman Arutyunyan wrote:

> # HG changeset patch
> # User Roman Arutyunyan
> # Date 1670322119 0
> #      Tue Dec 06 10:21:59 2022 +0000
> # Branch quic
> # Node ID 1038d7300c29eea02b47eac3f205e293b1e55f5b
> # Parent  b87a0dbc1150f415def5bc1e1f00d02b33519026
> QUIC: ignore server address while looking up a connection.
>
> The server connection check was copied from the common UDP code in
> c2f5d79cde64.  In QUIC it does not make much sense though.  Technically
> client is not allowed to migrate to a different server address.
> However, migrating within a single wildcard listening does not seem to
> affect anything.

Wildcard address might be used for a catch-all listening socket, "if
there are several listen directives with the same port but different
addresses, and one of the listen directives listens on all addresses
for the given port (*:port)" (http://nginx.org/r/listen).

For example, in a configuration like the following:

    server {
        listen 80;
        return 404;
    }

    server {
        listen 127.0.0.1:80;
        return 200 secret;
    }

This will create just one listening socket on *:80, but only clients
connecting to 127.0.0.1:80 will be able to see the secret.

Distinction between such connections in case of http happens in
ngx_http_init_connection(), see "if (port->naddrs > 1)".  In stream
and mail, similar ifs are in ngx_stream_init_connection() and
ngx_mail_init_connection().

This distinction is expected to be equivalent to using different
listening sockets as long as socket-specific options are identical.
Distinct sockets can be requested with

    listen 127.0.0.1:80 bind;

which is expected to result in exactly equivalent behaviour, but with
distinct listening sockets.
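To illustrate, a sketch of the configuration above with the "bind"
parameter added (the responses are made up for the example; the listen
and return directives are used as documented):

```nginx
server {
    listen 80;
    return 404;
}

server {
    # "bind" requests a separate socket bound to 127.0.0.1:80,
    # instead of sharing the wildcard *:80 socket
    listen 127.0.0.1:80 bind;
    return 200 secret;
}
```

With "bind", packets to 127.0.0.1:80 are demultiplexed by the kernel
into their own socket, instead of by the naddrs check above.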
Not sure how this can affect QUIC, but the change essentially removes
distinction between packets sent to different listening sockets.  This
might not be a good idea from a security point of view.

As a trivial example, one can block packets to a particular server
address on a firewall (in an attempt to stop an attack), with something
like "block from any to 192.0.2.1", assuming it will stop traffic to
the server in question.  Still, with the proposed change, it will be
possible to access resources with a previously established QUIC
connection as long as the attacker knows other IP addresses used on
the same physical server.

-- 
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru  Tue Dec 13 17:57:12 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 13 Dec 2022 20:57:12 +0300
Subject: [PATCH 4 of 6] QUIC: never disable QUIC socket events
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Fri, Dec 09, 2022 at 09:38:50AM +0000, Roman Arutyunyan wrote:

> # HG changeset patch
> # User Roman Arutyunyan
> # Date 1670256830 0
> #      Mon Dec 05 16:13:50 2022 +0000
> # Branch quic
> # Node ID de8bcaea559d151f5945d0a2e06c61b56a26a52b
> # Parent  b5c30f16ec8ba3ace2f58d77d294d9b355bf3267
> QUIC: never disable QUIC socket events.
>
> Unlike TCP accept(), current QUIC implementation does not require new
> file descriptors for new clients.  Also, it does not work with accept
> mutex since it normally requires reuseport option.
>
> diff --git a/src/event/ngx_event_accept.c b/src/event/ngx_event_accept.c
> --- a/src/event/ngx_event_accept.c
> +++ b/src/event/ngx_event_accept.c
> @@ -416,6 +416,12 @@ ngx_disable_accept_events(ngx_cycle_t *c
>
>  #endif
>
> +#if (NGX_QUIC)
> +        if (ls[i].quic) {
> +            continue;
> +        }
> +#endif
> +

As long as the reuseport option is used, this should happen
automatically.
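For context, a sketch of the reuseport setup being referred to — a
hypothetical configuration, with the "quic" listen parameter as used in
the QUIC development branch and made-up certificate paths:

```nginx
server {
    # "reuseport" creates a per-worker listening socket, so the
    # kernel distributes incoming QUIC packets between workers and
    # the accept mutex is never involved for these sockets
    listen 443 quic reuseport;
    listen 443 ssl;

    ssl_certificate     example.crt;
    ssl_certificate_key example.key;
}
```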
If it's not, it might be a bad idea not to disable accepting new QUIC
connections, given that QUIC connections can still use file descriptors
for other tasks, such as opening files and connecting to backends.

On the other hand, existing QUIC implementation cannot receive packets
from previously created connections if socket events are disabled.  And
this might be a good reason for the change.  If this is the reason,
shouldn't it be in the commit log?

Also, it looks like with client sockets, as introduced later in this
patch series, it would be appropriate to actually disable events on
QUIC listening sockets.

>          if (ngx_del_event(c->read, NGX_READ_EVENT, NGX_DISABLE_EVENT)
>              == NGX_ERROR)
>          {
>
> _______________________________________________
> nginx-devel mailing list -- nginx-devel at nginx.org
> To unsubscribe send an email to nginx-devel-leave at nginx.org

-- 
Maxim Dounin
http://mdounin.ru/

From osa at freebsd.org.ru  Wed Dec 14 01:29:32 2022
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Wed, 14 Dec 2022 04:29:32 +0300
Subject: [PATCH] fix "the the" typo
Message-ID: 

Hi,

here's the patch to fix "the the" typo.

Thank you.

-- 
Sergey A. Osokin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ngx_http_upstream_module.xml.diff
Type: text/x-diff
Size: 535 bytes
Desc: not available
URL: 

From yar at nginx.com  Wed Dec 14 09:53:16 2022
From: yar at nginx.com (Yaroslav Zhuravlev)
Date: Wed, 14 Dec 2022 09:53:16 +0000
Subject: [PATCH] fix "the the" typo
In-Reply-To: 
References: 
Message-ID: 

Hi Sergey,

Thank you for the diff, it also requires a commit log message, smth like
"Fixed typo in the sticky directive", otherwise the diff looks good.

> On 14 Dec 2022, at 01:29, Sergey A. Osokin wrote:
>
> Hi,
>
> here's the patch to fix "the the" typo.
>
> Thank you.
>
> --
> Sergey A. Osokin
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx-devel

From osa at FreeBSD.org.ru  Wed Dec 14 15:20:33 2022
From: osa at FreeBSD.org.ru (=?iso-8859-1?q?Sergey_A=2E_Osokin?=)
Date: Wed, 14 Dec 2022 18:20:33 +0300
Subject: [PATCH] Fixed typo in the sticky directive
Message-ID: <72b76e78e2845278b7a9.1671031233@mp2.macomnet.net>

 xml/en/docs/http/ngx_http_upstream_module.xml |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)


# HG changeset patch
# User Sergey A. Osokin
# Date 1671031009 -10800
#      Wed Dec 14 18:16:49 2022 +0300
# Node ID 72b76e78e2845278b7a93bffe953a17077cd5f70
# Parent  178f55cf631aa1c87281667b63e6cc3139459d8b
Fixed typo in the sticky directive

diff -r 178f55cf631a -r 72b76e78e284 xml/en/docs/http/ngx_http_upstream_module.xml
--- a/xml/en/docs/http/ngx_http_upstream_module.xml	Tue Dec 13 19:01:40 2022 +0300
+++ b/xml/en/docs/http/ngx_http_upstream_module.xml	Wed Dec 14 18:16:49 2022 +0300
@@ -1097,7 +1097,7 @@
 Strict,
 Lax, or
 None,
-the the corresponding value will be assigned,
+the corresponding value will be assigned,
 otherwise the Strict value will be assigned.

From osa at FreeBSD.org.ru  Wed Dec 14 15:27:07 2022
From: osa at FreeBSD.org.ru (=?iso-8859-1?q?Sergey_A=2E_Osokin?=)
Date: Wed, 14 Dec 2022 18:27:07 +0300
Subject: [PATCH] Fix typos in the CHANGES files
Message-ID: 

 text/en/CHANGES-0.5 |  2 +-
 text/en/CHANGES-0.6 |  6 +++---
 text/en/CHANGES-0.7 |  6 +++---
 text/en/CHANGES-0.8 |  6 +++---
 4 files changed, 10 insertions(+), 10 deletions(-)


# HG changeset patch
# User Sergey A. Osokin
# Date 1671031600 -10800
#      Wed Dec 14 18:26:40 2022 +0300
# Node ID dc163738d6b50a29d150546cbf01181de3535c8f
# Parent  72b76e78e2845278b7a93bffe953a17077cd5f70
Fix typos in the CHANGES files

diff -r 72b76e78e284 -r dc163738d6b5 text/en/CHANGES-0.5
--- a/text/en/CHANGES-0.5	Wed Dec 14 18:16:49 2022 +0300
+++ b/text/en/CHANGES-0.5	Wed Dec 14 18:26:40 2022 +0300
@@ -2115,7 +2115,7 @@
     *) Bugfix: the segmentation fault occurred or the worker process may
        got caught in an endless loop if the proxied or FastCGI server sent
        the "Cache-Control" header line and the "expires" directive was
-       used; in the proxied mode the the bug had appeared in 0.1.29.
+       used; in the proxied mode the bug had appeared in 0.1.29.
 
 
 Changes with nginx 0.1.42                                        23 Aug 2005

diff -r 72b76e78e284 -r dc163738d6b5 text/en/CHANGES-0.6
--- a/text/en/CHANGES-0.6	Wed Dec 14 18:16:49 2022 +0300
+++ b/text/en/CHANGES-0.6	Wed Dec 14 18:26:40 2022 +0300
@@ -2604,7 +2604,7 @@
     *) Bugfix: the segmentation fault occurred or the worker process may
        got caught in an endless loop if the proxied or FastCGI server sent
        the "Cache-Control" header line and the "expires" directive was
-       used; in the proxied mode the the bug had appeared in 0.1.29.
+       used; in the proxied mode the bug had appeared in 0.1.29.
 
 
 Changes with nginx 0.1.42                                        23 Aug 2005
@@ -2832,7 +2832,7 @@
 
     *) Bugfix: if the length of the response part received at once from
        proxied or FastCGI server was equal to 500, then nginx returns the
-       500 response code; in proxy mode the the bug had appeared in 0.1.29
+       500 response code; in proxy mode the bug had appeared in 0.1.29
        only.
 
     *) Bugfix: nginx did not consider the directives with 8 or 9 parameters
@@ -3076,7 +3076,7 @@
 
     *) Bugfix: nginx could not be built on NetBSD 2.0.
 
-    *) Bugfix: the timeout may occur while reading of the the client
+    *) Bugfix: the timeout may occur while reading of the client
        request body via SSL connections.
diff -r 72b76e78e284 -r dc163738d6b5 text/en/CHANGES-0.7
--- a/text/en/CHANGES-0.7	Wed Dec 14 18:16:49 2022 +0300
+++ b/text/en/CHANGES-0.7	Wed Dec 14 18:26:40 2022 +0300
@@ -3805,7 +3805,7 @@
     *) Bugfix: the segmentation fault occurred or the worker process may
        got caught in an endless loop if the proxied or FastCGI server sent
        the "Cache-Control" header line and the "expires" directive was
-       used; in the proxied mode the the bug had appeared in 0.1.29.
+       used; in the proxied mode the bug had appeared in 0.1.29.
 
 
 Changes with nginx 0.1.42                                        23 Aug 2005
@@ -4033,7 +4033,7 @@
 
     *) Bugfix: if the length of the response part received at once from
        proxied or FastCGI server was equal to 500, then nginx returns the
-       500 response code; in proxy mode the the bug had appeared in 0.1.29
+       500 response code; in proxy mode the bug had appeared in 0.1.29
        only.
 
     *) Bugfix: nginx did not consider the directives with 8 or 9 parameters
@@ -4277,7 +4277,7 @@
 
     *) Bugfix: nginx could not be built on NetBSD 2.0.
 
-    *) Bugfix: the timeout may occur while reading of the the client
+    *) Bugfix: the timeout may occur while reading of the client
        request body via SSL connections.

diff -r 72b76e78e284 -r dc163738d6b5 text/en/CHANGES-0.8
--- a/text/en/CHANGES-0.8	Wed Dec 14 18:16:49 2022 +0300
+++ b/text/en/CHANGES-0.8	Wed Dec 14 18:26:40 2022 +0300
@@ -4301,7 +4301,7 @@
     *) Bugfix: the segmentation fault occurred or the worker process may
        got caught in an endless loop if the proxied or FastCGI server sent
        the "Cache-Control" header line and the "expires" directive was
-       used; in the proxied mode the the bug had appeared in 0.1.29.
+       used; in the proxied mode the bug had appeared in 0.1.29.
 
 
 Changes with nginx 0.1.42                                        23 Aug 2005
@@ -4529,7 +4529,7 @@
 
     *) Bugfix: if the length of the response part received at once from
        proxied or FastCGI server was equal to 500, then nginx returns the
-       500 response code; in proxy mode the the bug had appeared in 0.1.29
+       500 response code; in proxy mode the bug had appeared in 0.1.29
        only.
 
     *) Bugfix: nginx did not consider the directives with 8 or 9 parameters
@@ -4773,7 +4773,7 @@
 
     *) Bugfix: nginx could not be built on NetBSD 2.0.
 
-    *) Bugfix: the timeout may occur while reading of the the client
+    *) Bugfix: the timeout may occur while reading of the client
        request body via SSL connections.

From yar at nginx.com  Wed Dec 14 15:57:27 2022
From: yar at nginx.com (Yaroslav Zhuravlev)
Date: Wed, 14 Dec 2022 15:57:27 +0000
Subject: [PATCH] Fixed typo in the sticky directive
In-Reply-To: <72b76e78e2845278b7a9.1671031233@mp2.macomnet.net>
References: <72b76e78e2845278b7a9.1671031233@mp2.macomnet.net>
Message-ID: 

[...]

> # HG changeset patch
> # User Sergey A. Osokin
> # Date 1671031009 -10800
> #      Wed Dec 14 18:16:49 2022 +0300
> # Node ID 72b76e78e2845278b7a93bffe953a17077cd5f70
> # Parent  178f55cf631aa1c87281667b63e6cc3139459d8b
> Fixed typo in the sticky directive
>
> diff -r 178f55cf631a -r 72b76e78e284 xml/en/docs/http/ngx_http_upstream_module.xml
> --- a/xml/en/docs/http/ngx_http_upstream_module.xml	Tue Dec 13 19:01:40 2022 +0300
> +++ b/xml/en/docs/http/ngx_http_upstream_module.xml	Wed Dec 14 18:16:49 2022 +0300
> @@ -1097,7 +1097,7 @@
>  Strict,
>  Lax, or
>  None,
> -the the corresponding value will be assigned,
> +the corresponding value will be assigned,
>  otherwise the Strict value will be assigned.
>

Thank you, patch committed:
http://hg.nginx.org/nginx.org/rev/a2708cf6ebdb

[...]

From mdounin at mdounin.ru  Wed Dec 14 16:30:05 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 14 Dec 2022 19:30:05 +0300
Subject: [PATCH] Fix typos in the CHANGES files
In-Reply-To: 
References: 
Message-ID: 

Hello!

On Wed, Dec 14, 2022 at 06:27:07PM +0300, Sergey A. Osokin wrote:

>  text/en/CHANGES-0.5 |  2 +-
>  text/en/CHANGES-0.6 |  6 +++---
>  text/en/CHANGES-0.7 |  6 +++---
>  text/en/CHANGES-0.8 |  6 +++---
>  4 files changed, 10 insertions(+), 10 deletions(-)
>
>
> # HG changeset patch
> # User Sergey A. Osokin
> # Date 1671031600 -10800
> #      Wed Dec 14 18:26:40 2022 +0300
> # Node ID dc163738d6b50a29d150546cbf01181de3535c8f
> # Parent  72b76e78e2845278b7a93bffe953a17077cd5f70
> Fix typos in the CHANGES files
>
> diff -r 72b76e78e284 -r dc163738d6b5 text/en/CHANGES-0.5
> --- a/text/en/CHANGES-0.5	Wed Dec 14 18:16:49 2022 +0300
> +++ b/text/en/CHANGES-0.5	Wed Dec 14 18:26:40 2022 +0300
> @@ -2115,7 +2115,7 @@
>      *) Bugfix: the segmentation fault occurred or the worker process may
>         got caught in an endless loop if the proxied or FastCGI server sent
>         the "Cache-Control" header line and the "expires" directive was
> -       used; in the proxied mode the the bug had appeared in 0.1.29.
> +       used; in the proxied mode the bug had appeared in 0.1.29.
>
>
>  Changes with nginx 0.1.42                                        23 Aug 2005

Please avoid direct modifications of the CHANGES files, these are
generated from docs/xml/nginx/changes.xml in nginx when a release
on the appropriate branch is made.

Note well that this particular typo is already fixed in
4378:6af0f5881f0a (https://hg.nginx.org/nginx/rev/6af0f5881f0a).
It, however, remains visible in the CHANGES files from old
branches, where the fix isn't present.
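For reference, a sketch of what such an entry looks like in
docs/xml/nginx/changes.xml — the element and attribute names here are
recalled from memory and may differ from the actual schema:

```xml
<change type="bugfix">
<para>
the timeout may occur while reading of the client request body via SSL
connections.
</para>
</change>
```

The CHANGES files are then rendered from these entries at release time,
which is why fixes belong in the XML source rather than in the
generated files.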
-- 
Maxim Dounin
http://mdounin.ru/

From pluknet at nginx.com  Thu Dec 15 02:13:41 2022
From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=)
Date: Thu, 15 Dec 2022 06:13:41 +0400
Subject: [PATCH] SSL: SSL_CTX_set_tlsext_ticket_key_cb() deprecated in
 OpenSSL 3.0
Message-ID: <8fbae86083f2efda8b4e.1671070421@enoparse.local>

# HG changeset patch
# User Sergey Kandaurov
# Date 1671069897 -14400
#      Thu Dec 15 06:04:57 2022 +0400
# Node ID 8fbae86083f2efda8b4e079b3bda148dec220323
# Parent  c38588d8376b77fc2f56f90ca16533031b235491
SSL: SSL_CTX_set_tlsext_ticket_key_cb() deprecated in OpenSSL 3.0.

It becomes hidden when OpenSSL is built with OPENSSL_NO_DEPRECATED.
While this is manageable for the ssl_session_ticket_key directive,
rotation of ticket keys stored in shared memory is silently disabled.

Switch to SSL_CTX_set_tlsext_ticket_key_evp_cb() whenever available.
A macro similar to SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB isn't provided,
so the feature test uses OSSL_PARAM_octet_string as a close relative.
Using the documented macro OSSL_MAC_PARAM_KEY is considered worthless
as this requires to conditionally include an additional OpenSSL header.
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c
+++ b/src/event/ngx_event_openssl.c
@@ -12,6 +12,14 @@
 #define NGX_SSL_PASSWORD_BUFFER_SIZE  4096
 
 
+#ifdef OSSL_PARAM_octet_string
+#define ngx_ssl_mac_ctx            EVP_MAC_CTX
+#define ngx_ssl_ctx_ticket_key_cb  SSL_CTX_set_tlsext_ticket_key_evp_cb
+#elif defined SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB
+#define ngx_ssl_mac_ctx            HMAC_CTX
+#define ngx_ssl_ctx_ticket_key_cb  SSL_CTX_set_tlsext_ticket_key_cb
+#endif
+
+
 typedef struct {
     ngx_uint_t  engine;   /* unsigned  engine:1; */
@@ -70,10 +78,10 @@ static void ngx_ssl_expire_sessions(ngx_
 static void ngx_ssl_session_rbtree_insert_value(ngx_rbtree_node_t *temp,
     ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel);
 
-#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB
+#ifdef ngx_ssl_ctx_ticket_key_cb
 static int ngx_ssl_ticket_key_callback(ngx_ssl_conn_t *ssl_conn,
     unsigned char *name, unsigned char *iv, EVP_CIPHER_CTX *ectx,
-    HMAC_CTX *hctx, int enc);
+    ngx_ssl_mac_ctx *hctx, int enc);
 static ngx_int_t ngx_ssl_rotate_ticket_keys(SSL_CTX *ssl_ctx, ngx_log_t *log);
 static void ngx_ssl_ticket_keys_cleanup(void *data);
 #endif
@@ -4281,7 +4289,7 @@ ngx_ssl_session_rbtree_insert_value(ngx_
 }
 
 
-#ifdef SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB
+#ifdef ngx_ssl_ctx_ticket_key_cb
 
 ngx_int_t
 ngx_ssl_session_ticket_keys(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_array_t *paths)
@@ -4323,7 +4331,7 @@ ngx_ssl_session_ticket_keys(ngx_conf_t *
         return NGX_ERROR;
     }
 
-    if (SSL_CTX_set_tlsext_ticket_key_cb(ssl->ctx, ngx_ssl_ticket_key_callback)
+    if (ngx_ssl_ctx_ticket_key_cb(ssl->ctx, ngx_ssl_ticket_key_callback)
         == 0)
     {
         ngx_log_error(NGX_LOG_WARN, cf->log, 0,
@@ -4445,10 +4453,13 @@ failed:
 static int
 ngx_ssl_ticket_key_callback(ngx_ssl_conn_t *ssl_conn,
     unsigned char *name, unsigned char *iv, EVP_CIPHER_CTX *ectx,
-    HMAC_CTX *hctx, int enc)
+    ngx_ssl_mac_ctx *hctx, int enc)
 {
     size_t                 size;
     SSL_CTX               *ssl_ctx;
+#ifdef OSSL_PARAM_octet_string
+    OSSL_PARAM             params[3];
+#endif
     ngx_uint_t             i;
     ngx_array_t           *keys;
     ngx_connection_t      *c;
@@ -4504,7 +4515,22 @@ ngx_ssl_ticket_key_callback(ngx_ssl_conn
             return -1;
         }
 
-#if OPENSSL_VERSION_NUMBER >= 0x10000000L
+#ifdef OSSL_PARAM_octet_string
+
+        params[0] = OSSL_PARAM_construct_octet_string("key",
+                                                      key[0].hmac_key, size);
+        params[1] = OSSL_PARAM_construct_utf8_string("digest",
+                                                     (char *) EVP_MD_name(digest),
+                                                     0);
+        params[2] = OSSL_PARAM_construct_end();
+
+        if (!EVP_MAC_CTX_set_params(hctx, params)) {
+            ngx_ssl_error(NGX_LOG_ALERT, c->log, 0,
+                          "EVP_MAC_CTX_set_params() failed");
+            return -1;
+        }
+
+#elif OPENSSL_VERSION_NUMBER >= 0x10000000L
         if (HMAC_Init_ex(hctx, key[0].hmac_key, size, digest, NULL) != 1) {
             ngx_ssl_error(NGX_LOG_ALERT, c->log, 0, "HMAC_Init_ex() failed");
             return -1;
@@ -4547,7 +4573,22 @@ ngx_ssl_ticket_key_callback(ngx_ssl_conn
             size = 32;
         }
 
-#if OPENSSL_VERSION_NUMBER >= 0x10000000L
+#ifdef OSSL_PARAM_octet_string
+
+        params[0] = OSSL_PARAM_construct_octet_string("key",
+                                                      key[i].hmac_key, size);
+        params[1] = OSSL_PARAM_construct_utf8_string("digest",
+                                                     (char *) EVP_MD_name(digest),
+                                                     0);
+        params[2] = OSSL_PARAM_construct_end();
+
+        if (!EVP_MAC_CTX_set_params(hctx, params)) {
+            ngx_ssl_error(NGX_LOG_ALERT, c->log, 0,
+                          "EVP_MAC_CTX_set_params() failed");
+            return -1;
+        }
+
+#elif OPENSSL_VERSION_NUMBER >= 0x10000000L
         if (HMAC_Init_ex(hctx, key[i].hmac_key, size, digest, NULL) != 1) {
             ngx_ssl_error(NGX_LOG_ALERT, c->log, 0, "HMAC_Init_ex() failed");
             return -1;

From pluknet at nginx.com  Thu Dec 15 02:14:34 2022
From: pluknet at nginx.com (=?iso-8859-1?q?Sergey_Kandaurov?=)
Date: Thu, 15 Dec 2022 06:14:34 +0400
Subject: [PATCH] Tests: ssl session ticket key rotation tests
Message-ID: <82dc9c3a4ec81636e42e.1671070474@enoparse.local>

# HG changeset patch
# User Sergey Kandaurov
# Date 1671070326 -14400
#      Thu Dec 15 06:12:06 2022 +0400
# Node ID 82dc9c3a4ec81636e42e1417ce6661f3b0e4d358
# Parent  ff6c99824947575d4a8d3c9aeea8d6b68e0ace29
Tests: ssl session ticket key rotation tests.
diff --git a/ssl_session_ticket_key.t b/ssl_session_ticket_key.t
new file mode 100644
--- /dev/null
+++ b/ssl_session_ticket_key.t
@@ -0,0 +1,141 @@
+#!/usr/bin/perl
+
+# (C) Sergey Kandaurov
+# (C) Nginx, Inc.
+
+# Tests for rotation of SSL session ticket keys.
+
+###############################################################################
+
+use warnings;
+use strict;
+
+use Test::More;
+
+BEGIN { use FindBin; chdir($FindBin::Bin); }
+
+use lib 'lib';
+use Test::Nginx;
+
+###############################################################################
+
+select STDERR; $| = 1;
+select STDOUT; $| = 1;
+
+eval {
+	require Net::SSLeay;
+	die if $Net::SSLeay::VERSION < 1.86;
+	Net::SSLeay::load_error_strings();
+	Net::SSLeay::SSLeay_add_ssl_algorithms();
+	Net::SSLeay::randomize();
+};
+plan(skip_all => 'Net::SSLeay version => 1.86 required') if $@;
+
+my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl')
+	->plan(2)->write_file_expand('nginx.conf', <<'EOF');
+
+%%TEST_GLOBALS%%
+
+daemon off;
+worker_processes 2;
+
+events {
+}
+
+http {
+    %%TEST_GLOBALS_HTTP%%
+
+    ssl_certificate_key localhost.key;
+    ssl_certificate localhost.crt;
+
+    server {
+        listen       127.0.0.1:8080 ssl;
+        server_name  localhost;
+
+        ssl_session_cache shared:SSL:1m;
+        ssl_session_timeout 2;
+    }
+}
+
+EOF
+
+$t->write_file('openssl.conf', <testdir();
+
+foreach my $name ('localhost') {
+	system('openssl req -x509 -new '
+		. "-config $d/openssl.conf -subj /CN=$name/ "
+		. "-out $d/$name.crt -keyout $d/$name.key "
+		. ">>$d/openssl.out 2>&1") == 0
+		or die "Can't create certificate for $name: $!\n";
+}
+
+$t->run();
+
+###############################################################################
+
+# any test can fail depending on which worker process served connection,
+# with a single worker process it is only the 2nd test that fails
+local $TODO = 'not yet' unless $t->has_version('1.23.2');
+
+my $ses = get_ssl_session();
+my $key = get_ticket_key_name($ses);
+
+sleep 1;
+
+$ses = get_ssl_session($ses);
+is(get_ticket_key_name($ses), $key, 'ticket key match');
+
+sleep 2;
+
+$ses = get_ssl_session($ses);
+isnt(get_ticket_key_name($ses), $key, 'ticket key next');
+
+###############################################################################
+
+sub get_ticket_key_name {
+	my ($ses) = @_;
+	my $asn = Net::SSLeay::i2d_SSL_SESSION($ses);
+	my $any = qr/[\x00-\xff]/;
+next:
+	# tag(10) | len{2} | OCTETSTRING(4) | len{2} | ticket(key_name|..)
+	$asn =~ /\xaa\x81($any)\x04\x81($any)($any{16})/g;
+	return if !defined $3;
+	goto next if unpack("C", $1) - unpack("C", $2) != 3;
+	unpack "H*", $3;
+}
+
+sub get_ssl_session {
+	my ($ses) = @_;
+
+	my ($s, $ssl) = get_ssl_socket(ses => $ses);
+
+	Net::SSLeay::write($ssl, <new('127.0.0.1:' . port(8080));
+	my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!");
+	my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!");
+	Net::SSLeay::set_session($ssl, $extra{ses}) if $extra{ses};
+	Net::SSLeay::set_fd($ssl, fileno($s));
+	Net::SSLeay::connect($ssl) or die("ssl connect");
+	return ($s, $ssl);
+}
+
+###############################################################################

From mdounin at mdounin.ru  Thu Dec 15 04:25:55 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 15 Dec 2022 07:25:55 +0300
Subject: [PATCH] SSL: SSL_CTX_set_tlsext_ticket_key_cb() deprecated in
 OpenSSL 3.0
In-Reply-To: <8fbae86083f2efda8b4e.1671070421@enoparse.local>
References: <8fbae86083f2efda8b4e.1671070421@enoparse.local>
Message-ID: 

Hello!

On Thu, Dec 15, 2022 at 06:13:41AM +0400, Sergey Kandaurov wrote:

> # HG changeset patch
> # User Sergey Kandaurov
> # Date 1671069897 -14400
> #      Thu Dec 15 06:04:57 2022 +0400
> # Node ID 8fbae86083f2efda8b4e079b3bda148dec220323
> # Parent  c38588d8376b77fc2f56f90ca16533031b235491
> SSL: SSL_CTX_set_tlsext_ticket_key_cb() deprecated in OpenSSL 3.0.
>
> It becomes hidden when OpenSSL is built with OPENSSL_NO_DEPRECATED.
> While this is manageable for the ssl_session_ticket_key directive,
> rotation of ticket keys stored in shared memory is silently disabled.
>
> Switch to SSL_CTX_set_tlsext_ticket_key_evp_cb() whenever available.
> A macro similar to SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB isn't provided,
> so the feature test uses OSSL_PARAM_octet_string as a close relative.
> Using the documented macro OSSL_MAC_PARAM_KEY is considered worthless
> as this requires to conditionally include an additional OpenSSL header.

Do we really need this?
Given the amount of various API changes and deprecations in
OpenSSL, especially in OpenSSL 3.0, and the quality of new APIs
being introduced (see 61011bfcdb49 for an example, and the whole
early data stuff for another one), I would rather refrain from
supporting this at least till SSL_CTX_set_tlsext_ticket_key_cb(),
which is also the only API supported by LibreSSL and BoringSSL, is
actually removed.

Note well that building nginx with OpenSSL 3.0 and
OPENSSL_NO_DEPRECATED defined will simply fail.

If for some reason OpenSSL was built with "no-deprecated", I
believe it's completely expected that some advanced features using
APIs not present in the build will not be available, and it's sole
responsibility of whoever created such a build.

[...]

-- 
Maxim Dounin
http://mdounin.ru/

From yefei.dyf at alibaba-inc.com  Thu Dec 15 11:53:22 2022
From: yefei.dyf at alibaba-inc.com (=?UTF-8?B?5p2c5Y+26aOeKOa3ruWPtik=?=)
Date: Thu, 15 Dec 2022 19:53:22 +0800
Subject: =?UTF-8?B?Rml4ZWQgdGhlIGNvZGUgc3R5bGU=?=
Message-ID: 

Hello!

# HG changeset patch
# User BullerDu
# Date 1671104973 -28800
#      Thu Dec 15 19:49:33 2022 +0800
# Branch bugfix_style
# Node ID 43aa2b889da22758b567964667e95071ad453e59
# Parent  c38588d8376b77fc2f56f90ca16533031b235491
Style.

diff -r c38588d8376b -r 43aa2b889da2 src/core/ngx_conf_file.c
--- a/src/core/ngx_conf_file.c	Tue Dec 13 18:53:53 2022 +0300
+++ b/src/core/ngx_conf_file.c	Thu Dec 15 19:49:33 2022 +0800
@@ -544,8 +544,8 @@
         }
 
         ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
-                          "unexpected end of file, "
-                          "expecting \";\" or \"}\"");
+                           "unexpected end of file, "
+                           "expecting \";\" or \"}\"");
 
         return NGX_ERROR;
     }

diff -r c38588d8376b -r 43aa2b889da2 src/event/ngx_event_udp.c
--- a/src/event/ngx_event_udp.c	Tue Dec 13 18:53:53 2022 +0300
+++ b/src/event/ngx_event_udp.c	Thu Dec 15 19:49:33 2022 +0800
@@ -88,7 +88,7 @@
             msg.msg_controllen = sizeof(msg_control);
 
             ngx_memzero(&msg_control, sizeof(msg_control));
-       }
+        }
 
 #endif
 
         n = recvmsg(lc->fd, &msg, 0);

diff -r c38588d8376b -r 43aa2b889da2 src/os/unix/ngx_udp_sendmsg_chain.c
--- a/src/os/unix/ngx_udp_sendmsg_chain.c	Tue Dec 13 18:53:53 2022 +0300
+++ b/src/os/unix/ngx_udp_sendmsg_chain.c	Thu Dec 15 19:49:33 2022 +0800
@@ -335,7 +335,7 @@
 
 #endif
 
- #if (NGX_HAVE_IP_RECVDSTADDR)
+#if (NGX_HAVE_IP_RECVDSTADDR)
 
         if (cmsg->cmsg_level == IPPROTO_IP
             && cmsg->cmsg_type == IP_RECVDSTADDR

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pluknet at nginx.com  Thu Dec 15 12:47:36 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 15 Dec 2022 16:47:36 +0400
Subject: [PATCH] SSL: SSL_CTX_set_tlsext_ticket_key_cb() deprecated in
 OpenSSL 3.0
In-Reply-To: 
References: <8fbae86083f2efda8b4e.1671070421@enoparse.local>
Message-ID: <094F1B0A-47EA-4379-8E8F-08D15AA0F435@nginx.com>

> On 15 Dec 2022, at 08:25, Maxim Dounin wrote:
>
> Hello!
>
> On Thu, Dec 15, 2022 at 06:13:41AM +0400, Sergey Kandaurov wrote:
>
>> # HG changeset patch
>> # User Sergey Kandaurov
>> # Date 1671069897 -14400
>> #      Thu Dec 15 06:04:57 2022 +0400
>> # Node ID 8fbae86083f2efda8b4e079b3bda148dec220323
>> # Parent  c38588d8376b77fc2f56f90ca16533031b235491
>> SSL: SSL_CTX_set_tlsext_ticket_key_cb() deprecated in OpenSSL 3.0.
>>
>> It becomes hidden when OpenSSL is built with OPENSSL_NO_DEPRECATED.
>> While this is manageable for the ssl_session_ticket_key directive,
>> rotation of ticket keys stored in shared memory is silently disabled.
>>
>> Switch to SSL_CTX_set_tlsext_ticket_key_evp_cb() whenever available.
>> A macro similar to SSL_CTRL_SET_TLSEXT_TICKET_KEY_CB isn't provided,
>> so the feature test uses OSSL_PARAM_octet_string as a close relative.
>> Using the documented macro OSSL_MAC_PARAM_KEY is considered worthless
>> as this requires to conditionally include an additional OpenSSL header.
>
> Do we really need this?
>
> Given the amount of various API changes and deprecations in
> OpenSSL, especially in OpenSSL 3.0, and the quality of new APIs
> being introduced (see 61011bfcdb49 for an example, and the whole
> early data stuff for another one), I would rather refrain from
> supporting this at least till SSL_CTX_set_tlsext_ticket_key_cb(),
> which is also the only API supported by LibreSSL and BoringSSL, is
> actually removed.

The main motivation is to support such experimental builds, such that
they won't affect passing our own nginx tests.  This can wait for sure
till the existing API is removed (if at all).

>
> Note well that building nginx with OpenSSL 3.0 and
> OPENSSL_NO_DEPRECATED defined will simply fail.
>
> If for some reason OpenSSL was built with "no-deprecated", I
> believe it's completely expected that some advanced features using
> APIs not present in the build will not be available, and it's sole
> responsibility of whoever created such a build.
>
> [...]

-- 
Sergey Kandaurov

From artem.konev at nginx.com  Thu Dec 15 16:11:34 2022
From: artem.konev at nginx.com (=?iso-8859-1?q?Artem_Konev?=)
Date: Thu, 15 Dec 2022 16:11:34 +0000
Subject: [PATCH] Added info about the Unit 1.29.0 release
Message-ID: <599196cb4114302d9c0f.1671120694@ork-ml-00029876.station>

 xml/index.xml |  10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)


# HG changeset patch
# User Artem Konev
# Date 1671119709 0
#      Thu Dec 15 15:55:09 2022 +0000
# Node ID 599196cb4114302d9c0faeb41d6ee48b0f330762
# Parent  a2708cf6ebdb8c802eddac9bfe9e68606336ceae
Added info about the Unit 1.29.0 release.

diff --git a/xml/index.xml b/xml/index.xml
--- a/xml/index.xml
+++ b/xml/index.xml
@@ -7,6 +7,16 @@
 
 
 
+
+
+unit-1.29.0
+version has been
+released,
+featuring initial
+njs support and per-app cgroups.
+
+
+
 
 
 nginx-1.23.3

From pluknet at nginx.com  Thu Dec 15 17:09:11 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 15 Dec 2022 21:09:11 +0400
Subject: [PATCH] Added info about the Unit 1.29.0 release
In-Reply-To: <599196cb4114302d9c0f.1671120694@ork-ml-00029876.station>
References: <599196cb4114302d9c0f.1671120694@ork-ml-00029876.station>
Message-ID: 

> On 15 Dec 2022, at 20:11, Artem Konev wrote:
>
>  xml/index.xml |  10 ++++++++++
>  1 files changed, 10 insertions(+), 0 deletions(-)
>
>
> # HG changeset patch
> # User Artem Konev
> # Date 1671119709 0
> #      Thu Dec 15 15:55:09 2022 +0000
> # Node ID 599196cb4114302d9c0faeb41d6ee48b0f330762
> # Parent  a2708cf6ebdb8c802eddac9bfe9e68606336ceae
> Added info about the Unit 1.29.0 release.
>
> diff --git a/xml/index.xml b/xml/index.xml
> --- a/xml/index.xml
> +++ b/xml/index.xml
> @@ -7,6 +7,16 @@
>
>
>
> +
> +
> +unit-1.29.0
> +version has been
> +released,
> +featuring initial
> +njs support and per-app cgroups.
> +
> +
> +
>
>
> nginx-1.23.3

Looks good.

-- 
Sergey Kandaurov

From pluknet at nginx.com  Thu Dec 15 17:24:08 2022
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 15 Dec 2022 21:24:08 +0400
Subject: Fixed the code style
In-Reply-To: 
References: 
Message-ID: <3B8ED5FE-C0B3-4B7D-9CB0-CF42E87E0CF2@nginx.com>

> On 15 Dec 2022, at 15:53, 杜叶飞(淮叶) via nginx-devel wrote:
>
> Hello!
>
>
> # HG changeset patch
> # User BullerDu
> # Date 1671104973 -28800
> #      Thu Dec 15 19:49:33 2022 +0800
> # Branch bugfix_style
> # Node ID 43aa2b889da22758b567964667e95071ad453e59
> # Parent  c38588d8376b77fc2f56f90ca16533031b235491
> Style.
>
> diff -r c38588d8376b -r 43aa2b889da2 src/core/ngx_conf_file.c
> --- a/src/core/ngx_conf_file.c	Tue Dec 13 18:53:53 2022 +0300
> +++ b/src/core/ngx_conf_file.c	Thu Dec 15 19:49:33 2022 +0800
> @@ -544,8 +544,8 @@
>          }
>
>          ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
> -                          "unexpected end of file, "
> -                          "expecting \";\" or \"}\"");
> +                           "unexpected end of file, "
> +                           "expecting \";\" or \"}\"");
>
>          return NGX_ERROR;
>      }
>
> diff -r c38588d8376b -r 43aa2b889da2 src/event/ngx_event_udp.c
> --- a/src/event/ngx_event_udp.c	Tue Dec 13 18:53:53 2022 +0300
> +++ b/src/event/ngx_event_udp.c	Thu Dec 15 19:49:33 2022 +0800
> @@ -88,7 +88,7 @@
>              msg.msg_controllen = sizeof(msg_control);
>
>              ngx_memzero(&msg_control, sizeof(msg_control));
> -       }
> +        }
>
> #endif
>
>          n = recvmsg(lc->fd, &msg, 0);
>
> diff -r c38588d8376b -r 43aa2b889da2 src/os/unix/ngx_udp_sendmsg_chain.c
> --- a/src/os/unix/ngx_udp_sendmsg_chain.c	Tue Dec 13 18:53:53 2022 +0300
> +++ b/src/os/unix/ngx_udp_sendmsg_chain.c	Thu Dec 15 19:49:33 2022 +0800
> @@ -335,7 +335,7 @@
>
> #endif
>
> - #if (NGX_HAVE_IP_RECVDSTADDR)
> +#if (NGX_HAVE_IP_RECVDSTADDR)
>
>          if (cmsg->cmsg_level == IPPROTO_IP
>              && cmsg->cmsg_type == IP_RECVDSTADDR

Please note that patch formatting appears corrupted, I had to apply it
manually.  It looks good otherwise.

If no objections arise, I'll push it shortly.
-- Sergey Kandaurov From mdounin at mdounin.ru Thu Dec 15 19:42:26 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 15 Dec 2022 22:42:26 +0300 Subject: Fixed the code style In-Reply-To: <3B8ED5FE-C0B3-4B7D-9CB0-CF42E87E0CF2@nginx.com> References: <3B8ED5FE-C0B3-4B7D-9CB0-CF42E87E0CF2@nginx.com> Message-ID: Hello! On Thu, Dec 15, 2022 at 09:24:08PM +0400, Sergey Kandaurov wrote: > > > On 15 Dec 2022, at 15:53, 杜叶飞(淮叶) via nginx-devel wrote: > > > > Hello! > > > > > > # HG changeset patch > > # User BullerDu > > # Date 1671104973 -28800 > > # Thu Dec 15 19:49:33 2022 +0800 > > # Branch bugfix_style > > # Node ID 43aa2b889da22758b567964667e95071ad453e59 > > # Parent c38588d8376b77fc2f56f90ca16533031b235491 > > Style. > > diff -r c38588d8376b -r 43aa2b889da2 src/core/ngx_conf_file.c > > --- a/src/core/ngx_conf_file.c Tue Dec 13 18:53:53 2022 +0300 > > +++ b/src/core/ngx_conf_file.c Thu Dec 15 19:49:33 2022 +0800 > > @@ -544,8 +544,8 @@ > > } > > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > > - "unexpected end of file, " > > - "expecting \";\" or \"}\""); > > + "unexpected end of file, " > > + "expecting \";\" or \"}\""); > > return NGX_ERROR; > > } > > diff -r c38588d8376b -r 43aa2b889da2 src/event/ngx_event_udp.c > > --- a/src/event/ngx_event_udp.c Tue Dec 13 18:53:53 2022 +0300 > > +++ b/src/event/ngx_event_udp.c Thu Dec 15 19:49:33 2022 +0800 > > @@ -88,7 +88,7 @@ > > msg.msg_controllen = sizeof(msg_control); > > ngx_memzero(&msg_control, sizeof(msg_control)); > > - } > > + } > > #endif > > n = recvmsg(lc->fd, &msg, 0); > > diff -r c38588d8376b -r 43aa2b889da2 src/os/unix/ngx_udp_sendmsg_chain.c > > --- a/src/os/unix/ngx_udp_sendmsg_chain.c Tue Dec 13 18:53:53 2022 +0300 > > +++ b/src/os/unix/ngx_udp_sendmsg_chain.c Thu Dec 15 19:49:33 2022 +0800 > > @@ -335,7 +335,7 @@ > > #endif > > - #if (NGX_HAVE_IP_RECVDSTADDR) > > +#if (NGX_HAVE_IP_RECVDSTADDR) > > if (cmsg->cmsg_level == IPPROTO_IP > > && cmsg->cmsg_type == IP_RECVDSTADDR > > Please note that patch 
formatting appears corrupted, > I had to apply it manually. It looks good otherwise. > If no objections arise, I'll push it shortly. Looks good to me. -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Thu Dec 15 21:25:14 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 15 Dec 2022 21:25:14 +0000 Subject: [nginx] Version bump. Message-ID: details: https://hg.nginx.org/nginx/rev/d85ce1df2313 branches: changeset: 8115:d85ce1df2313 user: Sergey Kandaurov date: Fri Dec 16 01:15:13 2022 +0400 description: Version bump. diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r c38588d8376b -r d85ce1df2313 src/core/nginx.h --- a/src/core/nginx.h Tue Dec 13 18:53:53 2022 +0300 +++ b/src/core/nginx.h Fri Dec 16 01:15:13 2022 +0400 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1023003 -#define NGINX_VERSION "1.23.3" +#define nginx_version 1023004 +#define NGINX_VERSION "1.23.4" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From pluknet at nginx.com Thu Dec 15 21:25:17 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 15 Dec 2022 21:25:17 +0000 Subject: [nginx] Style. Message-ID: details: https://hg.nginx.org/nginx/rev/3108d4d668e4 branches: changeset: 8116:3108d4d668e4 user: BullerDu date: Fri Dec 16 01:15:15 2022 +0400 description: Style. 
diffstat: src/core/ngx_conf_file.c | 4 ++-- src/event/ngx_event_udp.c | 2 +- src/os/unix/ngx_udp_sendmsg_chain.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diffs (38 lines): diff -r d85ce1df2313 -r 3108d4d668e4 src/core/ngx_conf_file.c --- a/src/core/ngx_conf_file.c Fri Dec 16 01:15:13 2022 +0400 +++ b/src/core/ngx_conf_file.c Fri Dec 16 01:15:15 2022 +0400 @@ -544,8 +544,8 @@ ngx_conf_read_token(ngx_conf_t *cf) } ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, - "unexpected end of file, " - "expecting \";\" or \"}\""); + "unexpected end of file, " + "expecting \";\" or \"}\""); return NGX_ERROR; } diff -r d85ce1df2313 -r 3108d4d668e4 src/event/ngx_event_udp.c --- a/src/event/ngx_event_udp.c Fri Dec 16 01:15:13 2022 +0400 +++ b/src/event/ngx_event_udp.c Fri Dec 16 01:15:15 2022 +0400 @@ -88,7 +88,7 @@ ngx_event_recvmsg(ngx_event_t *ev) msg.msg_controllen = sizeof(msg_control); ngx_memzero(&msg_control, sizeof(msg_control)); - } + } #endif n = recvmsg(lc->fd, &msg, 0); diff -r d85ce1df2313 -r 3108d4d668e4 src/os/unix/ngx_udp_sendmsg_chain.c --- a/src/os/unix/ngx_udp_sendmsg_chain.c Fri Dec 16 01:15:13 2022 +0400 +++ b/src/os/unix/ngx_udp_sendmsg_chain.c Fri Dec 16 01:15:15 2022 +0400 @@ -335,7 +335,7 @@ ngx_get_srcaddr_cmsg(struct cmsghdr *cms #endif - #if (NGX_HAVE_IP_RECVDSTADDR) +#if (NGX_HAVE_IP_RECVDSTADDR) if (cmsg->cmsg_level == IPPROTO_IP && cmsg->cmsg_type == IP_RECVDSTADDR From mdounin at mdounin.ru Thu Dec 15 21:54:08 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 16 Dec 2022 00:54:08 +0300 Subject: [PATCH] Tests: ssl session ticket key rotation tests In-Reply-To: <82dc9c3a4ec81636e42e.1671070474@enoparse.local> References: <82dc9c3a4ec81636e42e.1671070474@enoparse.local> Message-ID: Hello! 
On Thu, Dec 15, 2022 at 06:14:34AM +0400, Sergey Kandaurov wrote: > # HG changeset patch > # User Sergey Kandaurov > # Date 1671070326 -14400 > # Thu Dec 15 06:12:06 2022 +0400 > # Node ID 82dc9c3a4ec81636e42e1417ce6661f3b0e4d358 > # Parent ff6c99824947575d4a8d3c9aeea8d6b68e0ace29 > Tests: ssl session ticket key rotation tests. > > diff --git a/ssl_session_ticket_key.t b/ssl_session_ticket_key.t > new file mode 100644 > --- /dev/null > +++ b/ssl_session_ticket_key.t > @@ -0,0 +1,141 @@ > +#!/usr/bin/perl > + > +# (C) Sergey Kandaurov > +# (C) Nginx, Inc. > + > +# Tests for rotation of SSL session ticket keys. > + > +############################################################################### > + > +use warnings; > +use strict; > + > +use Test::More; > + > +BEGIN { use FindBin; chdir($FindBin::Bin); } > + > +use lib 'lib'; > +use Test::Nginx; > + > +############################################################################### > + > +select STDERR; $| = 1; > +select STDOUT; $| = 1; > + > +eval { > + require Net::SSLeay; die if $Net::SSLeay::VERSION < 1.86; > + Net::SSLeay::load_error_strings(); > + Net::SSLeay::SSLeay_add_ssl_algorithms(); > + Net::SSLeay::randomize(); > +}; > +plan(skip_all => 'Net::SSLeay version => 1.86 required') if $@; > + > +my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl') > + ->plan(2)->write_file_expand('nginx.conf', <<'EOF'); > + > +%%TEST_GLOBALS%% > + > +daemon off; > +worker_processes 2; > + > +events { > +} > + > +http { > + %%TEST_GLOBALS_HTTP%% > + > + ssl_certificate_key localhost.key; > + ssl_certificate localhost.crt; > + > + server { > + listen 127.0.0.1:8080 ssl; > + server_name localhost; > + > + ssl_session_cache shared:SSL:1m; > + ssl_session_timeout 2; > + } > +} > + > +EOF > + > +$t->write_file('openssl.conf', < +[ req ] > +default_bits = 2048 > +encrypt_key = no > +distinguished_name = req_distinguished_name > +[ req_distinguished_name ] > +EOF > + > +my $d = $t->testdir(); > + > +foreach my 
$name ('localhost') { > + system('openssl req -x509 -new ' > + . "-config $d/openssl.conf -subj /CN=$name/ " > + . "-out $d/$name.crt -keyout $d/$name.key " > + . ">>$d/openssl.out 2>&1") == 0 > + or die "Can't create certificate for $name: $!\n"; > +} > + > +$t->run(); > + > +############################################################################### > + > +# any test can fail depending on which worker process served connection, > +# with a single worker process it is only the 2nd test that fails > +local $TODO = 'not yet' unless $t->has_version('1.23.2'); It might worth explaining why the test uses multiple worker processes, and why the first test might fail. > + > +my $ses = get_ssl_session(); > +my $key = get_ticket_key_name($ses); > + > +sleep 1; > + > +$ses = get_ssl_session($ses); Any specific reasons to try to reuse sessions? The result is not checked anywhere (well, it might make sense to actually test that sessions can be reused, but that's a different question). > +is(get_ticket_key_name($ses), $key, 'ticket key match'); > + > +sleep 2; > + > +$ses = get_ssl_session($ses); > +isnt(get_ticket_key_name($ses), $key, 'ticket key next'); > + > +############################################################################### > + > +sub get_ticket_key_name { > + my ($ses) = @_; > + my $asn = Net::SSLeay::i2d_SSL_SESSION($ses); > + my $any = qr/[\x00-\xff]/; > +next: > + # tag(10) | len{2} | OCTETSTRING(4) | len{2} | ticket(key_name|..) > + $asn =~ /\xaa\x81($any)\x04\x81($any)($any{16})/g; > + return if !defined $3; > + goto next if unpack("C", $1) - unpack("C", $2) != 3; > + unpack "H*", $3; > +} > + > +sub get_ssl_session { > + my ($ses) = @_; > + > + my ($s, $ssl) = get_ssl_socket(ses => $ses); > + > + Net::SSLeay::write($ssl, < +GET / HTTP/1.0 > +Host: localhost > + > +EOF > + Net::SSLeay::read($ssl); > + > + Net::SSLeay::get_session($ssl); > +} > + > +sub get_ssl_socket { > + my (%extra) = @_; > + > + my $s = IO::Socket::INET->new('127.0.0.1:' . 
port(8080)); > + my $ctx = Net::SSLeay::CTX_new() or die("Failed to create SSL_CTX $!"); > + my $ssl = Net::SSLeay::new($ctx) or die("Failed to create SSL $!"); > + Net::SSLeay::set_session($ssl, $extra{ses}) if $extra{ses}; > + Net::SSLeay::set_fd($ssl, fileno($s)); > + Net::SSLeay::connect($ssl) or die("ssl connect"); > + return ($s, $ssl); > +} > + > +############################################################################### Otherwise looks good. -- Maxim Dounin http://mdounin.ru/ From yefei.dyf at alibaba-inc.com Fri Dec 16 01:12:13 2022 From: yefei.dyf at alibaba-inc.com (杜叶飞(淮叶)) Date: Fri, 16 Dec 2022 09:12:13 +0800 Subject: Re: Fixed the code style In-Reply-To: <3B8ED5FE-C0B3-4B7D-9CB0-CF42E87E0CF2@nginx.com> References: , <3B8ED5FE-C0B3-4B7D-9CB0-CF42E87E0CF2@nginx.com> Message-ID: <58fb5bf3-fbcf-4669-8b2d-2231fc573530.yefei.dyf@alibaba-inc.com> OK, thanks for your help. I'll take note next time. ------------------------------------------------------------------ From: Sergey Kandaurov Date: Friday, 16 December 2022, 01:24 To: nginx-devel ; 杜叶飞(淮叶) Subject: Re: Fixed the code style > On 15 Dec 2022, at 15:53, 杜叶飞(淮叶) via nginx-devel wrote: > > Hello! > > > # HG changeset patch > # User BullerDu > # Date 1671104973 -28800 > # Thu Dec 15 19:49:33 2022 +0800 > # Branch bugfix_style > # Node ID 43aa2b889da22758b567964667e95071ad453e59 > # Parent c38588d8376b77fc2f56f90ca16533031b235491 > Style.
> diff -r c38588d8376b -r 43aa2b889da2 src/core/ngx_conf_file.c > --- a/src/core/ngx_conf_file.c Tue Dec 13 18:53:53 2022 +0300 > +++ b/src/core/ngx_conf_file.c Thu Dec 15 19:49:33 2022 +0800 > @@ -544,8 +544,8 @@ > } > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > - "unexpected end of file, " > - "expecting \";\" or \"}\""); > + "unexpected end of file, " > + "expecting \";\" or \"}\""); > return NGX_ERROR; > } > diff -r c38588d8376b -r 43aa2b889da2 src/event/ngx_event_udp.c > --- a/src/event/ngx_event_udp.c Tue Dec 13 18:53:53 2022 +0300 > +++ b/src/event/ngx_event_udp.c Thu Dec 15 19:49:33 2022 +0800 > @@ -88,7 +88,7 @@ > msg.msg_controllen = sizeof(msg_control); > ngx_memzero(&msg_control, sizeof(msg_control)); > - } > + } > #endif > n = recvmsg(lc->fd, &msg, 0); > diff -r c38588d8376b -r 43aa2b889da2 src/os/unix/ngx_udp_sendmsg_chain.c > --- a/src/os/unix/ngx_udp_sendmsg_chain.c Tue Dec 13 18:53:53 2022 +0300 > +++ b/src/os/unix/ngx_udp_sendmsg_chain.c Thu Dec 15 19:49:33 2022 +0800 > @@ -335,7 +335,7 @@ > #endif > - #if (NGX_HAVE_IP_RECVDSTADDR) > +#if (NGX_HAVE_IP_RECVDSTADDR) > if (cmsg->cmsg_level == IPPROTO_IP > && cmsg->cmsg_type == IP_RECVDSTADDR Please note that patch formatting appears corrupted, I had to apply it manually. It looks good otherwise. If no objections arise, I'll push it shortly. -- Sergey Kandaurov -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Dec 17 20:01:07 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Sat, 17 Dec 2022 23:01:07 +0300 Subject: [PATCH] Updated link to OpenVZ suspend/resume bug Message-ID: # HG changeset patch # User Maxim Dounin # Date 1671306987 -10800 # Sat Dec 17 22:56:27 2022 +0300 # Node ID d3b64770c1e78fa5cef907c35904b21fb8b8e281 # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 Updated link to OpenVZ suspend/resume bug. 
diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c --- a/src/core/ngx_connection.c +++ b/src/core/ngx_connection.c @@ -660,7 +660,7 @@ ngx_open_listening_sockets(ngx_cycle_t * /* * on OpenVZ after suspend/resume EADDRINUSE * may be returned by listen() instead of bind(), see - * https://bugzilla.openvz.org/show_bug.cgi?id=2470 + * https://bugs.openvz.org/browse/OVZ-5587 */ if (err != NGX_EADDRINUSE || !ngx_test_config) { From vbart at wbsrv.ru Sun Dec 18 18:36:55 2022 From: vbart at wbsrv.ru (=?iso-8859-1?q?Valentin_V=2E_Bartenev?=) Date: Sun, 18 Dec 2022 21:36:55 +0300 Subject: [PATCH] Fixed port ranges support in the listen directive Message-ID: <2af1287d2da744335932.1671388615@vbart-laptop> # HG changeset patch # User Valentin Bartenev # Date 1671388142 -10800 # Sun Dec 18 21:29:02 2022 +0300 # Node ID 2af1287d2da744335932f6dca345618f7b80d1c1 # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 Fixed port ranges support in the listen directive. Ports difference must be respected when checking addresses for duplicates, otherwise configurations like this are broken: listen 127.0.0.1:6000-6005 It was broken by 4cc2bfeff46c (nginx 1.23.3). 
diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -4292,7 +4292,7 @@ ngx_http_core_listen(ngx_conf_t *cf, ngx for (i = 0; i < n; i++) { if (ngx_cmp_sockaddr(u.addrs[n].sockaddr, u.addrs[n].socklen, - u.addrs[i].sockaddr, u.addrs[i].socklen, 0) + u.addrs[i].sockaddr, u.addrs[i].socklen, 1) == NGX_OK) { goto next; diff --git a/src/mail/ngx_mail_core_module.c b/src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c +++ b/src/mail/ngx_mail_core_module.c @@ -572,7 +572,7 @@ ngx_mail_core_listen(ngx_conf_t *cf, ngx for (i = 0; i < n; i++) { if (ngx_cmp_sockaddr(u.addrs[n].sockaddr, u.addrs[n].socklen, - u.addrs[i].sockaddr, u.addrs[i].socklen, 0) + u.addrs[i].sockaddr, u.addrs[i].socklen, 1) == NGX_OK) { goto next; diff --git a/src/stream/ngx_stream_core_module.c b/src/stream/ngx_stream_core_module.c --- a/src/stream/ngx_stream_core_module.c +++ b/src/stream/ngx_stream_core_module.c @@ -890,7 +890,7 @@ ngx_stream_core_listen(ngx_conf_t *cf, n for (i = 0; i < n; i++) { if (ngx_cmp_sockaddr(u.addrs[n].sockaddr, u.addrs[n].socklen, - u.addrs[i].sockaddr, u.addrs[i].socklen, 0) + u.addrs[i].sockaddr, u.addrs[i].socklen, 1) == NGX_OK) { goto next; From mdounin at mdounin.ru Sun Dec 18 23:49:32 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 19 Dec 2022 02:49:32 +0300 Subject: [PATCH] Fixed port ranges support in the listen directive In-Reply-To: <2af1287d2da744335932.1671388615@vbart-laptop> References: <2af1287d2da744335932.1671388615@vbart-laptop> Message-ID: Hello! On Sun, Dec 18, 2022 at 09:36:55PM +0300, Valentin V. Bartenev wrote: > # HG changeset patch > # User Valentin Bartenev > # Date 1671388142 -10800 > # Sun Dec 18 21:29:02 2022 +0300 > # Node ID 2af1287d2da744335932f6dca345618f7b80d1c1 > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > Fixed port ranges support in the listen directive. 
> > Ports difference must be respected when checking addresses for duplicates, > otherwise configurations like this are broken: > > listen 127.0.0.1:6000-6005 > > It was broken by 4cc2bfeff46c (nginx 1.23.3). Thanks for catching this. Completely forgot we have this (mis)feature. The patch looks good to me, pushed to http://mdounin.ru/hg/nginx. -- Maxim Dounin http://mdounin.ru/ From qiaozhiqi2016 at gmail.com Tue Dec 20 07:01:59 2022 From: qiaozhiqi2016 at gmail.com (=?UTF-8?B?5LmU5b+X5aWH?=) Date: Tue, 20 Dec 2022 15:01:59 +0800 Subject: [PATCH] Fixed state protection when restarting during the websocket request process Message-ID: # HG changeset patch # User 乔志奇@Matebook-Qiao # Date 1671515941 -28800 # Tue Dec 20 13:59:01 2022 +0800 # Branch nginx-bugfix-websocket # Node ID 3e68435db4a9991921b5bf91d792787a1ad387fb # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 Fixed state protection when restarting during the websocket request process During the websocket request process, it is necessary to add a timer operation, but we need to do state protection for the timer addition operation. When the nginx process is restarted or stopped, the timer should be prohibited from being added, otherwise continuous websocket requests will cause the old process to be unable to exit during the restart process or unable to exit during the stop process. 
diff -r 3108d4d668e4 -r 3e68435db4a9 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Dec 16 01:15:15 2022 +0400 +++ b/src/http/ngx_http_upstream.c Tue Dec 20 13:59:01 2022 +0800 @@ -3559,7 +3559,9 @@ } if (upstream->write->active && !upstream->write->ready) { - ngx_add_timer(upstream->write, u->conf->send_timeout); + if (!ngx_exiting && !ngx_quit) { + ngx_add_timer(upstream->write, u->conf->send_timeout); + } } else if (upstream->write->timer_set) { ngx_del_timer(upstream->write); @@ -3578,7 +3580,9 @@ } if (upstream->read->active && !upstream->read->ready) { - ngx_add_timer(upstream->read, u->conf->read_timeout); + if (!ngx_exiting && !ngx_quit) { + ngx_add_timer(upstream->read, u->conf->read_timeout); + } } else if (upstream->read->timer_set) { ngx_del_timer(upstream->read); @@ -3604,7 +3608,9 @@ } if (downstream->write->active && !downstream->write->ready) { - ngx_add_timer(downstream->write, clcf->send_timeout); + if (!ngx_exiting && !ngx_quit) { + ngx_add_timer(downstream->write, clcf->send_timeout); + } } else if (downstream->write->timer_set) { ngx_del_timer(downstream->write); -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 20 08:03:39 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Dec 2022 11:03:39 +0300 Subject: [PATCH] Fixed state protection when restarting during the websocket request process In-Reply-To: References: Message-ID: Hello! 
On Tue, Dec 20, 2022 at 03:01:59PM +0800, 乔志奇 wrote: > # HG changeset patch > # User 乔志奇@Matebook-Qiao > # Date 1671515941 -28800 > # Tue Dec 20 13:59:01 2022 +0800 > # Branch nginx-bugfix-websocket > # Node ID 3e68435db4a9991921b5bf91d792787a1ad387fb > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > Fixed state protection when restarting during the websocket request process > > During the websocket request process, it is necessary to add a timer > operation, but we need to do state protection for the timer addition > operation. When the nginx process is restarted or stopped, the timer should > be prohibited from being added, otherwise continuous websocket requests > will cause the old process to be unable to exit during the restart process > or unable to exit during the stop process. Thanks, but no. The graceful exit process is expected to handle all existing connections till they are closed normally. This includes Websocket connections. The recommended approach for long-running connections (including Websocket, stream module connections, or just long-running HTTP requests) is to close them periodically from the server side, so they can be properly re-established without errors. If this does not work for you for some reason (for example, if you cannot control the behaviour of the backend server), consider using the "worker_shutdown_timeout" directive (http://nginx.org/r/worker_shutdown_timeout) to terminate all connections after a certain period of time.
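[Editor's note: the directive mentioned above is set in the main (top-level) context of nginx.conf; a minimal sketch follows. The 30s value is an arbitrary illustration, not a recommendation from this thread.]

```nginx
# Main context, alongside worker_processes etc.
# During a graceful shutdown or binary upgrade, give old worker
# processes at most 30 seconds to finish serving their existing
# connections (including Websocket ones); after that nginx closes
# the remaining connections forcibly and the workers exit.
worker_shutdown_timeout 30s;
```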
-- Maxim Dounin http://mdounin.ru/ From qiaozhiqi2016 at gmail.com Tue Dec 20 08:16:37 2022 From: qiaozhiqi2016 at gmail.com (=?UTF-8?B?5LmU5b+X5aWH?=) Date: Tue, 20 Dec 2022 16:16:37 +0800 Subject: [PATCH] Fixed crash protection in round robin Message-ID: # HG changeset patch # User 乔志奇@Matebook-Qiao # Date 1671521412 -28800 # Tue Dec 20 15:30:12 2022 +0800 # Branch nginx-bugfix-crash # Node ID 992013158c8970318c20e2e3294dbc9311bb20c8 # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 Fixed crash protection in round robin When all servers in the upstream are in the down state, rrp->peers will be NULL, initialization will crash here, and protection is needed. diff -r 3108d4d668e4 -r 992013158c89 src/http/ngx_http_upstream_round_robin.c --- a/src/http/ngx_http_upstream_round_robin.c Fri Dec 16 01:15:15 2022 +0400 +++ b/src/http/ngx_http_upstream_round_robin.c Tue Dec 20 15:30:12 2022 +0800 @@ -275,6 +275,10 @@ rrp->current = NULL; rrp->config = 0; + if (rrp->peers == NULL) { + return NGX_ERROR; + } + n = rrp->peers->number; if (rrp->peers->next && rrp->peers->next->number > n) { -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 20 08:48:26 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 20 Dec 2022 11:48:26 +0300 Subject: [PATCH] Fixed crash protection in round robin In-Reply-To: References: Message-ID: Hello! On Tue, Dec 20, 2022 at 04:16:37PM +0800, 乔志奇 wrote: > # HG changeset patch > # User 乔志奇@Matebook-Qiao > # Date 1671521412 -28800 > # Tue Dec 20 15:30:12 2022 +0800 > # Branch nginx-bugfix-crash > # Node ID 992013158c8970318c20e2e3294dbc9311bb20c8 > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > Fixed crash protection in round robin > > When all servers in the upstream are in the down state, rrp->peers will be > NULL, initialization will crash here, and protection is needed. 
> > diff -r 3108d4d668e4 -r 992013158c89 > src/http/ngx_http_upstream_round_robin.c > --- a/src/http/ngx_http_upstream_round_robin.c Fri Dec 16 01:15:15 2022 > +0400 > +++ b/src/http/ngx_http_upstream_round_robin.c Tue Dec 20 15:30:12 2022 > +0800 > @@ -275,6 +275,10 @@ > rrp->current = NULL; > rrp->config = 0; > > + if (rrp->peers == NULL) { > + return NGX_ERROR; > + } > + > n = rrp->peers->number; > > if (rrp->peers->next && rrp->peers->next->number > n) { Could you please clarify how rrp->peers can be NULL here? An example configuration and/or test which demonstrates the problem would be awesome. Thanks in advance. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Dec 20 13:30:16 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Tue, 20 Dec 2022 16:30:16 +0300 Subject: [PATCH 2 of 4] Win32: handling of localized MSVC cl output In-Reply-To: References: Message-ID: <43098cb134a87a404b70.1671543016@1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa> # HG changeset patch # User Maxim Dounin # Date 1671541078 -10800 # Tue Dec 20 15:57:58 2022 +0300 # Node ID 43098cb134a87a404b70fcc77ad01ca343cba969 # Parent f5d9c24fb4ac2a6b82b9d842b88978a329690138 Win32: handling of localized MSVC cl output. Output examples in English, Russian, and Spanish: Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.30319.01 for 80x86 Оптимизирующий 32-разрядный компилятор Microsoft (R) C/C++ версии 16.00.30319.01 для 80x86 Compilador de optimización de C/C++ de Microsoft (R) versión 16.00.30319.01 para x64 Since most of the words are translated, instead of looking for the words "Compiler Version" we now search for "C/C++" and the version number. 
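[Editor's note: the effect of the revised pattern can be checked against the localized banners quoted in the description; a quick shell sketch, using the banner strings from the patch itself:]

```shell
# Banners quoted in the patch description; only the words around
# the version number differ between locales, so the old pattern
# ('Compiler Version') fails on non-English output, while the new
# one keys on "C/C++" plus the first dotted number after a space.
en='Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.30319.01 for 80x86'
ru='Оптимизирующий 32-разрядный компилятор Microsoft (R) C/C++ версии 16.00.30319.01 для 80x86'

extract() {
    echo "$1" | grep 'C/C++.* [0-9][0-9]*\.[0-9]' \
        | sed -e 's/^.* \([0-9][0-9]*\.[0-9].*\)/\1/'
}

extract "$en"   # prints: 16.00.30319.01 for 80x86
extract "$ru"   # prints: 16.00.30319.01 для 80x86
```

Note that the translated tail ("for 80x86" / "para x64") is kept in NGX_MSVC_VER on purpose: the last patch of the series matches "*x64" against it to detect 64-bit builds.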
diff -r f5d9c24fb4ac -r 43098cb134a8 auto/cc/msvc --- a/auto/cc/msvc Tue Dec 20 15:57:51 2022 +0300 +++ b/auto/cc/msvc Tue Dec 20 15:57:58 2022 +0300 @@ -11,8 +11,8 @@ # MSVC 2015 (14.0) cl 19.00 -NGX_MSVC_VER=`$NGX_WINE $CC 2>&1 | grep 'Compiler Version' 2>&1 \ - | sed -e 's/^.* Version \(.*\)/\1/'` +NGX_MSVC_VER=`$NGX_WINE $CC 2>&1 | grep 'C/C++.* [0-9][0-9]*\.[0-9]' 2>&1 \ + | sed -e 's/^.* \([0-9][0-9]*\.[0-9].*\)/\1/'` echo " + cl version: $NGX_MSVC_VER" From mdounin at mdounin.ru Tue Dec 20 13:30:17 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Tue, 20 Dec 2022 16:30:17 +0300 Subject: [PATCH 3 of 4] Win32: i386 now assumed when crossbuilding (ticket #2416) In-Reply-To: References: Message-ID: <6606ed21a7091b060ebe.1671543017@1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa> # HG changeset patch # User Maxim Dounin # Date 1671542876 -10800 # Tue Dec 20 16:27:56 2022 +0300 # Node ID 6606ed21a7091b060ebec0d082876ddbbbe0ea79 # Parent 43098cb134a87a404b70fcc77ad01ca343cba969 Win32: i386 now assumed when crossbuilding (ticket #2416). Previously, NGX_MACHINE was not set when crossbuilding, resulting in NGX_ALIGNMENT=16 being used in 32-bit builds (if not explicitly set to a correct value). This in turn might result in memory corruption in ngx_palloc() (as there are no usable aligned allocator on Windows, and normal malloc() is used instead, which provides 8 byte alignment on 32-bit platforms). To fix this, now i386 machine is set when crossbuilding, so nginx won't assume strict alignment requirements. diff -r 43098cb134a8 -r 6606ed21a709 auto/configure --- a/auto/configure Tue Dec 20 15:57:58 2022 +0300 +++ b/auto/configure Tue Dec 20 16:27:56 2022 +0300 @@ -44,6 +44,7 @@ if test -z "$NGX_PLATFORM"; then else echo "building for $NGX_PLATFORM" NGX_SYSTEM=$NGX_PLATFORM + NGX_MACHINE=i386 fi . 
auto/cc/conf From mdounin at mdounin.ru Tue Dec 20 13:30:15 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Tue, 20 Dec 2022 16:30:15 +0300 Subject: [PATCH 1 of 4] Win32: removed unneeded wildcard in NGX_CC_NAME test for msvc Message-ID: # HG changeset patch # User Maxim Dounin # Date 1671541071 -10800 # Tue Dec 20 15:57:51 2022 +0300 # Node ID f5d9c24fb4ac2a6b82b9d842b88978a329690138 # Parent 2af1287d2da744335932f6dca345618f7b80d1c1 Win32: removed unneeded wildcard in NGX_CC_NAME test for msvc. Wildcards for msvc in NGX_CC_NAME tests are not needed since 78f8ac479735. diff -r 2af1287d2da7 -r f5d9c24fb4ac auto/cc/conf --- a/auto/cc/conf Sun Dec 18 21:29:02 2022 +0300 +++ b/auto/cc/conf Tue Dec 20 15:57:51 2022 +0300 @@ -117,7 +117,7 @@ else . auto/cc/acc ;; - msvc*) + msvc) # MSVC++ 6.0 SP2, MSVC++ Toolkit 2003 . auto/cc/msvc From mdounin at mdounin.ru Tue Dec 20 13:30:18 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Tue, 20 Dec 2022 16:30:18 +0300 Subject: [PATCH 4 of 4] Win32: OpenSSL compilation for x64 targets with MSVC In-Reply-To: References: Message-ID: # HG changeset patch # User Maxim Dounin # Date 1671542914 -10800 # Tue Dec 20 16:28:34 2022 +0300 # Node ID e5a75718823d5ec365703275f3efa87d0b63f8c4 # Parent 6606ed21a7091b060ebec0d082876ddbbbe0ea79 Win32: OpenSSL compilation for x64 targets with MSVC. To ensure proper target selection the NGX_MACHINE variable is now set based on the MSVC compiler output, and the OpenSSL target is set based on it. This is not important as long as "no-asm" is used (as in misc/GNUmakefile and win32 build instructions), but might be beneficial if someone is trying to build OpenSSL with assembler code. 
diff -r 6606ed21a709 -r e5a75718823d auto/cc/msvc --- a/auto/cc/msvc Tue Dec 20 16:27:56 2022 +0300 +++ b/auto/cc/msvc Tue Dec 20 16:28:34 2022 +0300 @@ -22,6 +22,21 @@ have=NGX_COMPILER value="\"cl $NGX_MSVC_ ngx_msvc_ver=`echo $NGX_MSVC_VER | sed -e 's/^\([0-9]*\).*/\1/'` +# detect x64 builds + +case "$NGX_MSVC_VER" in + + *x64) + NGX_MACHINE=amd64 + ;; + + *) + NGX_MACHINE=i386 + ;; + +esac + + # optimizations # maximize speed, equivalent to -Og -Oi -Ot -Oy -Ob2 -Gs -GF -Gy diff -r 6606ed21a709 -r e5a75718823d auto/lib/openssl/make --- a/auto/lib/openssl/make Tue Dec 20 16:27:56 2022 +0300 +++ b/auto/lib/openssl/make Tue Dec 20 16:28:34 2022 +0300 @@ -7,11 +7,24 @@ case "$CC" in cl) + case "$NGX_MACHINE" in + + amd64) + OPENSSL_TARGET=VC-WIN64A + ;; + + *) + OPENSSL_TARGET=VC-WIN32 + ;; + + esac + cat << END >> $NGX_MAKEFILE $OPENSSL/openssl/include/openssl/ssl.h: $NGX_MAKEFILE \$(MAKE) -f auto/lib/openssl/makefile.msvc \ - OPENSSL="$OPENSSL" OPENSSL_OPT="$OPENSSL_OPT" + OPENSSL="$OPENSSL" OPENSSL_OPT="$OPENSSL_OPT" \ + OPENSSL_TARGET="$OPENSSL_TARGET" END diff -r 6606ed21a709 -r e5a75718823d auto/lib/openssl/makefile.msvc --- a/auto/lib/openssl/makefile.msvc Tue Dec 20 16:27:56 2022 +0300 +++ b/auto/lib/openssl/makefile.msvc Tue Dec 20 16:28:34 2022 +0300 @@ -6,7 +6,7 @@ all: cd $(OPENSSL) - perl Configure VC-WIN32 no-shared no-threads \ + perl Configure $(OPENSSL_TARGET) no-shared no-threads \ --prefix="%cd%/openssl" \ --openssldir="%cd%/openssl/ssl" \ $(OPENSSL_OPT) From pluknet at nginx.com Tue Dec 20 15:51:07 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 20 Dec 2022 19:51:07 +0400 Subject: [PATCH] Tests: ssl session ticket key rotation tests In-Reply-To: References: <82dc9c3a4ec81636e42e.1671070474@enoparse.local> Message-ID: > On 16 Dec 2022, at 01:54, Maxim Dounin wrote: > > Hello! 
> > On Thu, Dec 15, 2022 at 06:14:34AM +0400, Sergey Kandaurov wrote: > >> # HG changeset patch >> # User Sergey Kandaurov >> # Date 1671070326 -14400 >> # Thu Dec 15 06:12:06 2022 +0400 >> # Node ID 82dc9c3a4ec81636e42e1417ce6661f3b0e4d358 >> # Parent ff6c99824947575d4a8d3c9aeea8d6b68e0ace29 >> Tests: ssl session ticket key rotation tests. >> >> diff --git a/ssl_session_ticket_key.t b/ssl_session_ticket_key.t >> new file mode 100644 >> --- /dev/null >> +++ b/ssl_session_ticket_key.t >> @@ -0,0 +1,141 @@ >> +#!/usr/bin/perl >> + >> +# (C) Sergey Kandaurov >> +# (C) Nginx, Inc. >> + >> +# Tests for rotation of SSL session ticket keys. >> + >> +############################################################################### >> + >> +use warnings; >> +use strict; >> + >> +use Test::More; >> + >> +BEGIN { use FindBin; chdir($FindBin::Bin); } >> + >> +use lib 'lib'; >> +use Test::Nginx; >> + >> +############################################################################### >> + >> +select STDERR; $| = 1; >> +select STDOUT; $| = 1; >> + >> +eval { >> + require Net::SSLeay; die if $Net::SSLeay::VERSION < 1.86; >> + Net::SSLeay::load_error_strings(); >> + Net::SSLeay::SSLeay_add_ssl_algorithms(); >> + Net::SSLeay::randomize(); >> +}; >> +plan(skip_all => 'Net::SSLeay version => 1.86 required') if $@; >> + >> +my $t = Test::Nginx->new()->has(qw/http http_ssl/)->has_daemon('openssl') >> + ->plan(2)->write_file_expand('nginx.conf', <<'EOF'); >> + >> +%%TEST_GLOBALS%% >> + >> +daemon off; >> +worker_processes 2; >> + >> +events { >> +} >> + >> +http { >> + %%TEST_GLOBALS_HTTP%% >> + >> + ssl_certificate_key localhost.key; >> + ssl_certificate localhost.crt; >> + >> + server { >> + listen 127.0.0.1:8080 ssl; >> + server_name localhost; >> + >> + ssl_session_cache shared:SSL:1m; >> + ssl_session_timeout 2; >> + } >> +} >> + >> +EOF >> + >> +$t->write_file('openssl.conf', <<EOF); >> +[ req ] >> +default_bits = 2048 >> +encrypt_key = no >> +distinguished_name = req_distinguished_name >> 
+[ req_distinguished_name ] >> +EOF >> + >> +my $d = $t->testdir(); >> + >> +foreach my $name ('localhost') { >> + system('openssl req -x509 -new ' >> + . "-config $d/openssl.conf -subj /CN=$name/ " >> + . "-out $d/$name.crt -keyout $d/$name.key " >> + . ">>$d/openssl.out 2>&1") == 0 >> + or die "Can't create certificate for $name: $!\n"; >> +} >> + >> +$t->run(); >> + >> +############################################################################### >> + >> +# any test can fail depending on which worker process served connection, >> +# with a single worker process it is only the 2nd test that fails >> +local $TODO = 'not yet' unless $t->has_version('1.23.2'); > > It might be worth explaining why the test uses multiple worker > processes, and why the first test might fail. > Makes sense, added a lengthy comment. Pushed in http://hg.nginx.org/nginx-tests/rev/5817625792bd >> + >> +my $ses = get_ssl_session(); >> +my $key = get_ticket_key_name($ses); >> + >> +sleep 1; >> + >> +$ses = get_ssl_session($ses); > > Any specific reasons to try to reuse sessions? The result is > not checked anywhere (well, it might make sense to actually test > that sessions can be reused, but that's a different question). Looks like a leftover from early testing, thanks. While checking that sessions are properly restored may make sense, I'd abstain from doing so, as it is essentially (fragile) testing of SSL protocols. Basically, if you continue to receive tickets protected with the same key, then transitively it looks like such sessions are reusable, because the ticket key is the default one. But that's not always so: it depends on when the session is started and when the keys are rotated. 
In edge cases, an attempt to reuse a just-expired session results in a full SSL handshake and a new session, but the new session ticket is sent protected with the same soon-to-expire default ticket key, because key rotation hasn't happened yet due to a one-second difference from the moment when OpenSSL decides that the session has expired. The behaviour is specific to TLSv1.3, where tickets are always renewed. Still, and this is the opposite edge case, such a new session obtained with a soon-to-expire key is reusable and, if reused after its key has expired and rotated, this results in a ticket renewal protected with a different, new key. Long story short, key rotation doesn't necessarily correspond to session reuse. Testing that ticket keys are rotated should be sufficient. -- Sergey Kandaurov From pluknet at nginx.com Tue Dec 20 21:49:41 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 21 Dec 2022 01:49:41 +0400 Subject: [PATCH] Updated link to OpenVZ suspend/resume bug In-Reply-To: References: Message-ID: <82D3AAC6-76A0-41A7-9E02-51D5D72B0FAC@nginx.com> > On 18 Dec 2022, at 00:01, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1671306987 -10800 > # Sat Dec 17 22:56:27 2022 +0300 > # Node ID d3b64770c1e78fa5cef907c35904b21fb8b8e281 > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > Updated link to OpenVZ suspend/resume bug. > > diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c > --- a/src/core/ngx_connection.c > +++ b/src/core/ngx_connection.c > @@ -660,7 +660,7 @@ ngx_open_listening_sockets(ngx_cycle_t * > /* > * on OpenVZ after suspend/resume EADDRINUSE > * may be returned by listen() instead of bind(), see > - * https://bugzilla.openvz.org/show_bug.cgi?id=2470 > + * https://bugs.openvz.org/browse/OVZ-5587 > */ > > if (err != NGX_EADDRINUSE || !ngx_test_config) { Looks good. On a related note, I wonder if the bug was ever fixed. Looking at the bug tracker, it seems not. 
OTOH, the bug manifested on linux 2.6.32, while "OpenVZ can work with unpatched Linux 3.x kernels" (c). Anyway, the workaround doesn't seem to have noticeable maintenance costs. -- Sergey Kandaurov From xeioex at nginx.com Wed Dec 21 02:53:10 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 21 Dec 2022 02:53:10 +0000 Subject: [njs] Fixed typos. Message-ID: details: https://hg.nginx.org/njs/rev/5fc0aa4a4e72 branches: changeset: 2016:5fc0aa4a4e72 user: Jérémy Lal date: Thu Dec 15 13:04:46 2022 +0100 description: Fixed typos. diffstat: src/njs_disassembler.c | 4 ++-- src/njs_generator.c | 4 ++-- src/njs_lexer.c | 6 +++--- src/njs_lexer.h | 4 ++-- src/njs_object.c | 2 +- src/njs_parser.c | 14 +++++++------- src/njs_vmcode.c | 4 ++-- src/njs_vmcode.h | 2 +- 8 files changed, 20 insertions(+), 20 deletions(-) diffs (184 lines): diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_disassembler.c --- a/src/njs_disassembler.c Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_disassembler.c Thu Dec 15 13:04:46 2022 +0100 @@ -88,8 +88,8 @@ static njs_code_name_t code_names[] = { { NJS_VMCODE_ADDITION, sizeof(njs_vmcode_3addr_t), njs_str("ADD ") }, - { NJS_VMCODE_SUBSTRACTION, sizeof(njs_vmcode_3addr_t), - njs_str("SUBSTRACT ") }, + { NJS_VMCODE_SUBTRACTION, sizeof(njs_vmcode_3addr_t), + njs_str("SUBTRACT ") }, { NJS_VMCODE_MULTIPLICATION, sizeof(njs_vmcode_3addr_t), njs_str("MULTIPLY ") }, { NJS_VMCODE_EXPONENTIATION, sizeof(njs_vmcode_3addr_t), diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_generator.c --- a/src/njs_generator.c Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_generator.c Thu Dec 15 13:04:46 2022 +0100 @@ -616,7 +616,7 @@ njs_generate(njs_vm_t *vm, njs_generator case NJS_TOKEN_RIGHT_SHIFT_ASSIGNMENT: case NJS_TOKEN_UNSIGNED_RIGHT_SHIFT_ASSIGNMENT: case NJS_TOKEN_ADDITION_ASSIGNMENT: - case NJS_TOKEN_SUBSTRACTION_ASSIGNMENT: + case NJS_TOKEN_SUBTRACTION_ASSIGNMENT: case NJS_TOKEN_MULTIPLICATION_ASSIGNMENT: case NJS_TOKEN_EXPONENTIATION_ASSIGNMENT: case 
NJS_TOKEN_DIVISION_ASSIGNMENT: @@ -639,7 +639,7 @@ njs_generate(njs_vm_t *vm, njs_generator case NJS_TOKEN_RIGHT_SHIFT: case NJS_TOKEN_UNSIGNED_RIGHT_SHIFT: case NJS_TOKEN_ADDITION: - case NJS_TOKEN_SUBSTRACTION: + case NJS_TOKEN_SUBTRACTION: case NJS_TOKEN_MULTIPLICATION: case NJS_TOKEN_EXPONENTIATION: case NJS_TOKEN_DIVISION: diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_lexer.c --- a/src/njs_lexer.c Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_lexer.c Thu Dec 15 13:04:46 2022 +0100 @@ -64,7 +64,7 @@ static const uint8_t njs_tokens[256] n /* & ' */ NJS_TOKEN_BITWISE_AND, NJS_TOKEN_SINGLE_QUOTE, /* ( ) */ NJS_TOKEN_OPEN_PARENTHESIS, NJS_TOKEN_CLOSE_PARENTHESIS, /* * + */ NJS_TOKEN_MULTIPLICATION, NJS_TOKEN_ADDITION, - /* , - */ NJS_TOKEN_COMMA, NJS_TOKEN_SUBSTRACTION, + /* , - */ NJS_TOKEN_COMMA, NJS_TOKEN_SUBTRACTION, /* . / */ NJS_TOKEN_DOT, NJS_TOKEN_DIVISION, /* 0 1 */ NJS_TOKEN_DIGIT, NJS_TOKEN_DIGIT, @@ -196,7 +196,7 @@ static const njs_lexer_multi_t njs_addi static const njs_lexer_multi_t njs_substraction_token[] = { { '-', NJS_TOKEN_DECREMENT, 0, NULL }, - { '=', NJS_TOKEN_SUBSTRACTION_ASSIGNMENT, 0, NULL }, + { '=', NJS_TOKEN_SUBTRACTION_ASSIGNMENT, 0, NULL }, }; @@ -639,7 +639,7 @@ njs_lexer_make_token(njs_lexer_t *lexer, njs_nitems(njs_addition_token)); break; - case NJS_TOKEN_SUBSTRACTION: + case NJS_TOKEN_SUBTRACTION: njs_lexer_multi(lexer, token, njs_substraction_token, njs_nitems(njs_substraction_token)); break; diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_lexer.h --- a/src/njs_lexer.h Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_lexer.h Thu Dec 15 13:04:46 2022 +0100 @@ -39,7 +39,7 @@ typedef enum { NJS_TOKEN_ASSIGNMENT, NJS_TOKEN_ARROW, NJS_TOKEN_ADDITION_ASSIGNMENT, - NJS_TOKEN_SUBSTRACTION_ASSIGNMENT, + NJS_TOKEN_SUBTRACTION_ASSIGNMENT, NJS_TOKEN_MULTIPLICATION_ASSIGNMENT, NJS_TOKEN_EXPONENTIATION_ASSIGNMENT, NJS_TOKEN_DIVISION_ASSIGNMENT, @@ -66,7 +66,7 @@ typedef enum { NJS_TOKEN_ADDITION, NJS_TOKEN_UNARY_PLUS, - NJS_TOKEN_SUBSTRACTION, + 
NJS_TOKEN_SUBTRACTION, NJS_TOKEN_UNARY_NEGATION, NJS_TOKEN_MULTIPLICATION, diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_object.c --- a/src/njs_object.c Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_object.c Thu Dec 15 13:04:46 2022 +0100 @@ -2098,7 +2098,7 @@ njs_object_prototype_create_constructor( if (setval != NULL) { if (!njs_is_object(value)) { - njs_type_error(vm, "Cannot create propery \"constructor\" on %s", + njs_type_error(vm, "Cannot create property \"constructor\" on %s", njs_type_string(value->type)); return NJS_ERROR; } diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_parser.c --- a/src/njs_parser.c Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_parser.c Thu Dec 15 13:04:46 2022 +0100 @@ -3438,7 +3438,7 @@ njs_parser_unary_expression(njs_parser_t operation = NJS_VMCODE_UNARY_PLUS; break; - case NJS_TOKEN_SUBSTRACTION: + case NJS_TOKEN_SUBTRACTION: type = NJS_TOKEN_UNARY_NEGATION; operation = NJS_VMCODE_UNARY_NEGATION; break; @@ -3768,8 +3768,8 @@ njs_parser_additive_expression_match(njs operation = NJS_VMCODE_ADDITION; break; - case NJS_TOKEN_SUBSTRACTION: - operation = NJS_VMCODE_SUBSTRACTION; + case NJS_TOKEN_SUBTRACTION: + operation = NJS_VMCODE_SUBTRACTION; break; default: @@ -4438,9 +4438,9 @@ njs_parser_assignment_operator(njs_parse operation = NJS_VMCODE_ADDITION; break; - case NJS_TOKEN_SUBSTRACTION_ASSIGNMENT: + case NJS_TOKEN_SUBTRACTION_ASSIGNMENT: njs_thread_log_debug("JS: -="); - operation = NJS_VMCODE_SUBSTRACTION; + operation = NJS_VMCODE_SUBTRACTION; break; case NJS_TOKEN_LEFT_SHIFT_ASSIGNMENT: @@ -9337,7 +9337,7 @@ njs_parser_serialize_node(njs_chb_t *cha njs_token_serialize(NJS_TOKEN_CONDITIONAL); njs_token_serialize(NJS_TOKEN_ASSIGNMENT); njs_token_serialize(NJS_TOKEN_ADDITION_ASSIGNMENT); - njs_token_serialize(NJS_TOKEN_SUBSTRACTION_ASSIGNMENT); + njs_token_serialize(NJS_TOKEN_SUBTRACTION_ASSIGNMENT); njs_token_serialize(NJS_TOKEN_MULTIPLICATION_ASSIGNMENT); njs_token_serialize(NJS_TOKEN_EXPONENTIATION_ASSIGNMENT); 
njs_token_serialize(NJS_TOKEN_DIVISION_ASSIGNMENT); @@ -9356,7 +9356,7 @@ njs_parser_serialize_node(njs_chb_t *cha njs_token_serialize(NJS_TOKEN_UNARY_PLUS); njs_token_serialize(NJS_TOKEN_INCREMENT); njs_token_serialize(NJS_TOKEN_POST_INCREMENT); - njs_token_serialize(NJS_TOKEN_SUBSTRACTION); + njs_token_serialize(NJS_TOKEN_SUBTRACTION); njs_token_serialize(NJS_TOKEN_UNARY_NEGATION); njs_token_serialize(NJS_TOKEN_DECREMENT); njs_token_serialize(NJS_TOKEN_POST_DECREMENT); diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_vmcode.c --- a/src/njs_vmcode.c Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_vmcode.c Thu Dec 15 13:04:46 2022 +0100 @@ -195,7 +195,7 @@ njs_vmcode_interpreter(njs_vm_t *vm, u_c NJS_GOTO_ROW(NJS_VMCODE_ADDITION), NJS_GOTO_ROW(NJS_VMCODE_EQUAL), NJS_GOTO_ROW(NJS_VMCODE_NOT_EQUAL), - NJS_GOTO_ROW(NJS_VMCODE_SUBSTRACTION), + NJS_GOTO_ROW(NJS_VMCODE_SUBTRACTION), NJS_GOTO_ROW(NJS_VMCODE_MULTIPLICATION), NJS_GOTO_ROW(NJS_VMCODE_EXPONENTIATION), NJS_GOTO_ROW(NJS_VMCODE_DIVISION), @@ -712,7 +712,7 @@ NEXT_LBL; njs_vmcode_operand(vm, vmcode->operand1, retval); \ pc += sizeof(njs_vmcode_3addr_t) - CASE (NJS_VMCODE_SUBSTRACTION): + CASE (NJS_VMCODE_SUBTRACTION): njs_vmcode_debug_opcode(); njs_vmcode_operand(vm, vmcode->operand3, value2); diff -r c43261bad627 -r 5fc0aa4a4e72 src/njs_vmcode.h --- a/src/njs_vmcode.h Mon Dec 12 22:00:23 2022 -0800 +++ b/src/njs_vmcode.h Thu Dec 15 13:04:46 2022 +0100 @@ -81,7 +81,7 @@ enum { NJS_VMCODE_ADDITION, NJS_VMCODE_EQUAL, NJS_VMCODE_NOT_EQUAL, - NJS_VMCODE_SUBSTRACTION, + NJS_VMCODE_SUBTRACTION, NJS_VMCODE_MULTIPLICATION, NJS_VMCODE_EXPONENTIATION, NJS_VMCODE_DIVISION, From arut at nginx.com Wed Dec 21 07:07:32 2022 From: arut at nginx.com (=?iso-8859-1?q?Roman_Arutyunyan?=) Date: Wed, 21 Dec 2022 11:07:32 +0400 Subject: [PATCH] QUIC: OpenSSL compatibility layer Message-ID: <64a365dcb52503e91d91.1671606452@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1671606197 -14400 # Wed Dec 21 11:03:17 2022 +0400 # 
Branch quic # Node ID 64a365dcb52503e91d91c2084f56d072301e65f9 # Parent b87a0dbc1150f415def5bc1e1f00d02b33519026 QUIC: OpenSSL compatibility layer. The change makes it possible to compile QUIC support with OpenSSL, which lacks the QUIC API. The layer is enabled by the "--with-openssl-quic-compat" configure option. diff --git a/auto/lib/openssl/conf b/auto/lib/openssl/conf --- a/auto/lib/openssl/conf +++ b/auto/lib/openssl/conf @@ -153,6 +153,11 @@ END if [ $ngx_found = no ]; then + if [ $OPENSSL_QUIC_COMPAT = YES ]; then + have=NGX_QUIC . auto/have + have=NGX_QUIC_OPENSSL_COMPAT . auto/have + else + cat << END $0: error: certain modules require OpenSSL QUIC support. @@ -161,7 +166,8 @@ QUIC support into the system, or build t statically from the source with nginx by using --with-openssl= option. END - exit 1 + exit 1 + fi fi fi fi diff --git a/auto/modules b/auto/modules --- a/auto/modules +++ b/auto/modules @@ -1357,6 +1357,13 @@ if [ $USE_OPENSSL_QUIC = YES ]; then src/event/quic/ngx_event_quic_output.c \ src/event/quic/ngx_event_quic_socket.c" + if [ $OPENSSL_QUIC_COMPAT = YES ]; then + ngx_module_deps="$ngx_module_deps \ + src/event/quic/ngx_event_quic_openssl_compat.h" + ngx_module_srcs="$ngx_module_srcs \ + src/event/quic/ngx_event_quic_openssl_compat.c" + fi + ngx_module_libs= ngx_module_link=YES ngx_module_order= diff --git a/auto/options b/auto/options --- a/auto/options +++ b/auto/options @@ -154,6 +154,7 @@ PCRE2=YES USE_OPENSSL=NO USE_OPENSSL_QUIC=NO +OPENSSL_QUIC_COMPAT=NO OPENSSL=NONE USE_ZLIB=NO @@ -373,6 +374,7 @@ use the \"--with-mail_ssl_module\" optio --with-openssl=*) OPENSSL="$value" ;; --with-openssl-opt=*) OPENSSL_OPT="$value" ;; + --with-openssl-quic-compat) OPENSSL_QUIC_COMPAT=YES ;; --with-md5=*) NGX_POST_CONF_MSG="$NGX_POST_CONF_MSG @@ -603,6 +605,7 @@ cat << END --with-openssl=DIR set path to OpenSSL library sources --with-openssl-opt=OPTIONS set additional build options for OpenSSL + --with-openssl-quic-compat enable OpenSSL QUIC compatibility mode --with-debug 
enable debug logging diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -9,6 +9,10 @@ #include #include +#if (NGX_QUIC_OPENSSL_COMPAT) +#include +#endif + #define NGX_SSL_PASSWORD_BUFFER_SIZE 4096 @@ -392,6 +396,10 @@ ngx_ssl_create(ngx_ssl_t *ssl, ngx_uint_ SSL_CTX_set_info_callback(ssl->ctx, ngx_ssl_info_callback); +#if (NGX_QUIC_OPENSSL_COMPAT) + ngx_quic_compat_init(ssl->ctx); +#endif + return NGX_OK; } diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -24,6 +24,9 @@ typedef struct ngx_quic_send_ctx_s ng typedef struct ngx_quic_socket_s ngx_quic_socket_t; typedef struct ngx_quic_path_s ngx_quic_path_t; typedef struct ngx_quic_keys_s ngx_quic_keys_t; +#if (NGX_QUIC_OPENSSL_COMPAT) +typedef struct ngx_quic_compat_s ngx_quic_compat_t; +#endif #include #include @@ -36,6 +39,9 @@ typedef struct ngx_quic_keys_s ng #include #include #include +#if (NGX_QUIC_OPENSSL_COMPAT) +#include +#endif /* RFC 9002, 6.2.2. Handshakes and New Paths: kInitialRtt */ @@ -236,6 +242,10 @@ struct ngx_quic_connection_s { ngx_uint_t nshadowbufs; #endif +#if (NGX_QUIC_OPENSSL_COMPAT) + ngx_quic_compat_t *compat; +#endif + ngx_quic_streams_t streams; ngx_quic_congestion_t congestion; diff --git a/src/event/quic/ngx_event_quic_openssl_compat.c b/src/event/quic/ngx_event_quic_openssl_compat.c new file mode 100644 --- /dev/null +++ b/src/event/quic/ngx_event_quic_openssl_compat.c @@ -0,0 +1,610 @@ + +/* + * Copyright (C) Nginx, Inc. 
+ */ + + +#include +#include +#include +#include + + +#define NGX_QUIC_COMPAT_RECORD_BUFFER 1024 + +#define NGX_QUIC_SSL_TP_EXT 0x39 + +#define CLIENT_EARLY_LABEL "CLIENT_EARLY_TRAFFIC_SECRET" +#define CLIENT_HANDSHAKE_LABEL "CLIENT_HANDSHAKE_TRAFFIC_SECRET" +#define SERVER_HANDSHAKE_LABEL "SERVER_HANDSHAKE_TRAFFIC_SECRET" +#define CLIENT_APPLICATION_LABEL "CLIENT_TRAFFIC_SECRET_0" +#define SERVER_APPLICATION_LABEL "SERVER_TRAFFIC_SECRET_0" + + +typedef struct { + ngx_quic_secret_t secret; + ngx_uint_t cipher; +} ngx_quic_compat_keys_t; + + +typedef struct { + ngx_log_t *log; + + u_char type; + ngx_str_t payload; + uint64_t number; + ngx_quic_compat_keys_t *keys; + + enum ssl_encryption_level_t level; +} ngx_quic_compat_record_t; + + +struct ngx_quic_compat_s { + const SSL_QUIC_METHOD *method; + + enum ssl_encryption_level_t write_level; + enum ssl_encryption_level_t read_level; + + uint64_t read_record; + ngx_quic_compat_keys_t keys; + + ngx_str_t tp; + ngx_str_t ctp; +}; + + +static void ngx_quic_compat_keylog_callback(const SSL *ssl, const char *line); +static ngx_int_t ngx_quic_compat_set_encryption_secret(ngx_log_t *log, + ngx_quic_compat_keys_t *keys, enum ssl_encryption_level_t level, + const SSL_CIPHER *cipher, const uint8_t *secret, size_t secret_len); +static void ngx_quic_compat_message_callback(int write_p, int version, + int content_type, const void *buf, size_t len, SSL *ssl, void *arg); +static int ngx_quic_compat_add_transport_params_callback(SSL *ssl, + unsigned int ext_type, unsigned int context, const unsigned char **out, + size_t *outlen, X509 *x, size_t chainidx, int *al, void *add_arg); +static int ngx_quic_compat_parse_transport_params_callback(SSL *ssl, + unsigned int ext_type, unsigned int context, const unsigned char *in, + size_t inlen, X509 *x, size_t chainidx, int *al, void *parse_arg); +static ngx_int_t ngx_quic_compat_create_record(ngx_quic_compat_record_t *rec, + ngx_str_t *res); +size_t 
ngx_quic_compat_create_header(ngx_quic_compat_record_t *rec, u_char *out, + ngx_uint_t plain); + + +ngx_int_t +ngx_quic_compat_init(SSL_CTX *ctx) +{ + SSL_CTX_set_keylog_callback(ctx, ngx_quic_compat_keylog_callback); + + if (SSL_CTX_add_custom_ext(ctx, NGX_QUIC_SSL_TP_EXT, + SSL_EXT_CLIENT_HELLO + |SSL_EXT_TLS1_3_ENCRYPTED_EXTENSIONS, + ngx_quic_compat_add_transport_params_callback, + NULL, + NULL, + ngx_quic_compat_parse_transport_params_callback, + NULL) + == 0) + { + return NGX_ERROR; + } + + return NGX_OK; +} + + +static void +ngx_quic_compat_keylog_callback(const SSL *ssl, const char *line) +{ + int write; + u_char ch, *p, *start, value; + size_t n; + const SSL_CIPHER *cipher; + ngx_quic_compat_t *com; + ngx_connection_t *c; + ngx_quic_connection_t *qc; + enum ssl_encryption_level_t level; + u_char secret[EVP_MAX_MD_SIZE]; + + c = ngx_ssl_get_connection(ssl); + if (c->type != SOCK_DGRAM) { + return; + } + + p = (u_char *) line; + + for (start = p; *p && *p != ' '; p++); + + n = p - start; + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic compat secret %*s", n, start); + + if (n == sizeof(CLIENT_EARLY_LABEL) - 1 + && ngx_strncmp(start, CLIENT_EARLY_LABEL, n) == 0) + { + level = ssl_encryption_early_data; + write = 0; + + } else if (n == sizeof(CLIENT_HANDSHAKE_LABEL) - 1 + && ngx_strncmp(start, CLIENT_HANDSHAKE_LABEL, n) == 0) + { + level = ssl_encryption_handshake; + write = 0; + + } else if (n == sizeof(SERVER_HANDSHAKE_LABEL) - 1 + && ngx_strncmp(start, SERVER_HANDSHAKE_LABEL, n) == 0) + { + level = ssl_encryption_handshake; + write = 1; + + } else if (n == sizeof(CLIENT_APPLICATION_LABEL) - 1 + && ngx_strncmp(start, CLIENT_APPLICATION_LABEL, n) == 0) + { + level = ssl_encryption_application; + write = 0; + + } else if (n == sizeof(SERVER_APPLICATION_LABEL) - 1 + && ngx_strncmp(start, SERVER_APPLICATION_LABEL, n) == 0) + { + level = ssl_encryption_application; + write = 1; + + } else { + return; + } + + if (*p++ == 0) { + return; + } + + for ( /* 
void */ ; *p && *p != ' '; p++); + + if (*p++ == 0) { + return; + } + + for (n = 0, start = p; *p; p++) { + ch = *p; + + if (ch >= '0' && ch <= '9') { + value = ch - '0'; + goto next; + } + + ch = (u_char) (ch | 0x20); + + if (ch >= 'a' && ch <= 'f') { + value = ch - 'a' + 10; + goto next; + } + + ngx_log_error(NGX_LOG_EMERG, c->log, 0, + "invalid OpenSSL QUIC secret format"); + + return; + + next: + + if ((p - start) % 2) { + secret[n] += value; + + if (++n >= EVP_MAX_MD_SIZE) { + ngx_log_error(NGX_LOG_EMERG, c->log, 0, + "too big OpenSSL QUIC secret"); + return; + } + + } else { + secret[n] = (value << 4); + } + } + + qc = ngx_quic_get_connection(c); + com = qc->compat; + + cipher = SSL_get_current_cipher(ssl); + + if (write) { + com->method->set_write_secret((SSL *) ssl, level, cipher, secret, n); + com->write_level = level; + + } else { + com->method->set_read_secret((SSL *) ssl, level, cipher, secret, n); + com->read_level = level; + com->read_record = 0; + + (void) ngx_quic_compat_set_encryption_secret(c->log, &com->keys, level, + cipher, secret, n); + } +} + + +static ngx_int_t +ngx_quic_compat_set_encryption_secret(ngx_log_t *log, + ngx_quic_compat_keys_t *keys, enum ssl_encryption_level_t level, + const SSL_CIPHER *cipher, const uint8_t *secret, size_t secret_len) +{ + ngx_int_t key_len; + ngx_str_t secret_str; + ngx_uint_t i; + ngx_quic_hkdf_t seq[2]; + ngx_quic_secret_t *peer_secret; + ngx_quic_ciphers_t ciphers; + + peer_secret = &keys->secret; + + keys->cipher = SSL_CIPHER_get_id(cipher); + + key_len = ngx_quic_ciphers(keys->cipher, &ciphers, level); + + if (key_len == NGX_ERROR) { + ngx_ssl_error(NGX_LOG_INFO, log, 0, "unexpected cipher"); + return NGX_ERROR; + } + + if (sizeof(peer_secret->secret.data) < secret_len) { + ngx_log_error(NGX_LOG_ALERT, log, 0, + "unexpected secret len: %uz", secret_len); + return NGX_ERROR; + } + + peer_secret->secret.len = secret_len; + ngx_memcpy(peer_secret->secret.data, secret, secret_len); + + peer_secret->key.len = 
key_len; + peer_secret->iv.len = NGX_QUIC_IV_LEN; + + secret_str.len = secret_len; + secret_str.data = (u_char *) secret; + + ngx_quic_hkdf_set(&seq[0], "tls13 key", &peer_secret->key, &secret_str); + ngx_quic_hkdf_set(&seq[1], "tls13 iv", &peer_secret->iv, &secret_str); + + for (i = 0; i < (sizeof(seq) / sizeof(seq[0])); i++) { + if (ngx_quic_hkdf_expand(&seq[i], ciphers.d, log) != NGX_OK) { + return NGX_ERROR; + } + } + + return NGX_OK; +} + + +static void +ngx_quic_compat_message_callback(int write_p, int version, int content_type, + const void *buf, size_t len, SSL *ssl, void *arg) +{ + ngx_quic_compat_t *com; + ngx_connection_t *c; + ngx_quic_connection_t *qc; + enum ssl_encryption_level_t level; + + if (!write_p || content_type != SSL3_RT_HANDSHAKE) { + return; + } + + c = ngx_ssl_get_connection(ssl); + qc = ngx_quic_get_connection(c); + com = qc->compat; + + level = com->write_level == ssl_encryption_initial + ? ssl_encryption_initial + : ssl_encryption_handshake; + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic compat tx %s len:%uz ", + ngx_quic_level_name(level), len); + + (void) com->method->add_handshake_data(ssl, level, buf, len); +} + + +static int +ngx_quic_compat_add_transport_params_callback(SSL *ssl, unsigned int ext_type, + unsigned int context, const unsigned char **out, size_t *outlen, X509 *x, + size_t chainidx, int *al, void *add_arg) +{ + ngx_connection_t *c; + ngx_quic_compat_t *com; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + if (c->type != SOCK_DGRAM) { + return 0; + } + + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic compat add transport params"); + + qc = ngx_quic_get_connection(c); + com = qc->compat; + + *out = com->tp.data; + *outlen = com->tp.len; + + return 1; +} + + +static int +ngx_quic_compat_parse_transport_params_callback(SSL *ssl, unsigned int ext_type, + unsigned int context, const unsigned char *in, size_t inlen, X509 *x, + size_t chainidx, int *al, void *parse_arg) +{ + u_char *p; 
+ ngx_connection_t *c; + ngx_quic_compat_t *com; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + if (c->type != SOCK_DGRAM) { + return 0; + } + + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic compat parse transport params"); + + qc = ngx_quic_get_connection(c); + com = qc->compat; + + p = ngx_pnalloc(c->pool, inlen); + if (p == NULL) { + return 0; + } + + ngx_memcpy(p, in, inlen); + + com->ctp.data = p; + com->ctp.len = inlen; + + return 1; +} + + +int +SSL_set_quic_method(SSL *ssl, const SSL_QUIC_METHOD *quic_method) +{ + BIO *rbio, *wbio; + ngx_connection_t *c; + ngx_quic_compat_t *com; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic compat set method"); + + qc = ngx_quic_get_connection(c); + + qc->compat = ngx_pcalloc(c->pool, sizeof(ngx_quic_compat_t)); + if (qc->compat == NULL) { + return 0; + } + + com = qc->compat; + com->method = quic_method; + + rbio = BIO_new(BIO_s_mem()); + if (rbio == NULL) { + return 0; + } + + wbio = BIO_new(BIO_s_null()); + if (wbio == NULL) { + return 0; + } + + SSL_set_bio(ssl, rbio, wbio); + + SSL_set_msg_callback(ssl, ngx_quic_compat_message_callback); + + return 1; +} + + +int +SSL_provide_quic_data(SSL *ssl, enum ssl_encryption_level_t level, + const uint8_t *data, size_t len) +{ + BIO *rbio; + size_t n; + ngx_str_t res; + ngx_connection_t *c; + ngx_quic_compat_t *com; + ngx_quic_connection_t *qc; + ngx_quic_compat_record_t rec; + u_char in[NGX_QUIC_COMPAT_RECORD_BUFFER]; + u_char out[NGX_QUIC_COMPAT_RECORD_BUFFER + + SSL3_RT_HEADER_LENGTH + + EVP_GCM_TLS_TAG_LEN]; + + c = ngx_ssl_get_connection(ssl); + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic compat rx %s len:%uz", + ngx_quic_level_name(level), len); + + qc = ngx_quic_get_connection(c); + com = qc->compat; + + rbio = SSL_get_rbio(ssl); + + while (len) { + + ngx_memzero(&rec, sizeof(ngx_quic_compat_record_t)); + + rec.type = SSL3_RT_HANDSHAKE; + rec.log = 
c->log; + rec.number = com->read_record++; + rec.keys = &com->keys; + + if (level == ssl_encryption_initial) { + n = ngx_min(len, 65535); + + rec.payload.len = n; + rec.payload.data = (u_char *) data; + + ngx_quic_compat_create_header(&rec, out, 1); + + BIO_write(rbio, out, SSL3_RT_HEADER_LENGTH); + BIO_write(rbio, data, n); + + } else { + n = ngx_min(len, NGX_QUIC_COMPAT_RECORD_BUFFER - 1); + + ngx_memcpy(in, data, n); + in[n] = SSL3_RT_HANDSHAKE; + + rec.payload.len = n + 1; + rec.payload.data = in; + + res.data = out; + + if (ngx_quic_compat_create_record(&rec, &res) != NGX_OK) { + return 0; + } + +#if defined(NGX_QUIC_DEBUG_CRYPTO) && defined(NGX_QUIC_DEBUG_PACKETS) + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic compat rx encrypt len:%uz %xV", res.len, &res); +#endif + + BIO_write(rbio, res.data, res.len); + } + + data += n; + len -= n; + } + + return 1; +} + + +static ngx_int_t +ngx_quic_compat_create_record(ngx_quic_compat_record_t *rec, ngx_str_t *res) +{ + ngx_str_t ad, out; + ngx_quic_secret_t *secret; + ngx_quic_ciphers_t ciphers; + u_char nonce[NGX_QUIC_IV_LEN]; + + ad.data = res->data; + ad.len = ngx_quic_compat_create_header(rec, ad.data, 0); + + out.len = rec->payload.len + EVP_GCM_TLS_TAG_LEN; + out.data = res->data + ad.len; + +#ifdef NGX_QUIC_DEBUG_CRYPTO + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, rec->log, 0, + "quic compat ad len:%uz %xV", ad.len, &ad); +#endif + + if (ngx_quic_ciphers(rec->keys->cipher, &ciphers, rec->level) == NGX_ERROR) + { + return NGX_ERROR; + } + + secret = &rec->keys->secret; + + ngx_memcpy(nonce, secret->iv.data, secret->iv.len); + ngx_quic_compute_nonce(nonce, sizeof(nonce), rec->number); + + if (ngx_quic_tls_seal(ciphers.c, secret, &out, + nonce, &rec->payload, &ad, rec->log) + != NGX_OK) + { + return NGX_ERROR; + } + + res->len = ad.len + out.len; + + return NGX_OK; +} + + +size_t +ngx_quic_compat_create_header(ngx_quic_compat_record_t *rec, u_char *out, + ngx_uint_t plain) +{ + u_char type; + size_t len; + + len 
= rec->payload.len; + + if (plain) { + type = rec->type; + + } else { + type = SSL3_RT_APPLICATION_DATA; + len += EVP_GCM_TLS_TAG_LEN; + } + + out[0] = type; + out[1] = 0x03; + out[2] = 0x03; + out[3] = (len >> 8); + out[4] = len; + + return 5; +} + + +enum ssl_encryption_level_t +SSL_quic_read_level(const SSL *ssl) +{ + ngx_connection_t *c; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + qc = ngx_quic_get_connection(c); + + return qc->compat->read_level; +} + + +enum ssl_encryption_level_t +SSL_quic_write_level(const SSL *ssl) +{ + ngx_connection_t *c; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + qc = ngx_quic_get_connection(c); + + return qc->compat->write_level; +} + + +int +SSL_set_quic_transport_params(SSL *ssl, const uint8_t *params, + size_t params_len) +{ + ngx_connection_t *c; + ngx_quic_compat_t *com; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + qc = ngx_quic_get_connection(c); + com = qc->compat; + + com->tp.len = params_len; + com->tp.data = (u_char *) params; + + return 1; +} + + +void +SSL_get_peer_quic_transport_params(const SSL *ssl, const uint8_t **out_params, + size_t *out_params_len) +{ + ngx_connection_t *c; + ngx_quic_compat_t *com; + ngx_quic_connection_t *qc; + + c = ngx_ssl_get_connection(ssl); + qc = ngx_quic_get_connection(c); + com = qc->compat; + + *out_params = com->ctp.data; + *out_params_len = com->ctp.len; +} diff --git a/src/event/quic/ngx_event_quic_openssl_compat.h b/src/event/quic/ngx_event_quic_openssl_compat.h new file mode 100644 --- /dev/null +++ b/src/event/quic/ngx_event_quic_openssl_compat.h @@ -0,0 +1,51 @@ + +/* + * Copyright (C) Nginx, Inc. 
+ */ + + +#ifndef _NGX_EVENT_QUIC_OPENSSL_COMPAT_H_INCLUDED_ +#define _NGX_EVENT_QUIC_OPENSSL_COMPAT_H_INCLUDED_ + + +#include <ngx_config.h> +#include <ngx_core.h> + + +enum ssl_encryption_level_t { + ssl_encryption_initial = 0, + ssl_encryption_early_data, + ssl_encryption_handshake, + ssl_encryption_application +}; + + +typedef struct ssl_quic_method_st { + int (*set_read_secret)(SSL *ssl, enum ssl_encryption_level_t level, + const SSL_CIPHER *cipher, + const uint8_t *rsecret, size_t secret_len); + int (*set_write_secret)(SSL *ssl, enum ssl_encryption_level_t level, + const SSL_CIPHER *cipher, + const uint8_t *wsecret, size_t secret_len); + int (*add_handshake_data)(SSL *ssl, enum ssl_encryption_level_t level, + const uint8_t *data, size_t len); + int (*flush_flight)(SSL *ssl); + int (*send_alert)(SSL *ssl, enum ssl_encryption_level_t level, + uint8_t alert); +} SSL_QUIC_METHOD; + + +ngx_int_t ngx_quic_compat_init(SSL_CTX *ctx); + +int SSL_set_quic_method(SSL *ssl, const SSL_QUIC_METHOD *quic_method); +enum ssl_encryption_level_t SSL_quic_read_level(const SSL *ssl); +enum ssl_encryption_level_t SSL_quic_write_level(const SSL *ssl); +int SSL_provide_quic_data(SSL *ssl, enum ssl_encryption_level_t level, + const uint8_t *data, size_t len); +int SSL_set_quic_transport_params(SSL *ssl, const uint8_t *params, + size_t params_len); +void SSL_get_peer_quic_transport_params(const SSL *ssl, + const uint8_t **out_params, size_t *out_params_len); + + +#endif /* _NGX_EVENT_QUIC_OPENSSL_COMPAT_H_INCLUDED_ */ diff --git a/src/event/quic/ngx_event_quic_protection.c b/src/event/quic/ngx_event_quic_protection.c --- a/src/event/quic/ngx_event_quic_protection.c +++ b/src/event/quic/ngx_event_quic_protection.c @@ -23,37 +23,6 @@ #endif -#ifdef OPENSSL_IS_BORINGSSL -#define ngx_quic_cipher_t EVP_AEAD -#else -#define ngx_quic_cipher_t EVP_CIPHER -#endif - - -typedef struct { - const ngx_quic_cipher_t *c; - const EVP_CIPHER *hp; - const EVP_MD *d; -} ngx_quic_ciphers_t; - - -typedef struct { - size_t out_len; - 
u_char *out; - - size_t prk_len; - const uint8_t *prk; - - size_t label_len; - const u_char *label; -} ngx_quic_hkdf_t; - -#define ngx_quic_hkdf_set(seq, _label, _out, _prk) \ - (seq)->out_len = (_out)->len; (seq)->out = (_out)->data; \ - (seq)->prk_len = (_prk)->len, (seq)->prk = (_prk)->data, \ - (seq)->label_len = (sizeof(_label) - 1); (seq)->label = (u_char *)(_label); - - static ngx_int_t ngx_hkdf_expand(u_char *out_key, size_t out_len, const EVP_MD *digest, const u_char *prk, size_t prk_len, const u_char *info, size_t info_len); @@ -63,20 +32,12 @@ static ngx_int_t ngx_hkdf_extract(u_char static uint64_t ngx_quic_parse_pn(u_char **pos, ngx_int_t len, u_char *mask, uint64_t *largest_pn); -static void ngx_quic_compute_nonce(u_char *nonce, size_t len, uint64_t pn); -static ngx_int_t ngx_quic_ciphers(ngx_uint_t id, - ngx_quic_ciphers_t *ciphers, enum ssl_encryption_level_t level); static ngx_int_t ngx_quic_tls_open(const ngx_quic_cipher_t *cipher, ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, ngx_str_t *ad, ngx_log_t *log); -static ngx_int_t ngx_quic_tls_seal(const ngx_quic_cipher_t *cipher, - ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, - ngx_str_t *ad, ngx_log_t *log); static ngx_int_t ngx_quic_tls_hp(ngx_log_t *log, const EVP_CIPHER *cipher, ngx_quic_secret_t *s, u_char *out, u_char *in); -static ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *hkdf, - const EVP_MD *digest, ngx_log_t *log); static ngx_int_t ngx_quic_create_packet(ngx_quic_header_t *pkt, ngx_str_t *res); @@ -84,7 +45,7 @@ static ngx_int_t ngx_quic_create_retry_p ngx_str_t *res); -static ngx_int_t +ngx_int_t ngx_quic_ciphers(ngx_uint_t id, ngx_quic_ciphers_t *ciphers, enum ssl_encryption_level_t level) { @@ -221,7 +182,7 @@ ngx_quic_keys_set_initial_secret(ngx_qui } -static ngx_int_t +ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *h, const EVP_MD *digest, ngx_log_t *log) { size_t info_len; @@ -480,7 +441,7 @@ ngx_quic_tls_open(const ngx_quic_cipher_ } 
-static ngx_int_t +ngx_int_t ngx_quic_tls_seal(const ngx_quic_cipher_t *cipher, ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, ngx_str_t *ad, ngx_log_t *log) { @@ -961,7 +922,7 @@ ngx_quic_parse_pn(u_char **pos, ngx_int_ } -static void +void ngx_quic_compute_nonce(u_char *nonce, size_t len, uint64_t pn) { nonce[len - 8] ^= (pn >> 56) & 0x3f; diff --git a/src/event/quic/ngx_event_quic_protection.h b/src/event/quic/ngx_event_quic_protection.h --- a/src/event/quic/ngx_event_quic_protection.h +++ b/src/event/quic/ngx_event_quic_protection.h @@ -23,6 +23,13 @@ #define NGX_QUIC_MAX_MD_SIZE 48 +#ifdef OPENSSL_IS_BORINGSSL +#define ngx_quic_cipher_t EVP_AEAD +#else +#define ngx_quic_cipher_t EVP_CIPHER +#endif + + typedef struct { size_t len; u_char data[NGX_QUIC_MAX_MD_SIZE]; @@ -56,6 +63,30 @@ struct ngx_quic_keys_s { }; +typedef struct { + const ngx_quic_cipher_t *c; + const EVP_CIPHER *hp; + const EVP_MD *d; +} ngx_quic_ciphers_t; + + +typedef struct { + size_t out_len; + u_char *out; + + size_t prk_len; + const uint8_t *prk; + + size_t label_len; + const u_char *label; +} ngx_quic_hkdf_t; + +#define ngx_quic_hkdf_set(seq, _label, _out, _prk) \ + (seq)->out_len = (_out)->len; (seq)->out = (_out)->data; \ + (seq)->prk_len = (_prk)->len, (seq)->prk = (_prk)->data, \ + (seq)->label_len = (sizeof(_label) - 1); (seq)->label = (u_char *)(_label); + + ngx_int_t ngx_quic_keys_set_initial_secret(ngx_quic_keys_t *keys, ngx_str_t *secret, ngx_log_t *log); ngx_int_t ngx_quic_keys_set_encryption_secret(ngx_log_t *log, @@ -70,6 +101,14 @@ void ngx_quic_keys_switch(ngx_connection ngx_int_t ngx_quic_keys_update(ngx_connection_t *c, ngx_quic_keys_t *keys); ngx_int_t ngx_quic_encrypt(ngx_quic_header_t *pkt, ngx_str_t *res); ngx_int_t ngx_quic_decrypt(ngx_quic_header_t *pkt, uint64_t *largest_pn); +ngx_int_t ngx_quic_ciphers(ngx_uint_t id, ngx_quic_ciphers_t *ciphers, + enum ssl_encryption_level_t level); +ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *hkdf, const 
EVP_MD *digest, + ngx_log_t *log); +void ngx_quic_compute_nonce(u_char *nonce, size_t len, uint64_t pn); +ngx_int_t ngx_quic_tls_seal(const ngx_quic_cipher_t *cipher, + ngx_quic_secret_t *s, ngx_str_t *out, u_char *nonce, ngx_str_t *in, + ngx_str_t *ad, ngx_log_t *log); #endif /* _NGX_EVENT_QUIC_PROTECTION_H_INCLUDED_ */ diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c +++ b/src/event/quic/ngx_event_quic_ssl.c @@ -18,7 +18,8 @@ #define NGX_QUIC_MAX_BUFFERED 65535 -#if defined OPENSSL_IS_BORINGSSL || defined LIBRESSL_VERSION_NUMBER +#if defined OPENSSL_IS_BORINGSSL || defined LIBRESSL_VERSION_NUMBER \ + || NGX_QUIC_OPENSSL_COMPAT static int ngx_quic_set_read_secret(ngx_ssl_conn_t *ssl_conn, enum ssl_encryption_level_t level, const SSL_CIPHER *cipher, const uint8_t *secret, size_t secret_len); @@ -39,7 +40,8 @@ static int ngx_quic_send_alert(ngx_ssl_c static ngx_int_t ngx_quic_crypto_input(ngx_connection_t *c, ngx_chain_t *data); -#if defined OPENSSL_IS_BORINGSSL || defined LIBRESSL_VERSION_NUMBER +#if defined OPENSSL_IS_BORINGSSL || defined LIBRESSL_VERSION_NUMBER \ + || NGX_QUIC_OPENSSL_COMPAT static int ngx_quic_set_read_secret(ngx_ssl_conn_t *ssl_conn, @@ -540,7 +542,8 @@ ngx_quic_init_connection(ngx_connection_ ssl_conn = c->ssl->connection; if (!quic_method.send_alert) { -#if defined OPENSSL_IS_BORINGSSL || defined LIBRESSL_VERSION_NUMBER +#if defined OPENSSL_IS_BORINGSSL || defined LIBRESSL_VERSION_NUMBER \ + || NGX_QUIC_OPENSSL_COMPAT quic_method.set_read_secret = ngx_quic_set_read_secret; quic_method.set_write_secret = ngx_quic_set_write_secret; #else diff --git a/src/event/quic/ngx_event_quic_transport.h b/src/event/quic/ngx_event_quic_transport.h --- a/src/event/quic/ngx_event_quic_transport.h +++ b/src/event/quic/ngx_event_quic_transport.h @@ -11,6 +11,10 @@ #include #include +#if (NGX_QUIC_OPENSSL_COMPAT) +#include +#endif + /* * RFC 9000, 17.2. 
Long Header Packets From mdounin at mdounin.ru Wed Dec 21 12:11:36 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 21 Dec 2022 15:11:36 +0300 Subject: [PATCH] Updated link to OpenVZ suspend/resume bug In-Reply-To: <82D3AAC6-76A0-41A7-9E02-51D5D72B0FAC@nginx.com> References: <82D3AAC6-76A0-41A7-9E02-51D5D72B0FAC@nginx.com> Message-ID: Hello! On Wed, Dec 21, 2022 at 01:49:41AM +0400, Sergey Kandaurov wrote: > > > On 18 Dec 2022, at 00:01, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1671306987 -10800 > > # Sat Dec 17 22:56:27 2022 +0300 > > # Node ID d3b64770c1e78fa5cef907c35904b21fb8b8e281 > > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > > Updated link to OpenVZ suspend/resume bug. > > > > diff --git a/src/core/ngx_connection.c b/src/core/ngx_connection.c > > --- a/src/core/ngx_connection.c > > +++ b/src/core/ngx_connection.c > > @@ -660,7 +660,7 @@ ngx_open_listening_sockets(ngx_cycle_t * > > /* > > * on OpenVZ after suspend/resume EADDRINUSE > > * may be returned by listen() instead of bind(), see > > - * https://bugzilla.openvz.org/show_bug.cgi?id=2470 > > + * https://bugs.openvz.org/browse/OVZ-5587 > > */ > > > > if (err != NGX_EADDRINUSE || !ngx_test_config) { > > Looks good. Pushed to http://mdounin.ru/hg/nginx. > On a related note, I wonder if the bug was ever fixed. That's how I found out that the old link is no longer relevant. > Looking at the bug tracker, it seems not. > OTOH, the bug manifested on linux 2.6.32, while > "OpenVZ can work with unpatched Linux 3.x kernels" (c). > Anyway, the workaround doesn't seem to have noticeable > maintenance costs. AFAIK, on unpatched kernels OpenVZ works with various limitations, and it might simply not support suspend/resume. My impression is that the bug is still there, and the workaround is still needed. Either way, I think we can keep the workaround for some additional time.
-- Maxim Dounin http://mdounin.ru/ From yar at nginx.com Thu Dec 22 10:38:48 2022 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Thu, 22 Dec 2022 10:38:48 +0000 Subject: [PATCH] Documented automatic rotation of TLS ticket keys for stream/mail Message-ID: <8033ffaedeb9f029897d.1671705528@ORK-ML-00007151> xml/en/docs/mail/ngx_mail_ssl_module.xml | 6 +++++- xml/en/docs/stream/ngx_stream_ssl_module.xml | 6 +++++- xml/ru/docs/mail/ngx_mail_ssl_module.xml | 6 +++++- xml/ru/docs/stream/ngx_stream_ssl_module.xml | 6 +++++- 4 files changed, 20 insertions(+), 4 deletions(-) -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.org.patch Type: text/x-patch Size: 3876 bytes Desc: not available URL: From pluknet at nginx.com Thu Dec 22 10:45:00 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 22 Dec 2022 14:45:00 +0400 Subject: [PATCH] Documented automatic rotation of TLS ticket keys for stream/mail In-Reply-To: <8033ffaedeb9f029897d.1671705528@ORK-ML-00007151> References: <8033ffaedeb9f029897d.1671705528@ORK-ML-00007151> Message-ID: > On 22 Dec 2022, at 14:38, Yaroslav Zhuravlev wrote: > > xml/en/docs/mail/ngx_mail_ssl_module.xml | 6 +++++- > xml/en/docs/stream/ngx_stream_ssl_module.xml | 6 +++++- > xml/ru/docs/mail/ngx_mail_ssl_module.xml | 6 +++++- > xml/ru/docs/stream/ngx_stream_ssl_module.xml | 6 +++++- > 4 files changed, 20 insertions(+), 4 deletions(-) > > > # HG changeset patch > # User Yaroslav Zhuravlev > # Date 1671705355 0 > # Thu Dec 22 10:35:55 2022 +0000 > # Node ID 8033ffaedeb9f029897d464d85454b9cffa35fd9 > # Parent b2249a72e3deb9f2c3d213667cc5176db5ec2738 > Documented automatic rotation of TLS ticket keys for stream/mail. > Looks good. [..]
-- Sergey Kandaurov From yar at nginx.com Thu Dec 22 10:52:28 2022 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Thu, 22 Dec 2022 10:52:28 +0000 Subject: [PATCH] Documented automatic rotation of TLS ticket keys for stream/mail In-Reply-To: References: <8033ffaedeb9f029897d.1671705528@ORK-ML-00007151> Message-ID: > On 22 Dec 2022, at 10:45, Sergey Kandaurov wrote: > >> >> On 22 Dec 2022, at 14:38, Yaroslav Zhuravlev wrote: >> >> xml/en/docs/mail/ngx_mail_ssl_module.xml | 6 +++++- >> xml/en/docs/stream/ngx_stream_ssl_module.xml | 6 +++++- >> xml/ru/docs/mail/ngx_mail_ssl_module.xml | 6 +++++- >> xml/ru/docs/stream/ngx_stream_ssl_module.xml | 6 +++++- >> 4 files changed, 20 insertions(+), 4 deletions(-) >> >> >> # HG changeset patch >> # User Yaroslav Zhuravlev >> # Date 1671705355 0 >> # Thu Dec 22 10:35:55 2022 +0000 >> # Node ID 8033ffaedeb9f029897d464d85454b9cffa35fd9 >> # Parent b2249a72e3deb9f2c3d213667cc5176db5ec2738 >> Documented automatic rotation of TLS ticket keys for stream/mail. >> > > Looks good. Thank you, committed: http://hg.nginx.org/nginx.org/rev/8033ffaedeb9 > > [..] > > -- > Sergey Kandaurov > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel From pluknet at nginx.com Fri Dec 23 15:25:32 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 23 Dec 2022 15:25:32 +0000 Subject: [nginx] Fixed port ranges support in the listen directive. Message-ID: details: https://hg.nginx.org/nginx/rev/2af1287d2da7 branches: changeset: 8117:2af1287d2da7 user: Valentin Bartenev date: Sun Dec 18 21:29:02 2022 +0300 description: Fixed port ranges support in the listen directive. Ports difference must be respected when checking addresses for duplicates, otherwise configurations like this are broken: listen 127.0.0.1:6000-6005 It was broken by 4cc2bfeff46c (nginx 1.23.3). 
diffstat: src/http/ngx_http_core_module.c | 2 +- src/mail/ngx_mail_core_module.c | 2 +- src/stream/ngx_stream_core_module.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diffs (36 lines): diff -r 3108d4d668e4 -r 2af1287d2da7 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Fri Dec 16 01:15:15 2022 +0400 +++ b/src/http/ngx_http_core_module.c Sun Dec 18 21:29:02 2022 +0300 @@ -4292,7 +4292,7 @@ ngx_http_core_listen(ngx_conf_t *cf, ngx for (i = 0; i < n; i++) { if (ngx_cmp_sockaddr(u.addrs[n].sockaddr, u.addrs[n].socklen, - u.addrs[i].sockaddr, u.addrs[i].socklen, 0) + u.addrs[i].sockaddr, u.addrs[i].socklen, 1) == NGX_OK) { goto next; diff -r 3108d4d668e4 -r 2af1287d2da7 src/mail/ngx_mail_core_module.c --- a/src/mail/ngx_mail_core_module.c Fri Dec 16 01:15:15 2022 +0400 +++ b/src/mail/ngx_mail_core_module.c Sun Dec 18 21:29:02 2022 +0300 @@ -572,7 +572,7 @@ ngx_mail_core_listen(ngx_conf_t *cf, ngx for (i = 0; i < n; i++) { if (ngx_cmp_sockaddr(u.addrs[n].sockaddr, u.addrs[n].socklen, - u.addrs[i].sockaddr, u.addrs[i].socklen, 0) + u.addrs[i].sockaddr, u.addrs[i].socklen, 1) == NGX_OK) { goto next; diff -r 3108d4d668e4 -r 2af1287d2da7 src/stream/ngx_stream_core_module.c --- a/src/stream/ngx_stream_core_module.c Fri Dec 16 01:15:15 2022 +0400 +++ b/src/stream/ngx_stream_core_module.c Sun Dec 18 21:29:02 2022 +0300 @@ -890,7 +890,7 @@ ngx_stream_core_listen(ngx_conf_t *cf, n for (i = 0; i < n; i++) { if (ngx_cmp_sockaddr(u.addrs[n].sockaddr, u.addrs[n].socklen, - u.addrs[i].sockaddr, u.addrs[i].socklen, 0) + u.addrs[i].sockaddr, u.addrs[i].socklen, 1) == NGX_OK) { goto next; From pluknet at nginx.com Fri Dec 23 15:27:04 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 23 Dec 2022 19:27:04 +0400 Subject: [PATCH] Fixed port ranges support in the listen directive In-Reply-To: <2af1287d2da744335932.1671388615@vbart-laptop> References: <2af1287d2da744335932.1671388615@vbart-laptop> Message-ID: 
<7F2C3E1D-53AA-4FA7-A0FB-2D461703BAE2@nginx.com> > On 18 Dec 2022, at 22:36, Valentin V. Bartenev wrote: > > # HG changeset patch > # User Valentin Bartenev > # Date 1671388142 -10800 > # Sun Dec 18 21:29:02 2022 +0300 > # Node ID 2af1287d2da744335932f6dca345618f7b80d1c1 > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > Fixed port ranges support in the listen directive. > > Ports difference must be respected when checking addresses for duplicates, > otherwise configurations like this are broken: > > listen 127.0.0.1:6000-6005 > > It was broken by 4cc2bfeff46c (nginx 1.23.3). > Thanks for the report, the patch looks good, pushed. It could've been caught by tests, if they were not skipped by default due to using wildcard addresses (as you recall, changes went there too), a measure to prevent from running on a box with external addresses. Nowadays, it is rather historic, it's much easier to setup an isolated environment. Probably it's time to move on to 21st century and start running nginx tests with wildcard listening sockets unconditionally. Meanwhile, I've made some changes to have a better chance to catch this. -- Sergey Kandaurov From pluknet at nginx.com Fri Dec 23 16:00:29 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 23 Dec 2022 16:00:29 +0000 Subject: [nginx] Updated link to OpenVZ suspend/resume bug. Message-ID: details: https://hg.nginx.org/nginx/rev/07b0bee87f32 branches: changeset: 8118:07b0bee87f32 user: Maxim Dounin date: Wed Dec 21 14:53:27 2022 +0300 description: Updated link to OpenVZ suspend/resume bug. 
diffstat: src/core/ngx_connection.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r 2af1287d2da7 -r 07b0bee87f32 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Sun Dec 18 21:29:02 2022 +0300 +++ b/src/core/ngx_connection.c Wed Dec 21 14:53:27 2022 +0300 @@ -660,7 +660,7 @@ ngx_open_listening_sockets(ngx_cycle_t * /* * on OpenVZ after suspend/resume EADDRINUSE * may be returned by listen() instead of bind(), see - * https://bugzilla.openvz.org/show_bug.cgi?id=2470 + * https://bugs.openvz.org/browse/OVZ-5587 */ if (err != NGX_EADDRINUSE || !ngx_test_config) { From qiaozhiqi2016 at gmail.com Mon Dec 26 07:02:04 2022 From: qiaozhiqi2016 at gmail.com (乔志奇) Date: Mon, 26 Dec 2022 15:02:04 +0800 Subject: [PATCH] Fixed state protection when restarting during the websocket request process In-Reply-To: References: Message-ID: Hello, Thanks for the suggestion, "worker_shutdown_timeout" is good for me. Maxim Dounin wrote on Tue, Dec 20, 2022 at 16:04: > Hello! > > On Tue, Dec 20, 2022 at 03:01:59PM +0800, 乔志奇 wrote: > > > # HG changeset patch > > # User 乔志奇@Matebook-Qiao > > # Date 1671515941 -28800 > > # Tue Dec 20 13:59:01 2022 +0800 > > # Branch nginx-bugfix-websocket > > # Node ID 3e68435db4a9991921b5bf91d792787a1ad387fb > > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > > Fixed state protection when restarting during the websocket request > process > > > > During the websocket request process, it is necessary to add a timer > > operation, but we need to do state protection for the timer addition > > operation. When the nginx process is restarted or stopped, the timer should > > be prohibited from being added, otherwise continuous websocket requests > > will cause the old process to be unable to exit during the restart process > > or unable to exit during the stop process. > > Thanks, but no. > > The graceful exit process is expected to handle all existing > connections till they are closed normally.
This includes > Websocket connections. > > The recommended approach for long-running connections (including > Websocket, stream module connections, or just long-running HTTP > requests) is to close them periodically from the server side, so > they can be properly re-established without errors. > > If this does not work for you for some reason (for example, if you > cannot control the behaviour of the backend server), consider > using the "worker_shutdown_timeout" directive > (http://nginx.org/r/worker_shutdown_timeout) to terminate all > connections after a certain period of time. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qiaozhiqi2016 at gmail.com Mon Dec 26 07:08:18 2022 From: qiaozhiqi2016 at gmail.com (乔志奇) Date: Mon, 26 Dec 2022 15:08:18 +0800 Subject: [PATCH] Fixed crash protection in round robin In-Reply-To: References: Message-ID: Hello, I'm sorry, I triggered this issue while developing the dynamic configuration change module. When I directly modify the configuration content in memory through the API interface, it may set all the servers to the backup state, which will cause rrp->peers to be NULL. However, during the process of starting nginx with local configuration, this situation will not occur because the configuration check will report "no server in upstream", so this is not a problem. Thank you for your response. Maxim Dounin wrote on Tue, Dec 20, 2022 at 16:48: > Hello!
> > On Tue, Dec 20, 2022 at 04:16:37PM +0800, 乔志奇 wrote: > > > # HG changeset patch > > # User 乔志奇@Matebook-Qiao > > # Date 1671521412 -28800 > > # Tue Dec 20 15:30:12 2022 +0800 > > # Branch nginx-bugfix-crash > > # Node ID 992013158c8970318c20e2e3294dbc9311bb20c8 > > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > > Fixed crash protection in round robin > > > > When all servers in the upstream are in the down state, rrp->peers will > be > > NULL, initialization will crash here, and protection is needed. > > > > diff -r 3108d4d668e4 -r 992013158c89 > > src/http/ngx_http_upstream_round_robin.c > > --- a/src/http/ngx_http_upstream_round_robin.c Fri Dec 16 01:15:15 2022 > > +0400 > > +++ b/src/http/ngx_http_upstream_round_robin.c Tue Dec 20 15:30:12 2022 > > +0800 > > @@ -275,6 +275,10 @@ > > rrp->current = NULL; > > rrp->config = 0; > > > > + if (rrp->peers == NULL) { > > + return NGX_ERROR; > > + } > > + > > n = rrp->peers->number; > > > > if (rrp->peers->next && rrp->peers->next->number > n) { > > Could you please clarify how rrp->peers can be NULL here? An > example configuration and/or test which demonstrates the problem > would be awesome. Thanks in advance. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 26 22:57:45 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 27 Dec 2022 01:57:45 +0300 Subject: [PATCH] Fixed port ranges support in the listen directive In-Reply-To: <7F2C3E1D-53AA-4FA7-A0FB-2D461703BAE2@nginx.com> References: <2af1287d2da744335932.1671388615@vbart-laptop> <7F2C3E1D-53AA-4FA7-A0FB-2D461703BAE2@nginx.com> Message-ID: Hello! On Fri, Dec 23, 2022 at 07:27:04PM +0400, Sergey Kandaurov wrote: > > On 18 Dec 2022, at 22:36, Valentin V. 
Bartenev wrote: > > > > # HG changeset patch > > # User Valentin Bartenev > > # Date 1671388142 -10800 > > # Sun Dec 18 21:29:02 2022 +0300 > > # Node ID 2af1287d2da744335932f6dca345618f7b80d1c1 > > # Parent 3108d4d668e4b907868b815f0441d4c893bf4188 > > Fixed port ranges support in the listen directive. > > > > Ports difference must be respected when checking addresses for duplicates, > > otherwise configurations like this are broken: > > > > listen 127.0.0.1:6000-6005 > > > > It was broken by 4cc2bfeff46c (nginx 1.23.3). > > > > Thanks for the report, the patch looks good, pushed. > > It could've been caught by tests, if they were not skipped by default > due to using wildcard addresses (as you recall, changes went there too), > a measure to prevent from running on a box with external addresses. > Nowadays, it is rather historic, it's much easier to setup an isolated > environment. Probably it's time to move on to 21st century and start > running nginx tests with wildcard listening sockets unconditionally. > Meanwhile, I've made some changes to have a better chance to catch this. Testing port ranges certainly does not depend on using the wildcard address. It's more about properly structuring tests. Note though that even with proper structuring and/or with TEST_NGINX_UNSAFE the particular test is likely to be skipped with parallel test execution since the test requires two consecutive ports, which are unlikely to be allocated with parallel tests. Either way, I'm certainly against the idea of listening on the wildcard address by default in tests. The original idea is that tests should listen on local addresses only, so it should be reasonably safe to run tests on any host. And I would rather preserve this behaviour. If tests on wildcard addresses are indeed important, it might be a good idea to introduce a separate switch to allow such tests instead of requiring TEST_NGINX_UNSAFE.
I don't think this is needed in this particular case though, and properly re-structuring tests should be sufficient (at least assuming no parallel test execution; addressing this would require some port ranges support in the port allocation infrastructure). -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Thu Dec 29 13:13:49 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 29 Dec 2022 17:13:49 +0400 Subject: [PATCH 1 of 6] QUIC: ignore server address while looking up a connection In-Reply-To: <1038d7300c29eea02b47.1670578727@ip-10-1-18-114.eu-central-1.compute.internal> References: <1038d7300c29eea02b47.1670578727@ip-10-1-18-114.eu-central-1.compute.internal> Message-ID: > On 9 Dec 2022, at 13:38, Roman Arutyunyan wrote: > > # HG changeset patch > # User Roman Arutyunyan > # Date 1670322119 0 > # Tue Dec 06 10:21:59 2022 +0000 > # Branch quic > # Node ID 1038d7300c29eea02b47eac3f205e293b1e55f5b > # Parent b87a0dbc1150f415def5bc1e1f00d02b33519026 > QUIC: ignore server address while looking up a connection. > > The server connection check was copied from the common UDP code in c2f5d79cde64. > In QUIC it does not make much sense though. Technically client is not allowed > to migrate to a different server address. However, migrating within a single > wildcard listening does not seem to affect anything.
> > diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c > --- a/src/event/quic/ngx_event_quic_udp.c > +++ b/src/event/quic/ngx_event_quic_udp.c > @@ -13,7 +13,7 @@ > > static void ngx_quic_close_accepted_connection(ngx_connection_t *c); > static ngx_connection_t *ngx_quic_lookup_connection(ngx_listening_t *ls, > - ngx_str_t *key, struct sockaddr *local_sockaddr, socklen_t local_socklen); > + ngx_str_t *key); > > > void > @@ -156,7 +156,7 @@ ngx_quic_recvmsg(ngx_event_t *ev) > goto next; > } > > - c = ngx_quic_lookup_connection(ls, &key, local_sockaddr, local_socklen); > + c = ngx_quic_lookup_connection(ls, &key); > > if (c) { > > @@ -370,7 +370,6 @@ ngx_quic_rbtree_insert_value(ngx_rbtree_ > ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel) > { > ngx_int_t rc; > - ngx_connection_t *c, *ct; > ngx_rbtree_node_t **p; > ngx_quic_socket_t *qsock, *qsockt; > > @@ -387,19 +386,11 @@ ngx_quic_rbtree_insert_value(ngx_rbtree_ > } else { /* node->key == temp->key */ > > qsock = (ngx_quic_socket_t *) node; > - c = qsock->udp.connection; > - > qsockt = (ngx_quic_socket_t *) temp; > - ct = qsockt->udp.connection; > > rc = ngx_memn2cmp(qsock->sid.id, qsockt->sid.id, > qsock->sid.len, qsockt->sid.len); > > - if (rc == 0 && c->listening->wildcard) { > - rc = ngx_cmp_sockaddr(c->local_sockaddr, c->local_socklen, > - ct->local_sockaddr, ct->local_socklen, 1); > - } > - > p = (rc < 0) ? 
&temp->left : &temp->right; > } > > @@ -419,8 +410,7 @@ ngx_quic_rbtree_insert_value(ngx_rbtree_ > > > static ngx_connection_t * > -ngx_quic_lookup_connection(ngx_listening_t *ls, ngx_str_t *key, > - struct sockaddr *local_sockaddr, socklen_t local_socklen) > +ngx_quic_lookup_connection(ngx_listening_t *ls, ngx_str_t *key) > { > uint32_t hash; > ngx_int_t rc; > @@ -454,14 +444,8 @@ ngx_quic_lookup_connection > > rc = ngx_memn2cmp(key->data, qsock->sid.id, key->len, qsock->sid.len); > > - c = qsock->udp.connection; > - > - if (rc == 0 && ls->wildcard) { > - rc = ngx_cmp_sockaddr(local_sockaddr, local_socklen, > - c->local_sockaddr, c->local_socklen, 1); > - } > - > if (rc == 0) { > + c = qsock->udp.connection; > c->udp = &qsock->udp; > return c; > } While indeed it might be useful to allow migration within a wildcard, more work is needed to make this change do something usable. Please see the attached interim work; I still have concerns about how this could be done better, e.g. paths vs. sockets. # HG changeset patch # User Yu Zhu # Date 1672317960 -14400 # Thu Dec 29 16:46:00 2022 +0400 # Branch quic # Node ID 46e738a5c0ab9d98577b21e72447cc9a6e3e9784 # Parent 91ad1abfb2850f952bccb607e4c5843854576a09 QUIC: moved rtt and congestion control to ngx_quic_path_t. As per RFC 9002, section 6. Loss Detection: Loss detection is separate per packet number space, unlike RTT measurement and congestion control, because RTT and congestion control are properties of the path, whereas loss detection also relies upon key availability. No functional changes.
diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c --- a/src/event/quic/ngx_event_quic.c +++ b/src/event/quic/ngx_event_quic.c @@ -263,15 +263,6 @@ ngx_quic_new_connection(ngx_connection_t ngx_queue_init(&qc->free_frames); - qc->avg_rtt = NGX_QUIC_INITIAL_RTT; - qc->rttvar = NGX_QUIC_INITIAL_RTT / 2; - qc->min_rtt = NGX_TIMER_INFINITE; - qc->first_rtt = NGX_TIMER_INFINITE; - - /* - * qc->latest_rtt = 0 - */ - qc->pto.log = c->log; qc->pto.data = c; qc->pto.handler = ngx_quic_pto_handler; @@ -311,12 +302,6 @@ ngx_quic_new_connection(ngx_connection_t qc->streams.client_max_streams_uni = qc->tp.initial_max_streams_uni; qc->streams.client_max_streams_bidi = qc->tp.initial_max_streams_bidi; - qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, - ngx_max(2 * qc->tp.max_udp_payload_size, - 14720)); - qc->congestion.ssthresh = (size_t) -1; - qc->congestion.recovery_start = ngx_current_msec; - if (pkt->validated && pkt->retried) { qc->tp.retry_scid.len = pkt->dcid.len; qc->tp.retry_scid.data = ngx_pstrdup(c->pool, &pkt->dcid); diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -29,7 +29,7 @@ typedef struct { } ngx_quic_ack_stat_t; -static ngx_inline ngx_msec_t ngx_quic_lost_threshold(ngx_quic_connection_t *qc); +static ngx_inline ngx_msec_t ngx_quic_lost_threshold(ngx_quic_path_t *path); static void ngx_quic_rtt_sample(ngx_connection_t *c, ngx_quic_ack_frame_t *ack, enum ssl_encryption_level_t level, ngx_msec_t send_time); static ngx_int_t ngx_quic_handle_ack_frame_range(ngx_connection_t *c, @@ -48,11 +48,11 @@ static void ngx_quic_lost_handler(ngx_ev /* RFC 9002, 6.1.2. 
Time Threshold: kTimeThreshold, kGranularity */ static ngx_inline ngx_msec_t -ngx_quic_lost_threshold(ngx_quic_connection_t *qc) +ngx_quic_lost_threshold(ngx_quic_path_t *path) { ngx_msec_t thr; - thr = ngx_max(qc->latest_rtt, qc->avg_rtt); + thr = ngx_max(path->latest_rtt, path->avg_rtt); thr += thr >> 3; return ngx_max(thr, NGX_QUIC_TIME_GRANULARITY); @@ -179,21 +179,23 @@ ngx_quic_rtt_sample(ngx_connection_t *c, enum ssl_encryption_level_t level, ngx_msec_t send_time) { ngx_msec_t latest_rtt, ack_delay, adjusted_rtt, rttvar_sample; + ngx_quic_path_t *path; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; latest_rtt = ngx_current_msec - send_time; - qc->latest_rtt = latest_rtt; + path->latest_rtt = latest_rtt; - if (qc->min_rtt == NGX_TIMER_INFINITE) { - qc->min_rtt = latest_rtt; - qc->avg_rtt = latest_rtt; - qc->rttvar = latest_rtt / 2; - qc->first_rtt = ngx_current_msec; + if (path->min_rtt == NGX_TIMER_INFINITE) { + path->min_rtt = latest_rtt; + path->avg_rtt = latest_rtt; + path->rttvar = latest_rtt / 2; + path->first_rtt = ngx_current_msec; } else { - qc->min_rtt = ngx_min(qc->min_rtt, latest_rtt); + path->min_rtt = ngx_min(path->min_rtt, latest_rtt); ack_delay = (ack->delay << qc->ctp.ack_delay_exponent) / 1000; @@ -203,18 +205,19 @@ ngx_quic_rtt_sample(ngx_connection_t *c, adjusted_rtt = latest_rtt; - if (qc->min_rtt + ack_delay < latest_rtt) { + if (path->min_rtt + ack_delay < latest_rtt) { adjusted_rtt -= ack_delay; } - qc->avg_rtt += (adjusted_rtt >> 3) - (qc->avg_rtt >> 3); - rttvar_sample = ngx_abs((ngx_msec_int_t) (qc->avg_rtt - adjusted_rtt)); - qc->rttvar += (rttvar_sample >> 2) - (qc->rttvar >> 2); + path->avg_rtt += (adjusted_rtt >> 3) - (path->avg_rtt >> 3); + rttvar_sample = ngx_abs((ngx_msec_int_t) + (path->avg_rtt - adjusted_rtt)); + path->rttvar += (rttvar_sample >> 2) - (path->rttvar >> 2); } ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic rtt sample latest:%M min:%M avg:%M var:%M", - latest_rtt, 
qc->min_rtt, qc->avg_rtt, qc->rttvar); + latest_rtt, path->min_rtt, path->avg_rtt, path->rttvar); } @@ -317,7 +320,7 @@ ngx_quic_congestion_ack(ngx_connection_t } qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; blocked = (cg->in_flight >= cg->window) ? 1 : 0; @@ -428,13 +431,15 @@ ngx_quic_detect_lost(ngx_connection_t *c ngx_uint_t i, nlost; ngx_msec_t now, wait, thr, oldest, newest; ngx_queue_t *q; + ngx_quic_path_t *path; ngx_quic_frame_t *start; ngx_quic_send_ctx_t *ctx; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; now = ngx_current_msec; - thr = ngx_quic_lost_threshold(qc); + thr = ngx_quic_lost_threshold(path); /* send time of lost packets across all send contexts */ oldest = NGX_TIMER_INFINITE; @@ -471,7 +476,7 @@ ngx_quic_detect_lost(ngx_connection_t *c break; } - if (start->last > qc->first_rtt) { + if (start->last > path->first_rtt) { if (oldest == NGX_TIMER_INFINITE || start->last < oldest) { oldest = start->last; @@ -519,8 +524,8 @@ ngx_quic_pcg_duration(ngx_connection_t * qc = ngx_quic_get_connection(c); - duration = qc->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); + duration = qc->path->avg_rtt; + duration += ngx_max(4 * qc->path->rttvar, NGX_QUIC_TIME_GRANULARITY); duration += qc->ctp.max_ack_delay; duration *= NGX_QUIC_PERSISTENT_CONGESTION_THR; @@ -535,7 +540,7 @@ ngx_quic_persistent_congestion(ngx_conne ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; cg->recovery_start = ngx_current_msec; cg->window = qc->tp.max_udp_payload_size * 2; @@ -656,7 +661,7 @@ ngx_quic_congestion_lost(ngx_connection_ } qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; blocked = (cg->in_flight >= cg->window) ? 
1 : 0; @@ -721,7 +726,8 @@ ngx_quic_set_lost_timer(ngx_connection_t if (ctx->largest_ack != NGX_QUIC_UNSET_PN) { q = ngx_queue_head(&ctx->sent); f = ngx_queue_data(q, ngx_quic_frame_t, queue); - w = (ngx_msec_int_t) (f->last + ngx_quic_lost_threshold(qc) - now); + w = (ngx_msec_int_t) + (f->last + ngx_quic_lost_threshold(qc->path) - now); if (f->pnum <= ctx->largest_ack) { if (w < 0 || ctx->largest_ack - f->pnum >= NGX_QUIC_PKT_THR) { @@ -777,17 +783,19 @@ ngx_msec_t ngx_quic_pto(ngx_connection_t *c, ngx_quic_send_ctx_t *ctx) { ngx_msec_t duration; + ngx_quic_path_t *path; ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); + path = qc->path; /* RFC 9002, Appendix A.8. Setting the Loss Detection Timer */ - duration = qc->avg_rtt; + duration = path->avg_rtt; - duration += ngx_max(4 * qc->rttvar, NGX_QUIC_TIME_GRANULARITY); + duration += ngx_max(4 * path->rttvar, NGX_QUIC_TIME_GRANULARITY); duration <<= qc->pto_count; - if (qc->congestion.in_flight == 0) { /* no in-flight packets */ + if (path->congestion.in_flight == 0) { /* no in-flight packets */ return duration; } diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -80,6 +80,14 @@ struct ngx_quic_server_id_s { }; +typedef struct { + size_t in_flight; + size_t window; + size_t ssthresh; + ngx_msec_t recovery_start; +} ngx_quic_congestion_t; + + struct ngx_quic_path_s { ngx_queue_t queue; struct sockaddr *sockaddr; @@ -96,6 +104,15 @@ struct ngx_quic_path_s { uint64_t seqnum; ngx_str_t addr_text; u_char text[NGX_SOCKADDR_STRLEN]; + + ngx_msec_t first_rtt; + ngx_msec_t latest_rtt; + ngx_msec_t avg_rtt; + ngx_msec_t min_rtt; + ngx_msec_t rttvar; + + ngx_quic_congestion_t congestion; + unsigned validated:1; unsigned validating:1; unsigned limited:1; @@ -143,14 +160,6 @@ typedef struct { } ngx_quic_streams_t; -typedef struct { - size_t in_flight; - size_t window; 
- size_t ssthresh; - ngx_msec_t recovery_start; -} ngx_quic_congestion_t; - - /* * RFC 9000, 12.3. Packet Numbers * @@ -218,12 +227,6 @@ struct ngx_quic_connection_s { ngx_event_t path_validation; ngx_msec_t last_cc; - ngx_msec_t first_rtt; - ngx_msec_t latest_rtt; - ngx_msec_t avg_rtt; - ngx_msec_t min_rtt; - ngx_msec_t rttvar; - ngx_uint_t pto_count; ngx_queue_t free_frames; @@ -237,7 +240,6 @@ struct ngx_quic_connection_s { #endif ngx_quic_streams_t streams; - ngx_quic_congestion_t congestion; off_t received; diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -135,17 +135,26 @@ valid: { /* address did not change */ rst = 0; + + path->avg_rtt = prev->avg_rtt; + path->rttvar = prev->rttvar; + path->min_rtt = prev->min_rtt; + path->first_rtt = prev->first_rtt; + path->latest_rtt = prev->latest_rtt; + + ngx_memcpy(&path->congestion, &prev->congestion, + sizeof(ngx_quic_congestion_t)); } } if (rst) { - ngx_memzero(&qc->congestion, sizeof(ngx_quic_congestion_t)); + ngx_memzero(&path->congestion, sizeof(ngx_quic_congestion_t)); - qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, - ngx_max(2 * qc->tp.max_udp_payload_size, - 14720)); - qc->congestion.ssthresh = (size_t) -1; - qc->congestion.recovery_start = ngx_current_msec; + path->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, + ngx_max(2 * qc->tp.max_udp_payload_size, + 14720)); + path->congestion.ssthresh = (size_t) -1; + path->congestion.recovery_start = ngx_current_msec; } /* @@ -217,6 +226,21 @@ ngx_quic_new_path(ngx_connection_t *c, path->addr_text.len = ngx_sock_ntop(sockaddr, socklen, path->text, NGX_SOCKADDR_STRLEN, 1); + path->avg_rtt = NGX_QUIC_INITIAL_RTT; + path->rttvar = NGX_QUIC_INITIAL_RTT / 2; + path->min_rtt = NGX_TIMER_INFINITE; + path->first_rtt = NGX_TIMER_INFINITE; + + /* + * path->latest_rtt = 0 + */ + + 
path->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size, + ngx_max(2 * qc->tp.max_udp_payload_size, + 14720)); + path->congestion.ssthresh = (size_t) -1; + path->congestion.recovery_start = ngx_current_msec; + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic path seq:%uL created addr:%V", path->seqnum, &path->addr_text); diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c +++ b/src/event/quic/ngx_event_quic_output.c @@ -87,7 +87,7 @@ ngx_quic_output(ngx_connection_t *c) c->log->action = "sending frames"; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; + cg = &qc->path->congestion; in_flight = cg->in_flight; @@ -135,8 +135,8 @@ ngx_quic_create_datagrams(ngx_connection static u_char dst[NGX_QUIC_MAX_UDP_PAYLOAD_SIZE]; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; path = qc->path; + cg = &path->congestion; while (cg->in_flight < cg->window) { @@ -222,8 +222,7 @@ ngx_quic_commit_send(ngx_connection_t *c ngx_quic_connection_t *qc; qc = ngx_quic_get_connection(c); - - cg = &qc->congestion; + cg = &qc->path->congestion; while (!ngx_queue_empty(&ctx->sending)) { @@ -336,8 +335,8 @@ ngx_quic_create_segments(ngx_connection_ static u_char dst[NGX_QUIC_MAX_UDP_SEGMENT_BUF]; qc = ngx_quic_get_connection(c); - cg = &qc->congestion; path = qc->path; + cg = &path->congestion; ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_application); # HG changeset patch # User Sergey Kandaurov # Date 1672317973 -14400 # Thu Dec 29 16:46:13 2022 +0400 # Branch quic # Node ID 0c8d81ada23c23f079db5c9c7f60d0cd3768555a # Parent 46e738a5c0ab9d98577b21e72447cc9a6e3e9784 QUIC: path aware in-flight bytes accounting. In-flight bytes accounting on packet acknowledgement is made path aware, as per RFC 9000, section 9.4: Packets sent on the old path MUST NOT contribute to congestion control or RTT estimation for the new path. Previously, the in-flight counter could be decremented for the wrong path.
If the active path was switched on connection migration while packets contributing to the in-flight counter were still unacknowledged, an acknowledgement received after the congestion controller was reset resulted in the counter underflow on the new path. diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -312,6 +312,8 @@ ngx_quic_congestion_ack(ngx_connection_t { ngx_uint_t blocked; ngx_msec_t timer; + ngx_queue_t *q; + ngx_quic_path_t *path; ngx_quic_congestion_t *cg; ngx_quic_connection_t *qc; @@ -320,7 +322,27 @@ ngx_quic_congestion_ack(ngx_connection_t } qc = ngx_quic_get_connection(c); - cg = &qc->path->congestion; + +#if (NGX_SUPPRESS_WARN) + path = NULL; +#endif + + for (q = ngx_queue_head(&qc->paths); + q != ngx_queue_sentinel(&qc->paths); + q = ngx_queue_next(q)) + { + path = ngx_queue_data(q, ngx_quic_path_t, queue); + + if (path == f->path) { + break; + } + } + + if (q == ngx_queue_sentinel(&qc->paths)) { + return; + } + + cg = &path->congestion; blocked = (cg->in_flight >= cg->window) ?
1 : 0; diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c +++ b/src/event/quic/ngx_event_quic_output.c @@ -234,6 +234,7 @@ ngx_quic_commit_send(ngx_connection_t *c if (f->pkt_need_ack && !qc->closing) { ngx_queue_insert_tail(&ctx->sent, q); + f->path = qc->path; cg->in_flight += f->plen; } else { diff --git a/src/event/quic/ngx_event_quic_transport.h b/src/event/quic/ngx_event_quic_transport.h --- a/src/event/quic/ngx_event_quic_transport.h +++ b/src/event/quic/ngx_event_quic_transport.h @@ -265,6 +265,7 @@ struct ngx_quic_frame_s { ngx_uint_t type; enum ssl_encryption_level_t level; ngx_queue_t queue; + ngx_quic_path_t *path; uint64_t pnum; size_t plen; ngx_msec_t first; # HG changeset patch # User Sergey Kandaurov # Date 1672317976 -14400 # Thu Dec 29 16:46:16 2022 +0400 # Branch quic # Node ID c450d58ba0108311c6e8dfa3e1ef15b267e12c9a # Parent 0c8d81ada23c23f079db5c9c7f60d0cd3768555a QUIC: avoid sending in-flight packets on a not yet validated path. Sending in-flight packets on a not yet validated path followed by confirming a peer's ownership of its new address and the congestion controller reset, as per RFC 9000, section 9.4, resulted in the lost accounting of in-flight bytes and the bytes counter underflow on subsequent acknowledgement. In practice, this occurred with NEW_CONNECTION_ID sent in response to the peer's RETIRE_CONNECTION_ID, which is acknowledged after the new path is validated. Since we change the address to send to in response to highest-numbered packets, this measure should be sufficiently safe as an interim solution.
diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c +++ b/src/event/quic/ngx_event_quic_output.c @@ -153,6 +153,10 @@ ngx_quic_create_datagrams(ngx_connection ctx = &qc->send_ctx[i]; + if (ctx->level == ssl_encryption_application && !path->validated) { + break; + } + preserved_pnum[i] = ctx->pnum; if (ngx_quic_generate_ack(c, ctx) != NGX_OK) { # HG changeset patch # User Sergey Kandaurov # Date 1672317981 -14400 # Thu Dec 29 16:46:21 2022 +0400 # Branch quic # Node ID 1afa9e4c5dd7d2ff1b5f9b4fe521b429fc44e78c # Parent c450d58ba0108311c6e8dfa3e1ef15b267e12c9a QUIC: updating the local sockaddr when receiving a QUIC packet. In addition to saving the local sockaddr when establishing a new connection, this change updates the connection local sockaddr on the next received packet. The cached value will be used to set the property of a new path, which aims to be different when using a preferred address. diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -174,6 +174,24 @@ ngx_quic_recvmsg(ngx_event_t *ev) } #endif + qsock = ngx_quic_get_socket(c); + + ngx_memcpy(&qsock->sockaddr.sockaddr, sockaddr, socklen); + qsock->socklen = socklen; + + if (local_sockaddr == &lsa.sockaddr) { + local_sockaddr = ngx_palloc(c->pool, local_socklen); + if (local_sockaddr == NULL) { + ngx_quic_close_accepted_connection(c); + return; + } + + ngx_memcpy(local_sockaddr, &lsa, local_socklen); + } + + c->local_sockaddr = local_sockaddr; + c->local_socklen = local_socklen; + ngx_memzero(&buf, sizeof(ngx_buf_t)); buf.pos = buffer; @@ -181,11 +199,6 @@ ngx_quic_recvmsg(ngx_event_t *ev) buf.start = buf.pos; buf.end = buffer + sizeof(buffer); - qsock = ngx_quic_get_socket(c); - - ngx_memcpy(&qsock->sockaddr.sockaddr, sockaddr, socklen); - qsock->socklen = socklen; - c->udp->buffer = &buf; rev = 
c->read; # HG changeset patch # User Sergey Kandaurov # Date 1672317994 -14400 # Thu Dec 29 16:46:34 2022 +0400 # Branch quic # Node ID ec44672584c4ba56247cf756c3fb6eeac7bfd924 # Parent 1afa9e4c5dd7d2ff1b5f9b4fe521b429fc44e78c QUIC: sending using address tuple within a path where appropriate. This change fixes sending from the correct local address, which is now a property of the path. This is required to support connection migration caused by the peer changing the server address to the one provided in the preferred address. diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h --- a/src/event/quic/ngx_event_quic_connection.h +++ b/src/event/quic/ngx_event_quic_connection.h @@ -93,6 +93,8 @@ struct ngx_quic_path_s { ngx_queue_t queue; struct sockaddr *sockaddr; ngx_sockaddr_t sa; socklen_t socklen; + struct sockaddr *local_sockaddr; + socklen_t local_socklen; ngx_quic_client_id_t *cid; ngx_msec_t expires; ngx_uint_t tries; diff --git a/src/event/quic/ngx_event_quic_migration.c b/src/event/quic/ngx_event_quic_migration.c --- a/src/event/quic/ngx_event_quic_migration.c +++ b/src/event/quic/ngx_event_quic_migration.c @@ -226,6 +226,9 @@ ngx_quic_new_path(ngx_connection_t *c, path->addr_text.len = ngx_sock_ntop(sockaddr, socklen, path->text, NGX_SOCKADDR_STRLEN, 1); + path->local_sockaddr = c->local_sockaddr; + path->local_socklen = c->local_socklen; + path->avg_rtt = NGX_QUIC_INITIAL_RTT; path->rttvar = NGX_QUIC_INITIAL_RTT / 2; path->min_rtt = NGX_TIMER_INFINITE; diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c +++ b/src/event/quic/ngx_event_quic_output.c @@ -54,7 +54,7 @@ static void ngx_quic_init_packet(ngx_con ngx_quic_header_t *pkt, ngx_quic_path_t *path); static ngx_uint_t ngx_quic_get_padding_level(ngx_connection_t *c); static ssize_t ngx_quic_send(ngx_connection_t *c, u_char *buf, size_t len, - struct sockaddr *sockaddr, socklen_t socklen); + ngx_quic_path_t *path);
static void ngx_quic_set_packet_number(ngx_quic_header_t *pkt, ngx_quic_send_ctx_t *ctx); static size_t ngx_quic_path_limit(ngx_connection_t *c, ngx_quic_path_t *path, @@ -191,7 +191,7 @@ ngx_quic_create_datagrams(ngx_connection break; } - n = ngx_quic_send(c, dst, len, path->sockaddr, path->socklen); + n = ngx_quic_send(c, dst, len, path); if (n == NGX_ERROR) { return NGX_ERROR; @@ -729,16 +729,29 @@ ngx_quic_init_packet(ngx_connection_t *c static ssize_t ngx_quic_send(ngx_connection_t *c, u_char *buf, size_t len, - struct sockaddr *sockaddr, socklen_t socklen) + ngx_quic_path_t *path) { - ssize_t n; - struct iovec iov; - struct msghdr msg; + ssize_t n; + socklen_t socklen; + struct iovec iov; + struct msghdr msg; + struct sockaddr *sockaddr, *local_sockaddr; #if (NGX_HAVE_ADDRINFO_CMSG) - struct cmsghdr *cmsg; - char msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))]; + struct cmsghdr *cmsg; + char msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))]; #endif + if (path) { + sockaddr = path->sockaddr; + socklen = path->socklen; + local_sockaddr = path->local_sockaddr; + + } else { + sockaddr = c->sockaddr; + socklen = c->socklen; + local_sockaddr = c->local_sockaddr; + } + ngx_memzero(&msg, sizeof(struct msghdr)); iov.iov_len = len; @@ -759,7 +772,7 @@ ngx_quic_send(ngx_connection_t *c, u_cha cmsg = CMSG_FIRSTHDR(&msg); - msg.msg_controllen = ngx_set_srcaddr_cmsg(cmsg, c->local_sockaddr); + msg.msg_controllen = ngx_set_srcaddr_cmsg(cmsg, local_sockaddr); } #endif @@ -826,7 +839,7 @@ ngx_quic_negotiate_version(ngx_connectio "quic vnego packet to send len:%uz %*xs", len, len, buf); #endif - (void) ngx_quic_send(c, buf, len, c->sockaddr, c->socklen); + (void) ngx_quic_send(c, buf, len, NULL); return NGX_DONE; } @@ -877,7 +890,7 @@ ngx_quic_send_stateless_reset(ngx_connec return NGX_ERROR; } - (void) ngx_quic_send(c, buf, len, c->sockaddr, c->socklen); + (void) ngx_quic_send(c, buf, len, NULL); return NGX_DECLINED; } @@ -994,7 +1007,7 @@ 
ngx_quic_send_early_cc(ngx_connection_t return NGX_ERROR; } - if (ngx_quic_send(c, res.data, res.len, c->sockaddr, c->socklen) < 0) { + if (ngx_quic_send(c, res.data, res.len, NULL) < 0) { return NGX_ERROR; } @@ -1056,7 +1069,7 @@ ngx_quic_send_retry(ngx_connection_t *c, "quic packet to send len:%uz %xV", res.len, &res); #endif - len = ngx_quic_send(c, res.data, res.len, c->sockaddr, c->socklen); + len = ngx_quic_send(c, res.data, res.len, NULL); if (len < 0) { return NGX_ERROR; } @@ -1265,7 +1278,7 @@ ngx_quic_frame_sendto(ngx_connection_t * ctx->pnum++; - sent = ngx_quic_send(c, res.data, res.len, path->sockaddr, path->socklen); + sent = ngx_quic_send(c, res.data, res.len, path); if (sent < 0) { return NGX_ERROR; } # HG changeset patch # User Sergey Kandaurov # Date 1672318851 -14400 # Thu Dec 29 17:00:51 2022 +0400 # Branch quic # Node ID 179b31fac328ce1a31f3a062acfdac5325d0f8f8 # Parent ec44672584c4ba56247cf756c3fb6eeac7bfd924 QUIC: debugging local and client addresses in send/receive. XXX Not to be pushed. 
XXX diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c --- a/src/event/quic/ngx_event_quic_output.c +++ b/src/event/quic/ngx_event_quic_output.c @@ -776,6 +776,27 @@ ngx_quic_send(ngx_connection_t *c, u_cha } #endif +#if (NGX_DEBUG) + { + u_char text[NGX_SOCKADDR_STRLEN]; + ngx_str_t addr; + + addr.data = text; + + addr.len = ngx_sock_ntop(c->local_sockaddr, c->local_socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ngx_quic_send from %V", &addr); + + addr.len = ngx_sock_ntop(sockaddr, socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ngx_quic_send to %V", &addr); + } +#endif + n = ngx_sendmsg(c, &msg, 0); if (n < 0) { return n; diff --git a/src/event/quic/ngx_event_quic_udp.c b/src/event/quic/ngx_event_quic_udp.c --- a/src/event/quic/ngx_event_quic_udp.c +++ b/src/event/quic/ngx_event_quic_udp.c @@ -132,6 +132,27 @@ ngx_quic_recvmsg(ngx_event_t *ev) local_sockaddr = ls->sockaddr; local_socklen = ls->socklen; +#if (NGX_DEBUG) + { + u_char text[NGX_SOCKADDR_STRLEN]; + ngx_str_t addr; + + addr.data = text; + + addr.len = ngx_sock_ntop(local_sockaddr, local_socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, + "ngx_quic_recvmsg from %V", &addr); + + addr.len = ngx_sock_ntop(sockaddr, socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, + "ngx_quic_recvmsg to %V", &addr); + } +#endif + #if (NGX_HAVE_ADDRINFO_CMSG) if (ls->wildcard) { @@ -145,6 +166,20 @@ ngx_quic_recvmsg(ngx_event_t *ev) cmsg = CMSG_NXTHDR(&msg, cmsg)) { if (ngx_get_srcaddr_cmsg(cmsg, local_sockaddr) == NGX_OK) { + +#if (NGX_DEBUG) + { + ngx_str_t addr; + u_char text[NGX_SOCKADDR_STRLEN]; + + addr.len = ngx_sock_ntop(local_sockaddr, local_socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0, + "ngx_quic_recvmsg from %V", &addr); + } 
+#endif + break; } } diff --git a/src/os/unix/ngx_udp_sendmsg_chain.c b/src/os/unix/ngx_udp_sendmsg_chain.c --- a/src/os/unix/ngx_udp_sendmsg_chain.c +++ b/src/os/unix/ngx_udp_sendmsg_chain.c @@ -236,6 +236,27 @@ ngx_sendmsg_vec(ngx_connection_t *c, ngx } #endif +#if (NGX_DEBUG) + { + u_char text[NGX_SOCKADDR_STRLEN]; + ngx_str_t addr; + + addr.data = text; + + addr.len = ngx_sock_ntop(c->local_sockaddr, c->local_socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ngx_sendmsg from %V", &addr); + + addr.len = ngx_sock_ntop(c->sockaddr, c->socklen, + text, NGX_SOCKADDR_STRLEN, 1); + + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, + "ngx_sendmsg to %V", &addr); + } +#endif + return ngx_sendmsg(c, &msg, 0); } # HG changeset patch # User Sergey Kandaurov # Date 1672318864 -14400 # Thu Dec 29 17:01:04 2022 +0400 # Branch quic # Node ID a20332359df750541373d3c3eab0bd656c6f7117 # Parent 179b31fac328ce1a31f3a062acfdac5325d0f8f8 QUIC: preferred address support. The quic_preferred_address directive specifies one or two addresses to provide in the preferred_address transport parameter. 
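For context, usage of the new directive might look like the following sketch (hypothetical and untested; the addresses are documentation-range placeholders, and per its NGX_CONF_TAKE12 declaration the directive accepts one or two address:port arguments):

```nginx
http {
    server {
        listen 443 quic reuseport;

        # Addresses advertised to clients in the preferred_address
        # transport parameter (illustrative values only).
        quic_preferred_address 192.0.2.1:8443 [2001:db8::1]:8443;
    }
}
```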
diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h --- a/src/event/quic/ngx_event_quic.h +++ b/src/event/quic/ngx_event_quic.h @@ -21,6 +21,7 @@ #define NGX_QUIC_AV_KEY_LEN 32 #define NGX_QUIC_SR_TOKEN_LEN 16 +#define NGX_QUIC_PREFADDR_BASE_LEN 24 #define NGX_QUIC_MIN_INITIAL_SIZE 1200 @@ -69,6 +70,7 @@ typedef struct { ngx_flag_t disable_active_migration; ngx_msec_t timeout; ngx_str_t host_key; + u_char *preferred_address; size_t mtu; size_t stream_buffer_size; ngx_uint_t max_concurrent_streams_bidi; diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c --- a/src/event/quic/ngx_event_quic_ssl.c +++ b/src/event/quic/ngx_event_quic_ssl.c @@ -574,6 +574,39 @@ ngx_quic_init_connection(ngx_connection_ return NGX_ERROR; } + if (qc->conf->preferred_address) { + + qsock = ngx_quic_create_socket(c, qc); + if (qsock == NULL) { + return NGX_ERROR; + } + + if (ngx_quic_listen(c, qc, qsock) != NGX_OK) { + return NGX_ERROR; + } + + dcid.data = qsock->sid.id; + dcid.len = qsock->sid.len; + + p = ngx_palloc(c->pool, NGX_QUIC_PREFADDR_LEN); + if (p == NULL) { + return NGX_ERROR; + } + + qc->tp.preferred_address = p; + + p = ngx_cpymem(p, qc->conf->preferred_address, + NGX_QUIC_PREFADDR_BASE_LEN); + p = ngx_cpymem(p, &dcid.len, 1); + p = ngx_cpymem(p, dcid.data, dcid.len); + + if (ngx_quic_new_sr_token(c, &dcid, qc->conf->sr_token_key, p) + != NGX_OK) + { + return NGX_ERROR; + } + } + len = ngx_quic_create_transport_params(NULL, NULL, &qc->tp, &clen); /* always succeeds */ diff --git a/src/event/quic/ngx_event_quic_transport.c b/src/event/quic/ngx_event_quic_transport.c --- a/src/event/quic/ngx_event_quic_transport.c +++ b/src/event/quic/ngx_event_quic_transport.c @@ -2087,6 +2087,12 @@ ngx_quic_create_transport_params(u_char len += ngx_quic_varint_len(NGX_QUIC_SR_TOKEN_LEN); len += NGX_QUIC_SR_TOKEN_LEN; + if (tp->preferred_address) { + len += ngx_quic_varint_len(NGX_QUIC_TP_PREFERRED_ADDRESS); + len += 
ngx_quic_varint_len(NGX_QUIC_PREFADDR_LEN); + len += NGX_QUIC_PREFADDR_LEN; + } + if (pos == NULL) { return len; } @@ -2142,6 +2148,12 @@ ngx_quic_create_transport_params(u_char ngx_quic_build_int(&p, NGX_QUIC_SR_TOKEN_LEN); p = ngx_cpymem(p, tp->sr_token, NGX_QUIC_SR_TOKEN_LEN); + if (tp->preferred_address) { + ngx_quic_build_int(&p, NGX_QUIC_TP_PREFERRED_ADDRESS); + ngx_quic_build_int(&p, NGX_QUIC_PREFADDR_LEN); + p = ngx_cpymem(p, tp->preferred_address, NGX_QUIC_PREFADDR_LEN); + } + return p - pos; } diff --git a/src/event/quic/ngx_event_quic_transport.h b/src/event/quic/ngx_event_quic_transport.h --- a/src/event/quic/ngx_event_quic_transport.h +++ b/src/event/quic/ngx_event_quic_transport.h @@ -51,6 +51,10 @@ : (lvl == ssl_encryption_initial) ? "init" \ : (lvl == ssl_encryption_handshake) ? "hs" : "early" +#define NGX_QUIC_PREFADDR_LEN NGX_QUIC_PREFADDR_BASE_LEN \ + + 1 + NGX_QUIC_SERVER_CID_LEN \ + + NGX_QUIC_SR_TOKEN_LEN + #define NGX_QUIC_MAX_CID_LEN 20 #define NGX_QUIC_SERVER_CID_LEN NGX_QUIC_MAX_CID_LEN @@ -362,8 +366,7 @@ typedef struct { ngx_str_t retry_scid; u_char sr_token[NGX_QUIC_SR_TOKEN_LEN]; - /* TODO */ - void *preferred_address; + u_char *preferred_address; } ngx_quic_tp_t; diff --git a/src/http/v3/ngx_http_v3_module.c b/src/http/v3/ngx_http_v3_module.c --- a/src/http/v3/ngx_http_v3_module.c +++ b/src/http/v3/ngx_http_v3_module.c @@ -20,6 +20,8 @@ static char *ngx_http_quic_mtu(ngx_conf_ void *data); static char *ngx_http_quic_host_key(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_http_quic_preferred_address(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); static void *ngx_http_v3_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_v3_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); @@ -111,6 +113,13 @@ static ngx_command_t ngx_http_v3_comman offsetof(ngx_http_v3_srv_conf_t, quic.active_connection_id_limit), NULL }, + { ngx_string("quic_preferred_address"), + 
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_CONF_TAKE12, + ngx_http_quic_preferred_address, + NGX_HTTP_SRV_CONF_OFFSET, + 0, + NULL }, + ngx_null_command }; @@ -239,6 +248,7 @@ ngx_http_v3_create_srv_conf(ngx_conf_t * h3scf->hq = NGX_CONF_UNSET; #endif + h3scf->quic.preferred_address = NGX_CONF_UNSET_PTR; h3scf->quic.mtu = NGX_CONF_UNSET_SIZE; h3scf->quic.stream_buffer_size = NGX_CONF_UNSET_SIZE; h3scf->quic.max_concurrent_streams_bidi = NGX_CONF_UNSET_UINT; @@ -291,6 +301,10 @@ ngx_http_v3_merge_srv_conf(ngx_conf_t *c ngx_conf_merge_str_value(conf->quic.host_key, prev->quic.host_key, ""); + ngx_conf_merge_ptr_value(conf->quic.preferred_address, + prev->quic.preferred_address, + NULL); + ngx_conf_merge_uint_value(conf->quic.active_connection_id_limit, prev->quic.active_connection_id_limit, 2); @@ -455,6 +469,72 @@ failed: } +static char * +ngx_http_quic_preferred_address(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_v3_srv_conf_t *h3scf = conf; + + u_char *p; + ngx_str_t *value; + ngx_uint_t i; + ngx_addr_t addr; + ngx_quic_conf_t *qcf; + struct sockaddr_in *sin; +#if (NGX_HAVE_INET6) + struct sockaddr_in6 *sin6; +#endif + + qcf = &h3scf->quic; + + if (qcf->preferred_address != NGX_CONF_UNSET_PTR) { + return "is duplicate"; + } + + p = ngx_pcalloc(cf->pool, NGX_QUIC_PREFADDR_BASE_LEN); + if (p == NULL) { + return NGX_CONF_ERROR; + } + + qcf->preferred_address = p; + + value = cf->args->elts; + for (i = 1; i < cf->args->nelts; i++) { + + if (ngx_parse_addr_port(cf->pool, &addr, value[i].data, value[i].len) + != NGX_OK) + { + return "invalid value"; + } + + switch (addr.sockaddr->sa_family) { + +#if (NGX_HAVE_INET6) + case AF_INET6: + sin6 = (struct sockaddr_in6 *) addr.sockaddr; + + memcpy(&p[6], &sin6->sin6_addr.s6_addr, sizeof(struct in6_addr)); + memcpy(&p[22], &sin6->sin6_port, sizeof(in_port_t)); + + break; +#endif + + case AF_INET: + sin = (struct sockaddr_in *) addr.sockaddr; + + memcpy(&p[0], &sin->sin_addr, sizeof(struct in_addr)); + 
memcpy(&p[4], &sin->sin_port, sizeof(in_port_t)); + + break; + + default: + return "invalid value"; + } + } + + return NGX_CONF_OK; +} + + static void * ngx_http_v3_create_loc_conf(ngx_conf_t *cf) { diff --git a/src/stream/ngx_stream_quic_module.c b/src/stream/ngx_stream_quic_module.c --- a/src/stream/ngx_stream_quic_module.c +++ b/src/stream/ngx_stream_quic_module.c @@ -19,6 +19,8 @@ static char *ngx_stream_quic_merge_srv_c static char *ngx_stream_quic_mtu(ngx_conf_t *cf, void *post, void *data); static char *ngx_stream_quic_host_key(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +static char *ngx_stream_quic_preferred_address(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); static ngx_conf_post_t ngx_stream_quic_mtu_post = { ngx_stream_quic_mtu }; @@ -74,6 +76,13 @@ static ngx_command_t ngx_stream_quic_co offsetof(ngx_quic_conf_t, active_connection_id_limit), NULL }, + { ngx_string("quic_preferred_address"), + NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_TAKE12, + ngx_stream_quic_preferred_address, + NGX_STREAM_SRV_CONF_OFFSET, + 0, + NULL }, + ngx_null_command }; @@ -175,6 +184,7 @@ ngx_stream_quic_create_srv_conf(ngx_conf */ conf->timeout = NGX_CONF_UNSET_MSEC; + conf->preferred_address = NGX_CONF_UNSET_PTR; conf->mtu = NGX_CONF_UNSET_SIZE; conf->stream_buffer_size = NGX_CONF_UNSET_SIZE; conf->max_concurrent_streams_bidi = NGX_CONF_UNSET_UINT; @@ -217,6 +227,10 @@ ngx_stream_quic_merge_srv_conf(ngx_conf_ ngx_conf_merge_str_value(conf->host_key, prev->host_key, ""); + ngx_conf_merge_ptr_value(conf->preferred_address, + prev->preferred_address, + NULL); + ngx_conf_merge_uint_value(conf->active_connection_id_limit, conf->active_connection_id_limit, 2); @@ -375,3 +389,66 @@ failed: return NGX_CONF_ERROR; } + + +static char * +ngx_stream_quic_preferred_address(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_quic_conf_t *qcf = conf; + + u_char *p; + ngx_str_t *value; + ngx_uint_t i; + ngx_addr_t addr; + struct sockaddr_in *sin; +#if (NGX_HAVE_INET6) 
+ struct sockaddr_in6 *sin6; +#endif + + if (qcf->preferred_address != NGX_CONF_UNSET_PTR) { + return "is duplicate"; + } + + p = ngx_pcalloc(cf->pool, NGX_QUIC_PREFADDR_BASE_LEN); + if (p == NULL) { + return NGX_CONF_ERROR; + } + + qcf->preferred_address = p; + + value = cf->args->elts; + for (i = 1; i < cf->args->nelts; i++) { + + if (ngx_parse_addr_port(cf->pool, &addr, value[i].data, value[i].len) + != NGX_OK) + { + return "invalid value"; + } + + switch (addr.sockaddr->sa_family) { + +#if (NGX_HAVE_INET6) + case AF_INET6: + sin6 = (struct sockaddr_in6 *) addr.sockaddr; + + memcpy(&p[6], &sin6->sin6_addr.s6_addr, sizeof(struct in6_addr)); + memcpy(&p[22], &sin6->sin6_port, sizeof(in_port_t)); + + break; +#endif + + case AF_INET: + sin = (struct sockaddr_in *) addr.sockaddr; + + memcpy(&p[0], &sin->sin_addr, sizeof(struct in_addr)); + memcpy(&p[4], &sin->sin_port, sizeof(in_port_t)); + + break; + + default: + return "invalid value"; + } + } + + return NGX_CONF_OK; +} -- Sergey Kandaurov From jordanc.carter at outlook.com Fri Dec 30 22:23:12 2022 From: jordanc.carter at outlook.com (J Carter) Date: Fri, 30 Dec 2022 22:23:12 +0000 Subject: [PATCH] Dynamic rate limiting for limit_req module Message-ID: Hello, Please find below a patch to enable dynamic rate limiting for limit_req module. /* ----------------------------EXAMPLE---------------------------------*/ geo $traffic_tier { default free; 127.0.1.0/24 basic; 127.0.2.0/24 premium; } map $traffic_tier $rate { free 1r/m; basic 2r/m; premium 1r/s; } limit_req_zone $binary_remote_addr zone=one:10m rate=$rate; server { limit_req zone=one; listen 80; server_name localhost; return 200; } curl --interface 127.0.X.X localhost /* ----------------------------NGINX-TESTS---------------------------------*/ debian at debian:~/projects/nginx-merc/nginx-tests$ prove limit_req* limit_req2.t ......... ok limit_req_delay.t .... ok limit_req_dry_run.t .. ok limit_req.t .......... ok All tests successful. 
Files=4, Tests=40, 4 wallclock secs ( 0.04 usr 0.00 sys + 0.28 cusr 0.11 csys = 0.43 CPU) Result: PASS /* ----------------------------CHANGES OF BEHAVIOUR---------------------------------*/ It is backwards compatible with the syntax of existing configurations: either rate=$variable can be used, or the existing syntax of rate=xy/s. - 'rate=' can be assigned an empty value, which results in an unlimited (maximum) rate limit value. - 'rate=' set to an invalid value also results in an unlimited (maximum) rate limit value. - The value of rate is now capped to prevent integer overflow in certain operations. - The maximum time between consecutive requests used in the calculations is now capped at 60s (60000ms) to prevent integer overflow/underflow. /* ----------------------------USE-CASES---------------------------------*/ Allow rate limits for a given user to be determined by mapping trusted request values to a rate, such as: - Source IP CIDR. - Client certificate identifiers. - JWT claims. This could also be performed dynamically at runtime with a key_val zone to alter rate limits on the fly without a reload.
/* ----------------------------PATCHBOMB---------------------------------*/

# HG changeset patch
# User jordanc.carter at outlook.com
# Date 1672437935 0
#      Fri Dec 30 22:05:35 2022 +0000
# Branch dynamic-rate-limiting
# Node ID b2bd50efa81e5aeeb9b8f84ee0af34463add07fa
# Parent  07b0bee87f32be91a33210bc06973e07c4c1dac9
Changed 'rate=' to complex value and added limits to the rate value
to prevent integer overflow/underflow

diff -r 07b0bee87f32 -r b2bd50efa81e src/http/modules/ngx_http_limit_req_module.c
--- a/src/http/modules/ngx_http_limit_req_module.c	Wed Dec 21 14:53:27 2022 +0300
+++ b/src/http/modules/ngx_http_limit_req_module.c	Fri Dec 30 22:05:35 2022 +0000
@@ -26,6 +26,7 @@
     /* integer value, 1 corresponds to 0.001 r/s */
     ngx_uint_t                   excess;
     ngx_uint_t                   count;
+    ngx_uint_t                   rate;
     u_char                       data[1];
 } ngx_http_limit_req_node_t;
 
@@ -41,7 +42,7 @@
     ngx_http_limit_req_shctx_t  *sh;
     ngx_slab_pool_t             *shpool;
     /* integer value, 1 corresponds to 0.001 r/s */
-    ngx_uint_t                   rate;
+    ngx_http_complex_value_t     rate;
     ngx_http_complex_value_t     key;
     ngx_http_limit_req_node_t   *node;
 } ngx_http_limit_req_ctx_t;
 
@@ -66,9 +67,9 @@
 static void ngx_http_limit_req_delay(ngx_http_request_t *r);
 static ngx_int_t ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit,
-    ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account);
+    ngx_uint_t hash, ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate);
 static ngx_msec_t ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits,
-    ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit);
+    ngx_uint_t n, ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate);
 static void ngx_http_limit_req_unlock(ngx_http_limit_req_limit_t *limits,
     ngx_uint_t n);
 static void ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx,
@@ -195,10 +196,13 @@
 ngx_http_limit_req_handler(ngx_http_request_t *r)
 {
     uint32_t                     hash;
-    ngx_str_t                    key;
+    ngx_str_t                    key, rate_s;
     ngx_int_t                    rc;
     ngx_uint_t                   n, excess;
+    ngx_uint_t                   scale;
+    ngx_uint_t                   rate;
     ngx_msec_t                   delay;
+    u_char                      *p;
     ngx_http_limit_req_ctx_t    *ctx;
     ngx_http_limit_req_conf_t   *lrcf;
     ngx_http_limit_req_limit_t  *limit, *limits;
@@ -243,10 +247,34 @@
 
         hash = ngx_crc32_short(key.data, key.len);
 
+        if (ngx_http_complex_value(r, &ctx->rate, &rate_s) != NGX_OK) {
+            ngx_http_limit_req_unlock(limits, n);
+            return NGX_HTTP_INTERNAL_SERVER_ERROR;
+        }
+
+        scale = 1;
+        rate = NGX_ERROR;
+
+        if (rate_s.len > 8) {
+
+            rate = (ngx_uint_t) ngx_atoi(rate_s.data + 5, rate_s.len - 8);
+
+            p = rate_s.data + rate_s.len - 3;
+            if (ngx_strncmp(p, "r/m", 3) == 0) {
+                scale = 60;
+            } else if (ngx_strncmp(p, "r/s", 3) != 0) {
+                rate = NGX_ERROR;
+            }
+        }
+
+        rate = (rate != 0 && rate < NGX_MAX_INT_T_VALUE / 60000000 - 1001) ?
+                   rate * 1000 / scale :
+                   NGX_MAX_INT_T_VALUE / 60000000 - 1001;
+
         ngx_shmtx_lock(&ctx->shpool->mutex);
 
         rc = ngx_http_limit_req_lookup(limit, hash, &key, &excess,
-                                       (n == lrcf->limits.nelts - 1));
+                                       (n == lrcf->limits.nelts - 1), rate);
 
         ngx_shmtx_unlock(&ctx->shpool->mutex);
 
@@ -291,7 +319,7 @@
             excess = 0;
         }
 
-        delay = ngx_http_limit_req_account(limits, n, &excess, &limit);
+        delay = ngx_http_limit_req_account(limits, n, &excess, &limit, rate);
 
         if (!delay) {
             r->main->limit_req_status = NGX_HTTP_LIMIT_REQ_PASSED;
 
@@ -403,7 +431,7 @@
 static ngx_int_t
 ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit, ngx_uint_t hash,
-    ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account)
+    ngx_str_t *key, ngx_uint_t *ep, ngx_uint_t account, ngx_uint_t rate)
 {
     size_t                      size;
     ngx_int_t                   rc, excess;
@@ -412,7 +440,6 @@
     ngx_rbtree_node_t          *node, *sentinel;
     ngx_http_limit_req_ctx_t   *ctx;
     ngx_http_limit_req_node_t  *lr;
-
     now = ngx_current_msec;
 
     ctx = limit->shm_zone->data;
@@ -446,12 +473,14 @@
 
             if (ms < -60000) {
                 ms = 1;
-
             } else if (ms < 0) {
                 ms = 0;
+            } else if (ms > 60000) {
+                ms = 60000;
             }
 
-            excess = lr->excess - ctx->rate * ms / 1000 + 1000;
+            lr->rate = rate;
+            excess = lr->excess - lr->rate * ms / 1000 + 1000;
 
             if (excess < 0) {
                 excess = 0;
@@ -510,6 +539,7 @@
 
     lr->len = (u_short) key->len;
     lr->excess = 0;
+    lr->rate = rate;
 
     ngx_memcpy(lr->data, key->data, key->len);
 
@@ -534,7 +564,7 @@
 
 static ngx_msec_t
 ngx_http_limit_req_account(ngx_http_limit_req_limit_t *limits, ngx_uint_t n,
-    ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit)
+    ngx_uint_t *ep, ngx_http_limit_req_limit_t **limit, ngx_uint_t rate)
 {
     ngx_int_t                   excess;
     ngx_msec_t                  now, delay, max_delay;
@@ -543,13 +573,13 @@
     ngx_http_limit_req_node_t  *lr;
 
     excess = *ep;
+    max_delay = 0;
 
     if ((ngx_uint_t) excess <= (*limit)->delay) {
         max_delay = 0;
 
     } else {
-        ctx = (*limit)->shm_zone->data;
-        max_delay = (excess - (*limit)->delay) * 1000 / ctx->rate;
+        max_delay = (excess - (*limit)->delay) * 1000 / rate;
     }
 
     while (n--) {
@@ -570,9 +600,11 @@
 
         } else if (ms < 0) {
             ms = 0;
+        } else if (ms > 60000) {
+            ms = 60000;
         }
 
-        excess = lr->excess - ctx->rate * ms / 1000 + 1000;
+        excess = lr->excess - lr->rate * ms / 1000 + 1000;
 
         if (excess < 0) {
             excess = 0;
@@ -593,7 +625,7 @@
             continue;
         }
 
-        delay = (excess - limits[n].delay) * 1000 / ctx->rate;
+        delay = (excess - limits[n].delay) * 1000 / lr->rate;
 
         if (delay > max_delay) {
             max_delay = delay;
@@ -674,9 +706,11 @@
 
         if (ms < 60000) {
             return;
+        } else {
+            ms = 60000;
         }
 
-        excess = lr->excess - ctx->rate * ms / 1000;
+        excess = lr->excess - lr->rate * ms / 1000;
 
         if (excess > 0) {
             return;
@@ -833,14 +867,12 @@
 ngx_http_limit_req_zone(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
 {
     u_char                            *p;
-    size_t                             len;
     ssize_t                            size;
     ngx_str_t                         *value, name, s;
-    ngx_int_t                          rate, scale;
     ngx_uint_t                         i;
     ngx_shm_zone_t                    *shm_zone;
     ngx_http_limit_req_ctx_t          *ctx;
-    ngx_http_compile_complex_value_t   ccv;
+    ngx_http_compile_complex_value_t   key, rate;
 
     value = cf->args->elts;
 
@@ -849,19 +881,17 @@
         return NGX_CONF_ERROR;
     }
 
-    ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t));
+    ngx_memzero(&key, sizeof(ngx_http_compile_complex_value_t));
 
-    ccv.cf = cf;
-    ccv.value = &value[1];
-    ccv.complex_value = &ctx->key;
+    key.cf = cf;
+    key.value = &value[1];
+    key.complex_value = &ctx->key;
 
-    if (ngx_http_compile_complex_value(&ccv) != NGX_OK) {
+    if (ngx_http_compile_complex_value(&key) != NGX_OK) {
         return NGX_CONF_ERROR;
     }
 
     size = 0;
-    rate = 1;
-    scale = 1;
     name.len = 0;
 
     for (i = 2; i < cf->args->nelts; i++) {
@@ -902,25 +932,14 @@
 
         if (ngx_strncmp(value[i].data, "rate=", 5) == 0) {
 
-            len = value[i].len;
-            p = value[i].data + len - 3;
-
-            if (ngx_strncmp(p, "r/s", 3) == 0) {
-                scale = 1;
-                len -= 3;
+            ngx_memzero(&rate, sizeof(ngx_http_compile_complex_value_t));
+            rate.cf = cf;
+            rate.value = &value[i];
+            rate.complex_value = &ctx->rate;
 
-            } else if (ngx_strncmp(p, "r/m", 3) == 0) {
-                scale = 60;
-                len -= 3;
-            }
-
-            rate = ngx_atoi(value[i].data + 5, len - 5);
-            if (rate <= 0) {
-                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
-                                   "invalid rate \"%V\"", &value[i]);
+            if (ngx_http_compile_complex_value(&rate) != NGX_OK) {
                 return NGX_CONF_ERROR;
             }
-
             continue;
         }
 
@@ -936,8 +955,6 @@
         return NGX_CONF_ERROR;
     }
 
-    ctx->rate = rate * 1000 / scale;
-
     shm_zone = ngx_shared_memory_add(cf, &name, size,
                                      &ngx_http_limit_req_module);
     if (shm_zone == NULL) {
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru  Sat Dec 31 14:35:59 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 31 Dec 2022 17:35:59 +0300
Subject: [PATCH] Added warning about redefinition of listen socket protocol
 options
Message-ID:

# HG changeset patch
# User Maxim Dounin
# Date 1672497248 -10800
#      Sat Dec 31 17:34:08 2022 +0300
# Node ID c215d5cf25732ece1819cf1cd48ebb480bb642c7
# Parent  07b0bee87f32be91a33210bc06973e07c4c1dac9
Added warning about redefinition of listen socket protocol options.

The "listen" directive in the http module can be used multiple times
in different server blocks.  Originally, it was supposed to be specified
once with various socket options, and without any parameters in virtual
server blocks.  For example:

    server {
        listen 80 backlog=1024;
        server_name foo;
        ...
    }

    server {
        listen 80;
        server_name bar;
        ...
    }

    server {
        listen 80;
        server_name bazz;
        ...
    }

The address part of the syntax ("address[:port]" / "port" / "unix:path")
uniquely identifies the listening socket, and therefore is enough for
name-based virtual servers (to let nginx know that the virtual server
accepts requests on the listening socket in question).

To ensure that listening options do not conflict between virtual servers,
they were allowed only once.  For example, the following configuration
will be rejected ("duplicate listen options for 0.0.0.0:80 in ..."):

    server {
        listen 80 backlog=1024;
        server_name foo;
        ...
    }

    server {
        listen 80 backlog=512;
        server_name bar;
        ...
    }

At some point it was, however, noticed, that it is sometimes convenient
to repeat some options for clarity.  In nginx 0.8.51 the "ssl" parameter
was allowed to be specified multiple times, e.g.:

    server {
        listen 443 ssl backlog=1024;
        server_name foo;
        ...
    }

    server {
        listen 443 ssl;
        server_name bar;
        ...
    }

    server {
        listen 443 ssl;
        server_name bazz;
        ...
    }

This approach makes configuration more readable, since SSL sockets are
immediately visible in the configuration.  If this is not needed, just
the address can still be used.

Later, additional protocol-specific options similar to "ssl" were
introduced, notably "http2" and "proxy_protocol".  With these options,
one can write:

    server {
        listen 443 ssl backlog=1024;
        server_name foo;
        ...
    }

    server {
        listen 443 http2;
        server_name bar;
        ...
    }

    server {
        listen 443 proxy_protocol;
        server_name bazz;
        ...
    }

The resulting socket will use ssl, http2, and proxy_protocol, but this
is not really obvious from the configuration.

To ensure such misleading configurations are not allowed, nginx now
warns as long as the "listen" directive is used with options different
from the options previously used if these are potentially confusing.
In particular, the following configurations are allowed:

    server { listen 8401 ssl backlog=1024; server_name foo; }
    server { listen 8401 ssl; server_name bar; }
    server { listen 8401 ssl; server_name bazz; }

    server { listen 8402 ssl http2 backlog=1024; server_name foo; }
    server { listen 8402 ssl; server_name bar; }
    server { listen 8402 ssl; server_name bazz; }

    server { listen 8403 ssl; server_name bar; }
    server { listen 8403 ssl; server_name bazz; }
    server { listen 8403 ssl http2; server_name foo; }

    server { listen 8404 ssl http2 backlog=1024; server_name foo; }
    server { listen 8404 http2; server_name bar; }
    server { listen 8404 http2; server_name bazz; }

    server { listen 8405 ssl http2 backlog=1024; server_name foo; }
    server { listen 8405 ssl http2; server_name bar; }
    server { listen 8405 ssl http2; server_name bazz; }

    server { listen 8406 ssl; server_name foo; }
    server { listen 8406; server_name bar; }
    server { listen 8406; server_name bazz; }

And the following configurations will generate warnings:

    server { listen 8501 ssl http2 backlog=1024; server_name foo; }
    server { listen 8501 http2; server_name bar; }
    server { listen 8501 ssl; server_name bazz; }

    server { listen 8502 backlog=1024; server_name foo; }
    server { listen 8502 ssl; server_name bar; }

    server { listen 8503 ssl; server_name foo; }
    server { listen 8503 http2; server_name bar; }

    server { listen 8504 ssl; server_name foo; }
    server { listen 8504 http2; server_name bar; }
    server { listen 8504 proxy_protocol; server_name bazz; }

    server { listen 8505 ssl http2 proxy_protocol; server_name foo; }
    server { listen 8505 ssl http2; server_name bar; }
    server { listen 8505 ssl; server_name bazz; }

    server { listen 8506 ssl http2; server_name foo; }
    server { listen 8506 ssl; server_name bar; }
    server { listen 8506; server_name bazz; }

    server { listen 8507 ssl; server_name bar; }
    server { listen 8507; server_name bazz; }
    server { listen 8507 ssl http2; server_name foo; }

    server { listen 8508 ssl; server_name bar; }
    server { listen 8508; server_name bazz; }
    server { listen 8508 ssl backlog=1024; server_name foo; }

    server { listen 8509; server_name bazz; }
    server { listen 8509 ssl; server_name bar; }
    server { listen 8509 ssl backlog=1024; server_name foo; }

The basic idea is that at most two sets of protocol options are allowed:
the main one (with socket options, if any), and a shorter one, with
options being a subset of the main options, repeated for clarity.  As
long as the shorter set of protocol options is used, all listen
directives except the main one should use it.

diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c
--- a/src/http/ngx_http.c
+++ b/src/http/ngx_http.c
@@ -1228,7 +1228,8 @@ static ngx_int_t
 ngx_http_add_addresses(ngx_conf_t *cf, ngx_http_core_srv_conf_t *cscf,
     ngx_http_conf_port_t *port, ngx_http_listen_opt_t *lsopt)
 {
-    ngx_uint_t             i, default_server, proxy_protocol;
+    ngx_uint_t             i, default_server, proxy_protocol,
+                           protocols, protocols_prev;
     ngx_http_conf_addr_t  *addr;
 #if (NGX_HTTP_SSL)
     ngx_uint_t             ssl;
@@ -1264,12 +1265,18 @@ ngx_http_add_addresses(ngx_conf_t *cf, n
         default_server = addr[i].opt.default_server;
 
         proxy_protocol = lsopt->proxy_protocol || addr[i].opt.proxy_protocol;
+        protocols = lsopt->proxy_protocol;
+        protocols_prev = addr[i].opt.proxy_protocol;
 
 #if (NGX_HTTP_SSL)
         ssl = lsopt->ssl || addr[i].opt.ssl;
+        protocols |= lsopt->ssl << 1;
+        protocols_prev |= addr[i].opt.ssl << 1;
 #endif
 #if (NGX_HTTP_V2)
         http2 = lsopt->http2 || addr[i].opt.http2;
+        protocols |= lsopt->http2 << 2;
+        protocols_prev |= addr[i].opt.http2 << 2;
 #endif
 
         if (lsopt->set) {
@@ -1299,6 +1306,55 @@ ngx_http_add_addresses(ngx_conf_t *cf, n
             addr[i].default_server = cscf;
         }
 
+        /* check for conflicting protocol options */
+
+        if ((protocols | protocols_prev) != protocols_prev) {
+
+            /* options added */
+
+            if ((addr[i].opt.set && !lsopt->set)
+                || addr[i].protocols_changed
+                || (protocols | protocols_prev) != protocols)
+            {
+                ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+                                   "protocol options redefined for %V",
+                                   &addr[i].opt.addr_text);
+            }
+
+            addr[i].protocols = protocols_prev;
+            addr[i].protocols_set = 1;
+            addr[i].protocols_changed = 1;
+
+        } else if ((protocols_prev | protocols) != protocols) {
+
+            /* options removed */
+
+            if (lsopt->set
+                || (addr[i].protocols_set && protocols != addr[i].protocols))
+            {
+                ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+                                   "protocol options redefined for %V",
+                                   &addr[i].opt.addr_text);
+            }
+
+            addr[i].protocols = protocols;
+            addr[i].protocols_set = 1;
+            addr[i].protocols_changed = 1;
+
+        } else {
+
+            /* the same options */
+
+            if (lsopt->set && addr[i].protocols_changed) {
+                ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
+                                   "protocol options redefined for %V",
+                                   &addr[i].opt.addr_text);
+            }
+
+            addr[i].protocols = protocols;
+            addr[i].protocols_set = 1;
+        }
+
         addr[i].opt.default_server = default_server;
         addr[i].opt.proxy_protocol = proxy_protocol;
 #if (NGX_HTTP_SSL)
@@ -1355,6 +1411,9 @@ ngx_http_add_address(ngx_conf_t *cf, ngx
     }
 
     addr->opt = *lsopt;
+    addr->protocols = 0;
+    addr->protocols_set = 0;
+    addr->protocols_changed = 0;
     addr->hash.buckets = NULL;
    addr->hash.size = 0;
     addr->wc_head = NULL;
diff --git a/src/http/ngx_http_core_module.h b/src/http/ngx_http_core_module.h
--- a/src/http/ngx_http_core_module.h
+++ b/src/http/ngx_http_core_module.h
@@ -274,6 +274,10 @@ typedef struct {
 typedef struct {
     ngx_http_listen_opt_t      opt;
 
+    unsigned                   protocols:3;
+    unsigned                   protocols_set:1;
+    unsigned                   protocols_changed:1;
+
     ngx_hash_t                 hash;
     ngx_hash_wildcard_t       *wc_head;
     ngx_hash_wildcard_t       *wc_tail;