From devashi.tandon at appsentinels.ai Tue Jan 4 10:09:39 2022
From: devashi.tandon at appsentinels.ai (Devashi Tandon)
Date: Tue, 4 Jan 2022 10:09:39 +0000
Subject: Pushing configuration to NGINX from a custom server
Message-ID: 

Hi,

We want to push some string-based configuration (maybe in JSON format) to the NGINX server from our own server. The requirement is NOT to put the configuration in the nginx.conf file; instead, we will only specify the configuration server in nginx.conf. Whenever we update the configuration on our server, we want NGINX to pick up the new configuration. What is the safest and most secure way to achieve this?

Any help is appreciated.

Thanks,
Devashi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From guillaume.bilic at gmail.com Tue Jan 4 16:11:19 2022
From: guillaume.bilic at gmail.com (Guillaume Bilic)
Date: Tue, 4 Jan 2022 17:11:19 +0100
Subject: [nginx-quic] revision 6ccf3867959a seems to break http3 response
Message-ID: 

Hi all,

The latest revisions of the quic branch no longer work in h3 with Chrome and Firefox.
Revision 6ccf3867959a "refactored ngx_quic_order_bufs() and ngx_quic_split_bufs()" seems to be the culprit.

Regards,
Guillaume
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mandeep-singh.chhabra at thalesgroup.com Thu Jan 6 11:52:44 2022
From: mandeep-singh.chhabra at thalesgroup.com (CHHABRA Mandeep Singh)
Date: Thu, 6 Jan 2022 11:52:44 +0000
Subject: [PATCH] Add provision to fetch certificate chain from Nginx
In-Reply-To: 
References: 
Message-ID: 

Hi Maxim,

Could you please share your thoughts on the mail below.

Regards,
Mandeep

-----Original Message-----
From: CHHABRA Mandeep Singh
Sent: Thursday, December 30, 2021 3:05 PM
To: nginx-devel at nginx.org
Subject: RE: [PATCH] Add provision to fetch certificate chain from Nginx

Hi Maxim,

Thanks for giving time to this.
As far as my understanding goes, the intermediate CA certificates are not required to be known to the server. It is only the trust anchor (the root CA certificate) which is required to be known and trusted on the server. And in our case also, the root CA certificate is trusted for the web.

I have tried to give a brief overview of the problem in the following section.

We have a product which supports multi-tenancy and uses Nginx as a reverse proxy. There are different isolated domains which share the same trust anchor, but there could be differences in the client certificate chain in different domains. There is a need to do some extra validations based on the CAs in the chain. To be more precise, we have an option to specify whether a CA could be used to do client or user authentication. There is a possibility that in one domain a CA is enabled for client authentication and in another the same CA is disabled.

So, we need a way to get the certificate chain from Nginx to do these extra validations, apart from what Nginx already does, i.e. checking whether the chain can be verified. But there is no way to get the chain today. This could be a common problem applicable to multiple use cases, depending upon how a product wants its CAs to behave. And we think it could be a good-to-have feature in Nginx.

Please let me know if I should specify more details on the problem.

Regards
Mandeep

-----Original Message-----
From: nginx-devel On Behalf Of Maxim Dounin
Sent: Tuesday, December 28, 2021 9:28 PM
To: nginx-devel at nginx.org
Subject: Re: [PATCH] Add provision to fetch certificate chain from Nginx

Hello!
On Tue, Dec 28, 2021 at 11:56:50AM +0000, CHHABRA Mandeep Singh wrote: > # HG changeset patch > # User Mandeep Singh Chhabra > # Date 1640691269 -19800 > # Tue Dec 28 17:04:29 2021 +0530 > # Node ID 9baaef976ac80f05107b60801ebe6559cdb2cbc6 > # Parent b002ad258f1d70924dc13d8f4bc0cc44362f0d0a > Add provision to fetch certificate chain from Nginx > > The change adds a new variable ('ssl_client_cert_chain') to the > existing set of variables. It is being part of the http's SSL module. > With this, the middleware can fetch the certificate chain from Nginx > using the variable mentioned. The variable returns a verified chain of > certificates. > If the trust anchor is a root certificate (self signed) which has > issued an intermediate certificate and the client certificate is > issued by the intermediate certificate. The variable > ('ssl_client_cert_chain') will return three certificates (rootCert -> > intermediateCert -> clientCert) Thanks for the patch. You may want to be more specific about which problem you are trying to solve. In particular, all root and intermediate certificates are expected to be known on the server. If they aren't for some reason, it might be a good idea to clarify why they aren't known or reconsider particular configuration. [...] > + p = s->data; > + > + for (i = 0; i < cert_chain.len - 1; i++) { > + *p++ = cert_chain.data[i]; > + if (cert_chain.data[i] == LF) { > + *p++ = '\t'; > + } Just a side note: certainly we are not going to introduce new variables using this syntax. Also it might be a good idea to fix various style issues in the patch, but probably it make sense to resolve the "why it should be needed" question first. [...] 
-- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From winshining at 163.com Thu Jan 6 14:48:34 2022 From: winshining at 163.com (winshining) Date: Thu, 6 Jan 2022 22:48:34 +0800 (CST) Subject: Pushing configuration to NGINX from a custom server In-Reply-To: References: Message-ID: <3afeec8d.c4be.17e2fdcea3d.Coremail.winshining@163.com> Hello, Devashi Tandon! This 3rd party module may help you: https://github.com/weibocom/nginx-upsync-module At 2022-01-04 20:00:00, nginx-devel-request at nginx.org wrote: >Send nginx-devel mailing list submissions to > nginx-devel at nginx.org > >To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.nginx.org/mailman/listinfo/nginx-devel >or, via email, send a message with subject or body 'help' to > nginx-devel-request at nginx.org > >You can reach the person managing the list at > nginx-devel-owner at nginx.org > >When replying, please edit your Subject line so it is more specific >than "Re: Contents of nginx-devel digest..." > > >Today's Topics: > > 1. Pushing configuration to NGINX from a custom server > (Devashi Tandon) > > >---------------------------------------------------------------------- > >Message: 1 >Date: Tue, 4 Jan 2022 10:09:39 +0000 >From: Devashi Tandon >To: "nginx-devel at nginx.org" >Subject: Pushing configuration to NGINX from a custom server >Message-ID: > > >Content-Type: text/plain; charset="iso-8859-1" > >Hi, > >We want to push some string based configuration (maybe in json format) to NGINX server, from our server. The requirement is to NOT put the configuration in nginx.conf file but we will be specifying the configuration server in the nginx.conf file. Whenever we update the configuration in our server, we want NGINX to get updated with the new configuration. What is the safest and most secure way to achieve this? > >Any help is appreciated. 
> >Thanks, >Devashi >-------------- next part -------------- >An HTML attachment was scrubbed... >URL: > >------------------------------ > >Subject: Digest Footer > >_______________________________________________ >nginx-devel mailing list >nginx-devel at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx-devel > >------------------------------ > >End of nginx-devel Digest, Vol 147, Issue 2 >******************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.dymond at microsoft.com Mon Jan 10 10:10:37 2022 From: max.dymond at microsoft.com (Max Dymond) Date: Mon, 10 Jan 2022 10:10:37 +0000 Subject: "the stream output chain is empty" while writing a stream filter module Message-ID: Hi, I'm trying to use NGINX as an SSL "man in the middle" to (a) terminate an SSL stream, (b) parse a binary protocol inside that stream to accept or reject messages, (c) connect over SSL to another target and send all the accepted messages. After a bit of investigation I'm trying my hand at a stream filter module, and I'm trying to do a very simple module which: - splits incoming messages into individual buffers (the protocol has well-defined packet sizes) - creates chain links for each of these messages. // Create a new buffer and chain link. link = ngx_alloc_chain_link(c->pool); if (link == NULL) { return NGX_ERROR; } b = ngx_create_temp_buf(c->pool, frame_length); link->buf = b; // Copy the frame into the buffer. 
ngx_memcpy(link->buf->start, pos, frame_length); link->buf->last += frame_length; link->buf->tag = cl->buf->tag; link->buf->flush = cl->buf->flush; This process seems to work but I'm getting an error message when I pass my chain to the next filter: return ngx_stream_next_filter(s, out, from_upstream); gives the stream output chain is empty while proxying and sending to upstream, client: 172.20.0.5, server: 0.0.0.0:9043, upstream: "172.20.0.2:9042", bytes from/to client:9/61, bytes from/to upstream:61/9 I've got diagnostics which verify the (single) chainlink I'm passing to the next filter: 2022/01/10 09:26:14 [info] 24#24: *3 MD: OUT chainlink ....3FB0: buf: ....ADE0 pos:....3FC0 last:....3FC9 file_pos:0 file_last:0 start:....3FC0 end:....3FC9 tag:000055D260A4A880 file:0000000000000000 temporary:1 memory:0 mmap:0 recycled:0 in_file:0 flush:1 last_buf:0 last_in_chain:0 temp_file:0 (I'm copying the tag and the flush bit from the original chain in the hopes of it doing something, but no dice). 
When I turn debug mode on logs are a little confusing as well; for the incoming chain: 2022/01/10 10:03:30 [info] 24#24: *5 MD: IN chainlink ....7C00: buf: ....7C10 pos:....91E0 last:....91E9 file_pos:0 file_last:0 start:0000000000000000 end:0000000000000000 tag:000055928B63E880 file:0000000000000000 temporary:1 memory:0 mmap:0 recycled:0 in_file:0 flush:1 last_buf:0 last_in_chain:0 temp_file:0 2022/01/10 10:03:30 [info] 24#24: *5 MD: OUT chainlink ....7C60: buf: ....EA70 pos:....7C70 last:....7C79 file_pos:0 file_last:0 start:.....7C70 end:.....7C79 tag:000055928B63E880 file:0000000000000000 temporary:1 memory:0 mmap:0 recycled:0 in_file:0 flush:1 last_buf:0 last_in_chain:0 temp_file:0 2022/01/10 10:03:30 [debug] 24#24: *5 write new buf t:1 f:0 00007FFB71B57C70, pos 00007FFB71B57C70, size: 9 file: 0, size: 0 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter: l:0 f:1 s:9 2022/01/10 10:03:30 [debug] 24#24: *5 writev: 9 of 9 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter 0000000000000000 2022/01/10 10:03:30 [debug] 24#24: *5 event timer: 4, old: 515169565, new: 515169565 2022/01/10 10:03:30 [debug] 24#24: *5 recv: eof:0, avail:-1 2022/01/10 10:03:30 [debug] 24#24: *5 recv: fd:12 61 of 16384 2022/01/10 10:03:30 [debug] 24#24: *5 write new buf t:1 f:0 0000000000000000, pos 00007FFB70AADC10, size: 61 file: 0, size: 0 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter: l:0 f:1 s:61 2022/01/10 10:03:30 [debug] 24#24: *5 SSL to write: 61 2022/01/10 10:03:30 [debug] 24#24: *5 SSL_write: 61 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter 0000000000000000 2022/01/10 10:03:30 [debug] 24#24: *5 event timer: 4, old: 515169565, new: 515169665 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter: l:0 f:0 s:0 2022/01/10 10:03:30 [alert] 24#24: *5 the stream output chain is empty Am I missing some step somewhere that's causing this to fail? Does anyone have an example of a simple stream filter module which repackages buffers? 
From arut at nginx.com Mon Jan 10 12:49:46 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 10 Jan 2022 15:49:46 +0300 Subject: [nginx-quic] revision 6ccf3867959a seems to break http3 response In-Reply-To: References: Message-ID: <20220110124946.jab2l524qd6gnxph@Romans-MacBook-Pro.local> Hi Guillaume, On Tue, Jan 04, 2022 at 05:11:19PM +0100, Guillaume Bilic wrote: > Hi all, > > Latest revisions of quic branch does not work anymore in h3 using chrome > and firefox. > Revision 6ccf3867959a "refactored ngx_quic_order_bufs() and > ngx_quic_split_bufs()" seems to be the culprit. Can you provide the debug log? You can send it to the mailing list or directly to my email. -- Roman Arutyunyan From hle at owl.eu.com Mon Jan 10 13:28:52 2022 From: hle at owl.eu.com (Hugo Lefeuvre) Date: Mon, 10 Jan 2022 13:28:52 +0000 Subject: test suite failure with 1.20.1 In-Reply-To: References: <20210703081853.xhj6huxl2tytvsih@behemoth.owl.eu.com.local> Message-ID: <20220110132852.r6d5xzaib4ztk5m5@behemoth.owl.eu.com.local> Hi Maxim, On Sun, Jul 04, 2021 at 04:29:43AM +0300, Maxim Dounin wrote: > Hello! > > On Sat, Jul 03, 2021 at 09:18:53AM +0100, Hugo Lefeuvre wrote: > > > I am trying to run the test suite, but it seems that, no matter how I build > > Nginx, it systematically fails. > > > > It seems that, most (all?) of the time, tests fail because Nginx returns > > 403 error codes, e.g.: > > > > ./ssi_waited.t ............................. 1/3 > > # Failed test 'waited non-active' > > # at ./ssi_waited.t line 60. > > # 'HTTP/1.1 403 Forbidden > > # Server: nginx/1.21.0 > > # Date: Sat, 03 Jul 2021 08:06:00 GMT > > # Content-Type: text/html > > # Connection: close > > # > > # > > # 403 Forbidden > > # > > #

> > # <center><h1>403 Forbidden</h1></center>
> > # <hr><center>nginx/1.21.0</center>
> > # </body>
> > # </html>
> > # '
> > # doesn't match '(?^m:^xFIRSTxWAITEDxSECONDx$)'
> >
> > The runtime configuration is the default one from nginx-1.20.1.tar.gz
> > (conf/nginx.conf).
> >
> > I must be doing something wrong with the build or run time configuration,
> > but I cannot pinpoint what. Any idea?
>
> Test output suggests that you are testing nginx 1.21.0, not
> 1.20.1. It looks like you are testing the wrong nginx binary, not
> the one you think you are testing. This might be the reason, for
> example, if you have some 3rd party modules compiled in and these
> modules reject requests for some reason.
>
> For additional details try looking into test details - in
> particular, test suite can leave full test configuration and logs
> for you with TEST_NGINX_LEAVE environment variable set, or simply
> cat the error log to the terminal before removing files with
> TEST_NGINX_CATLOG. See README of the test suite for details.

Sorry for the late answer, but better late than never!

The reason was that I was running the test suite as root. Switching to a non-root user did the trick. I was indeed trying to run 1.21.1, not 1.20.1, that was a typo in the title.

I think that the test suite README should mention that it cannot be run as root.

Thanks for your answer!

Best,
Hugo

--
Hugo Lefeuvre (hle) | www.owl.eu.com
RSA4096_ 360B 03B3 BF27 4F4D 7A3F D5E8 14AA 1EB8 A247 3DFD
ed25519_ 37B2 6D38 0B25 B8A2 6B9F 3A65 A36F 5357 5F2D DC4C
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 659 bytes
Desc: not available
URL: 

From mdounin at mdounin.ru Mon Jan 10 18:28:42 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 10 Jan 2022 21:28:42 +0300
Subject: "the stream output chain is empty" while writing a stream filter module
In-Reply-To: 
References: 
Message-ID: 

Hello!
On Mon, Jan 10, 2022 at 10:10:37AM +0000, Max Dymond wrote: > I'm trying to use NGINX as an SSL "man in the middle" to (a) > terminate an SSL stream, (b) parse a binary protocol inside that > stream to accept or reject messages, (c) connect over SSL to > another target and send all the accepted messages. > > After a bit of investigation I'm trying my hand at a stream > filter module, and I'm trying to do a very simple module which: > - splits incoming messages into individual buffers (the protocol > has well-defined packet sizes) > - creates chain links for each of these messages. > > // Create a new buffer and chain link. > link = ngx_alloc_chain_link(c->pool); > if (link == NULL) { > return NGX_ERROR; > } > > b = ngx_create_temp_buf(c->pool, frame_length); > link->buf = b; > > // Copy the frame into the buffer. > ngx_memcpy(link->buf->start, pos, frame_length); > link->buf->last += frame_length; > link->buf->tag = cl->buf->tag; > link->buf->flush = cl->buf->flush; > > > This process seems to work but I'm getting an error message when > I pass my chain to the next filter: > > return ngx_stream_next_filter(s, out, from_upstream); > > gives > > the stream output chain is empty while proxying and sending to upstream, client: 172.20.0.5, server: 0.0.0.0:9043, upstream: "172.20.0.2:9042", bytes from/to client:9/61, bytes from/to upstream:61/9 > > I've got diagnostics which verify the (single) chainlink I'm > passing to the next filter: > > 2022/01/10 09:26:14 [info] 24#24: *3 MD: OUT chainlink ....3FB0: buf: ....ADE0 pos:....3FC0 last:....3FC9 file_pos:0 file_last:0 start:....3FC0 end:....3FC9 tag:000055D260A4A880 file:0000000000000000 temporary:1 memory:0 mmap:0 recycled:0 in_file:0 flush:1 last_buf:0 last_in_chain:0 temp_file:0 > > (I'm copying the tag and the flush bit from the original chain > in the hopes of it doing something, but no dice). Note that copying tags is generally wrong. 
Tags are used to identify buffers owned by a module, and wrong tag may result in incorrect behaviour if a module will try to modify/reuse such buffer. I don't think this can be a problem in your particular case, yet it's something to avoid. > When I turn debug mode on logs are a little confusing as well; > for the incoming chain: > > 2022/01/10 10:03:30 [info] 24#24: *5 MD: IN chainlink ....7C00: buf: ....7C10 pos:....91E0 last:....91E9 file_pos:0 file_last:0 start:0000000000000000 end:0000000000000000 tag:000055928B63E880 file:0000000000000000 temporary:1 memory:0 mmap:0 recycled:0 in_file:0 flush:1 last_buf:0 last_in_chain:0 temp_file:0 > > 2022/01/10 10:03:30 [info] 24#24: *5 MD: OUT chainlink ....7C60: buf: ....EA70 pos:....7C70 last:....7C79 file_pos:0 file_last:0 start:.....7C70 end:.....7C79 tag:000055928B63E880 file:0000000000000000 temporary:1 memory:0 mmap:0 recycled:0 in_file:0 flush:1 last_buf:0 last_in_chain:0 temp_file:0 > 2022/01/10 10:03:30 [debug] 24#24: *5 write new buf t:1 f:0 00007FFB71B57C70, pos 00007FFB71B57C70, size: 9 file: 0, size: 0 > 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter: l:0 f:1 s:9 > 2022/01/10 10:03:30 [debug] 24#24: *5 writev: 9 of 9 > 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter 0000000000000000 > 2022/01/10 10:03:30 [debug] 24#24: *5 event timer: 4, old: 515169565, new: 515169565 Writing of your buffer is done here. 
Following lines correspond to the next chunk of data: > 2022/01/10 10:03:30 [debug] 24#24: *5 recv: eof:0, avail:-1 > 2022/01/10 10:03:30 [debug] 24#24: *5 recv: fd:12 61 of 16384 > 2022/01/10 10:03:30 [debug] 24#24: *5 write new buf t:1 f:0 0000000000000000, pos 00007FFB70AADC10, size: 61 file: 0, size: 0 > 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter: l:0 f:1 s:61 > 2022/01/10 10:03:30 [debug] 24#24: *5 SSL to write: 61 > 2022/01/10 10:03:30 [debug] 24#24: *5 SSL_write: 61 > 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter 0000000000000000 > 2022/01/10 10:03:30 [debug] 24#24: *5 event timer: 4, old: 515169565, new: 515169665 (As far as I understand, this is a chunk of data in the opposite direction: note it uses SSL_write() instead of writev().) And here is another call of the stream write filter, which results in the error: > 2022/01/10 10:03:30 [debug] 24#24: *5 stream write filter: l:0 f:0 s:0 > 2022/01/10 10:03:30 [alert] 24#24: *5 the stream output chain is empty Given no read operations before the call, most likely it's called again by nginx because you haven't correctly marked original buffers as sent. > Am I missing some step somewhere that's causing this to fail? > Does anyone have an example of a simple stream filter module > which repackages buffers? Some example of a stream filter module can be found in njs code, see here: http://hg.nginx.org/njs/file/9b112a44e540/nginx/ngx_stream_js_module.c#l585 Note the "ctx->buf->pos = ctx->buf->last;" line, which marks the original buffer as fully sent: http://hg.nginx.org/njs/file/9b112a44e540/nginx/ngx_stream_js_module.c#l648 Hope this helps. 
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Jan 10 23:05:31 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Jan 2022 02:05:31 +0300 Subject: test suite failure with 1.20.1 In-Reply-To: <20220110132852.r6d5xzaib4ztk5m5@behemoth.owl.eu.com.local> References: <20210703081853.xhj6huxl2tytvsih@behemoth.owl.eu.com.local> <20220110132852.r6d5xzaib4ztk5m5@behemoth.owl.eu.com.local> Message-ID: Hello! On Mon, Jan 10, 2022 at 01:28:52PM +0000, Hugo Lefeuvre wrote: > Hi Maxim, > > On Sun, Jul 04, 2021 at 04:29:43AM +0300, Maxim Dounin wrote: > > Hello! > > > > On Sat, Jul 03, 2021 at 09:18:53AM +0100, Hugo Lefeuvre wrote: > > > > > I am trying to run the test suite, but it seems that, no matter how I build > > > Nginx, it systematically fails. > > > > > > It seems that, most (all?) of the time, tests fail because Nginx returns > > > 403 error codes, e.g.: > > > > > > ./ssi_waited.t ............................. 1/3 > > > # Failed test 'waited non-active' > > > # at ./ssi_waited.t line 60. > > > # 'HTTP/1.1 403 Forbidden > > > # Server: nginx/1.21.0 > > > # Date: Sat, 03 Jul 2021 08:06:00 GMT > > > # Content-Type: text/html > > > # Connection: close > > > # > > > # > > > # 403 Forbidden > > > # > > > #

> > > # <center><h1>403 Forbidden</h1></center>
> > > # <hr><center>nginx/1.21.0</center>
> > > # </body>
> > > # </html>
> > > # '
> > > # doesn't match '(?^m:^xFIRSTxWAITEDxSECONDx$)'
> > >
> > > The runtime configuration is the default one from nginx-1.20.1.tar.gz
> > > (conf/nginx.conf).
> > >
> > > I must be doing something wrong with the build or run time configuration,
> > > but I cannot pinpoint what. Any idea?
> >
> > Test output suggests that you are testing nginx 1.21.0, not
> > 1.20.1. It looks like you are testing the wrong nginx binary, not
> > the one you think you are testing. This might be the reason, for
> > example, if you have some 3rd party modules compiled in and these
> > modules reject requests for some reason.
> >
> > For additional details try looking into test details - in
> > particular, test suite can leave full test configuration and logs
> > for you with TEST_NGINX_LEAVE environment variable set, or simply
> > cat the error log to the terminal before removing files with
> > TEST_NGINX_CATLOG. See README of the test suite for details.
>
> Sorry for the late answer, but better late than never!
>
> The reason was that I was running the test suite as root. Switching to a
> non-root user did the trick. I was indeed trying to run 1.21.1, not 1.20.1,
> that was a typo in the title.
>
> I think that the test suite README should mention that it cannot be run as
> root.

Ah, that certainly explains the failure.

The test suite can be run as root, but, given that nginx switches to a non-privileged user by default (https://nginx.org/r/user), and the temporary directory is only readable by the owner, running the test suite as root requires some additional tuning for most of the tests to work, e.g.:

# TEST_NGINX_GLOBALS="user root wheel;" prove ssi_waited.t

The fact that the test suite by default is expected to be run under a normal user is already in the README, though maybe in somewhat obscure form: note the "$ " prompt in the usage example.
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Jan 11 01:12:03 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Jan 2022 01:12:03 +0000 Subject: [nginx] Avoid sending "Connection: keep-alive" when shutting down. Message-ID: details: https://hg.nginx.org/nginx/rev/96ae8e57b3dd branches: changeset: 7993:96ae8e57b3dd user: Maxim Dounin date: Tue Jan 11 02:23:49 2022 +0300 description: Avoid sending "Connection: keep-alive" when shutting down. When a worker process is shutting down, keepalive is not used: this is checked before the ngx_http_set_keepalive() call in ngx_http_finalize_connection(). Yet the "Connection: keep-alive" header was still sent, even if we know that the worker process is shutting down, potentially resulting in additional requests being sent to the connection which is going to be closed anyway. While clients are expected to be able to handle asynchronous close events (see ticket #1022), it is certainly possible to send the "Connection: close" header instead, informing the client that the connection is going to be closed and potentially saving some unneeded work. With this change, we additionally check for worker process shutdown just before sending response headers, and disable keepalive accordingly. 
diffstat: src/http/ngx_http_header_filter_module.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r e2d07e4ec636 -r 96ae8e57b3dd src/http/ngx_http_header_filter_module.c --- a/src/http/ngx_http_header_filter_module.c Thu Dec 30 01:08:46 2021 +0300 +++ b/src/http/ngx_http_header_filter_module.c Tue Jan 11 02:23:49 2022 +0300 @@ -197,6 +197,10 @@ ngx_http_header_filter(ngx_http_request_ } } + if (r->keepalive && (ngx_terminate || ngx_exiting)) { + r->keepalive = 0; + } + len = sizeof("HTTP/1.x ") - 1 + sizeof(CRLF) - 1 /* the end of the header */ + sizeof(CRLF) - 1; From mdounin at mdounin.ru Tue Jan 11 01:12:15 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Jan 2022 04:12:15 +0300 Subject: [PATCH] HTTP: keepalive_graceful_close support In-Reply-To: References: Message-ID: Hello! On Fri, Dec 31, 2021 at 02:24:13PM +0000, 幼麟 封 wrote: > You are right, we can’t completely avoid this kind of problem, > but we can reduce some. Maybe "keepalive_graceful_close" is not > exact, it should be "wait_keepalive_on_exit". I mean, with this > patch, I want to optimize the connection closure problem caused > by the exit of the work process, is an optimization for a > specific scene. By setting a shorter expiration time for the > keepalived connection on the client side than the > "keepalive_timeout" on the nginx side, we can really reduce the > number of client retries. Sure enough, by waiting longer before closing the keepalive connection we can reduce retries. But this delays worker shutdown and therefore comes at cost, and this cost might be bigger than saved client retries. Further, this won't completely eliminate retries anyway, even if clients are configured to use shorter timeouts than nginx: there are cases when nginx closes keepalive connections before keepalive_timeout expires, notably when there are no free worker connections. 
I've committed the patch to disable keepalive as long as we know it cannot be used when sending response headers. This should significantly reduce retries in practical cases, and comes at (almost) no cost. Thanks for prodding this. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Tue Jan 11 13:03:44 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 11 Jan 2022 13:03:44 +0000 Subject: [njs] Fixed fuzzing target bug introduced in 4d4657128baf (0.7.1). Message-ID: details: https://hg.nginx.org/njs/rev/abbf77fcd111 branches: changeset: 1799:abbf77fcd111 user: Dmitry Volyntsev date: Tue Jan 11 13:02:33 2022 +0000 description: Fixed fuzzing target bug introduced in 4d4657128baf (0.7.1). Previously, njs_process_script() took vm pointer from console object, but after 4d4657128baf the object is not initialized in LLVMFuzzerTestOneInput(). The fix is to always pass vm pointer explicitly. This also closes #456 issue on Github. diffstat: src/njs_shell.c | 29 +++++++++++++++++------------ 1 files changed, 17 insertions(+), 12 deletions(-) diffs (105 lines): diff -r 9b112a44e540 -r abbf77fcd111 src/njs_shell.c --- a/src/njs_shell.c Wed Dec 29 18:26:40 2021 +0000 +++ b/src/njs_shell.c Tue Jan 11 13:02:33 2022 +0000 @@ -88,8 +88,8 @@ typedef struct { static njs_int_t njs_console_init(njs_vm_t *vm, njs_console_t *console); static njs_int_t njs_externals_init(njs_vm_t *vm); static njs_vm_t *njs_create_vm(njs_opts_t *opts, njs_vm_opt_t *vm_options); -static njs_int_t njs_process_script(njs_opts_t *opts, - njs_console_t *console, const njs_str_t *script); +static njs_int_t njs_process_script(njs_vm_t *vm, njs_opts_t *opts, + void *runtime, const njs_str_t *script); #ifndef NJS_FUZZER_TARGET @@ -307,7 +307,7 @@ main(int argc, char **argv) if (vm != NULL) { command.start = (u_char *) opts.command; command.length = njs_strlen(opts.command); - ret = njs_process_script(&opts, vm_options.external, &command); + ret = njs_process_script(vm, &opts, vm_options.external, 
&command); njs_vm_destroy(vm); } @@ -612,7 +612,7 @@ njs_process_file(njs_opts_t *opts, njs_v } } - ret = njs_process_script(opts, vm_options->external, &script); + ret = njs_process_script(vm, opts, vm_options->external, &script); if (ret != NJS_OK) { ret = NJS_ERROR; goto done; @@ -662,7 +662,6 @@ LLVMFuzzerTestOneInput(const uint8_t* da vm_options.init = 1; vm_options.backtrace = 0; vm_options.ops = &njs_console_ops; - vm_options.external = &njs_console; vm = njs_create_vm(&opts, &vm_options); @@ -670,7 +669,7 @@ LLVMFuzzerTestOneInput(const uint8_t* da script.length = size; script.start = (u_char *) data; - (void) njs_process_script(&opts, vm_options.external, &script); + (void) njs_process_script(vm, &opts, NULL, &script); njs_vm_destroy(vm); } @@ -834,12 +833,20 @@ njs_output(njs_opts_t *opts, njs_vm_t *v static njs_int_t -njs_process_events(njs_console_t *console) +njs_process_events(void *runtime) { njs_ev_t *ev; njs_queue_t *events; + njs_console_t *console; njs_queue_link_t *link; + if (runtime == NULL) { + njs_stderror("njs_process_events(): no runtime\n"); + return NJS_ERROR; + } + + console = runtime; + events = &console->posted_events; for ( ;; ) { @@ -863,14 +870,12 @@ njs_process_events(njs_console_t *consol static njs_int_t -njs_process_script(njs_opts_t *opts, njs_console_t *console, +njs_process_script(njs_vm_t *vm, njs_opts_t *opts, void *runtime, const njs_str_t *script) { u_char *start, *end; - njs_vm_t *vm; njs_int_t ret; - vm = console->vm; start = script->start; end = start + script->length; @@ -897,7 +902,7 @@ njs_process_script(njs_opts_t *opts, njs break; } - ret = njs_process_events(console); + ret = njs_process_events(runtime); if (njs_slow_path(ret != NJS_OK)) { njs_stderror("njs_process_events() failed\n"); ret = NJS_ERROR; @@ -962,7 +967,7 @@ njs_interactive_shell(njs_opts_t *opts, if (line.length != 0) { add_history((char *) line.start); - njs_process_script(opts, vm_options->external, &line); + njs_process_script(vm, opts, 
vm_options->external, &line); } /* editline allocs a new buffer every time. */ From mdounin at mdounin.ru Tue Jan 11 20:41:21 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Jan 2022 23:41:21 +0300 Subject: [PATCH] Add provision to fetch certificate chain from Nginx In-Reply-To: References: Message-ID: Hello! On Thu, Dec 30, 2021 at 09:35:26AM +0000, CHHABRA Mandeep Singh wrote: > As far as my understanding goes, the intermediate CA > certificates are not required to be known to the server. > It is only the trust anchor(the root CA certificate) which is > required to be known and trusted on the sever. > And in our case also, the root CA certificate is trusted for the > web. Sure, intermediate certificates are not required to be known by the server and can be provided by the client in the extra certificates during SSL/TLS handshake. Such configurations are believed to be extremely rare though: in most cases intermediate certificates are well known and can be easily configured on the server side, and this saves extra configuration on clients. Further, it is not really possible to properly retrieve such client-provided intermediate certificates after the initial handshake: these certificates are not saved to the session data and therefore not available after session reuse, see 7653:8409f9df6219 (http://hg.nginx.org/nginx/rev/8409f9df6219). Hence the original question about the problem you trying to solve. > I have tried to give a brief of the problem in the following > section. > > We have a product which supports multi-tenancy and uses Nginx as > a reverse proxy. > There are different isolated domains which share the same trust > anchor. But there could be difference > in the client certificate chain in different domains. There is a > need to do some extra validations based on the CAs in the chain. > To be more precise, we have option to specify if a CA could be > used to > do client or user authentication. 
> There is a possibility that in one domain, a CA is enabled for
> client authentication and in another, the same CA is disabled.
>
> So, we need a way to get the certificate chain from Nginx, to do
> these extra validations, apart from what Nginx does i.e.
> checking if the chain could be verified.
> But there is no way to get the chain, today.

Not sure I've understood your description correctly, but from what I understood it looks like you are not trying to retrieve client-provided intermediate certificates, but instead trying to do additional checking on the chain which contains the client-provided end certificate and the chain constructed by nginx from the intermediate certificates known on the server during certificate verification.

That is, you have something like:

- Root CA, Intermediate1 CA, Intermediate2 CA - all known on the server;
- Client certs signed by Intermediate1 CA;
- Client certs signed by Intermediate2 CA.

And you want to allow access only to certificates signed by Intermediate1 CA in some cases, and only certificates signed by Intermediate2 CA in other cases. Is that correct?

Such a problem seems to be solvable by just looking at $ssl_client_escaped_cert and re-creating the certificate chain from the list of CA certificates known on the server. In simple cases (assuming all intermediate CA DNs are unique) just checking the $ssl_client_i_dn variable would be enough.

Does it look reasonable, or have I misunderstood something?
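For illustration only, the $ssl_client_i_dn-based check might be sketched like this (untested; the issuer DN string and file paths are invented, not taken from your setup):

```nginx
# http{} context: map the issuer DN of the verified client
# certificate to a flag (hypothetical DN, for illustration only).
map $ssl_client_i_dn $client_ca_allowed {
    default                                     0;
    "CN=Intermediate1 CA,O=Example Corp,C=US"   1;
}

server {
    listen 443 ssl;
    server_name domain1.example.com;

    ssl_certificate        /etc/nginx/server.crt;
    ssl_certificate_key    /etc/nginx/server.key;

    # Trust anchor plus the intermediates known on the server.
    ssl_client_certificate /etc/nginx/ca-chain.pem;
    ssl_verify_client      on;
    ssl_verify_depth       2;

    # Reject clients not issued by the CA enabled for this domain.
    if ($client_ca_allowed = 0) {
        return 403;
    }
}
```

That way certificate verification proper is still done by nginx against the full set of known CAs, and the per-domain policy is just a check on the resulting issuer DN.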
-- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Wed Jan 12 11:14:17 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 12 Jan 2022 14:14:17 +0300 Subject: nginx-quic, php able to access last set cookie only In-Reply-To: References: Message-ID: <74F985D7-F09E-4FD4-828C-3652BD264D82@nginx.com> > On 19 Dec 2021, at 21:24, Kareti Ramakrishna MBA wrote: > > I implemented nginx-quic using the steps at https://quic.nginx.org/readme.html > The page is validating http3 quic at https://http3check.net and https://gf.dev/http3-test > > The page elements show h3 protocol in developer tools network tab. > > in a test.php page, I have set 3 php cookies like this: > > $q=setcookie('test1', 'content1', time()+315360000, '/', '', true, true > ); > > $q=setcookie('test2', 'content2', time()+315360000, '/', '', true, true > ); > > $q=setcookie('test3', 'content3', time()+315360000, '/', '', true, true > ); > > ?> > In test2.php in the same domain and same directory, I tried to access the cookies : > > > var_dump( > $_COOKIE > ); > > ?> > It is showing only the last set cookie. > > array(1) { ["test3"]=> string(8) "content3" > } > > all the three cookies are showing in developer tools. > > Javascript is able to read all the three cookies : > > > > If I use nginx http2, php is able to access all the three cookies. > > But, If I use nginx http3, php is able to access only the last cookie. Hello. There was a recent fix to proxy cookies in a concatenated list. 
See https://hg.nginx.org/nginx-quic/rev/10522e8dea41 -- Sergey Kandaurov From pluknet at nginx.com Wed Jan 12 11:16:44 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 12 Jan 2022 14:16:44 +0300 Subject: [nginx-quic] fastcgi cookie param is overwritten resulting in getting only last cookie In-Reply-To: References: Message-ID: <1EA45167-CDED-4915-A564-883FE8A45694@nginx.com> > On 23 Dec 2021, at 19:19, Guillaume Bilic wrote: > > Hi all, > > > > Using nginx-quic (1.21.4), cookies are parsed individually by http3 code : > > > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse literal done "number1=this+is+the+first+one" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field lri done static[5] "number1=this+is+the+first+one" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 static[5] lookup "cookie":"" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field representation done > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 header: "cookie: number1=this+is+the+first+one" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field representation > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field lri > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse prefix int 5 > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse prefix int 24 > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse literal huff:1, len:24 > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse literal done "number2=this+is+the+second+one" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field lri done static[5] "number2=this+is+the+second+one" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 static[5] lookup "cookie":"" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field representation done > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 header: "cookie: number2=this+is+the+second+one" > > 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field representation > > 2021/12/23 14:29:37 
[debug] 32322#0: *3576 http3 parse field lri
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse prefix int 5
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse prefix int 23
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse literal huff:1, len:23
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse literal done "number3=this+is+the+third+one"
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field lri done static[5] "number3=this+is+the+third+one"
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 static[5] lookup "cookie":""
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse field representation done
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 parse headers done
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 http3 header: "cookie: number3=this+is+the+third+one"
>
> But then the fastcgi param HTTP_COOKIE is passed for each cookie, resulting in overwriting it and keeping only the last one:
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 fastcgi param: "HTTP_COOKIE: number1=this+is+the+first+one"
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 fastcgi param: "HTTP_COOKIE: number2=this+is+the+second+one"
>
> 2021/12/23 14:29:37 [debug] 32322#0: *3576 fastcgi param: "HTTP_COOKIE: number3=this+is+the+third+one"
>
> The HTTP_COOKIE param should be the whole cookie header.
>
> Http2 code handles the cookie header in a dedicated function "ngx_http_v2_construct_cookie_header" and then processes other headers.
>
> This doesn't seem to be the case for the http3 code, which processes cookies the same way as other headers.

This behaviour was recently applied to the HTTP/3 implementation, see https://hg.nginx.org/nginx-quic/rev/10522e8dea41

Thanks for prodding.

--
Sergey Kandaurov

From xeioex at nginx.com  Wed Jan 12 18:00:07 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Wed, 12 Jan 2022 18:00:07 +0000
Subject: [njs] Removing MSAN unpoison workarounds for clang-9 and below.
Message-ID: details: https://hg.nginx.org/njs/rev/c786ef848004 branches: changeset: 1800:c786ef848004 user: Dmitry Volyntsev date: Wed Jan 12 17:58:18 2022 +0000 description: Removing MSAN unpoison workarounds for clang-9 and below. MSAN unpoison workaround was introduced in 13dbdff9b76f (0.3.9) for a false-positive bug in clang-9 and below. Also, after 80d95b2881f6 (0.4.1) the bug is not triggered anymore. diffstat: src/njs_value.h | 11 ----------- 1 files changed, 0 insertions(+), 11 deletions(-) diffs (42 lines): diff -r abbf77fcd111 -r c786ef848004 src/njs_value.h --- a/src/njs_value.h Tue Jan 11 13:02:33 2022 +0000 +++ b/src/njs_value.h Wed Jan 12 17:58:18 2022 +0000 @@ -1028,8 +1028,6 @@ njs_set_object_value(njs_value_t *value, (pq)->lhq.key.length = 0; \ (pq)->lhq.key.start = NULL; \ (pq)->lhq.value = NULL; \ - /* FIXME: False-positive in MSAN?. */ \ - njs_msan_unpoison(&(pq)->key, sizeof(njs_value_t)); \ (pq)->own_whiteout = NULL; \ (pq)->query = _query; \ (pq)->shared = 0; \ @@ -1085,9 +1083,6 @@ njs_value_property_i64(njs_vm_t *vm, njs njs_int_t ret; njs_value_t key; - /* FIXME: False-positive in MSAN?. */ - njs_msan_unpoison(&key, sizeof(njs_value_t)); - ret = njs_int64_to_string(vm, &key, index); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -1104,9 +1099,6 @@ njs_value_property_i64_set(njs_vm_t *vm, njs_int_t ret; njs_value_t key; - /* FIXME: False-positive in MSAN?. */ - njs_msan_unpoison(&key, sizeof(njs_value_t)); - ret = njs_int64_to_string(vm, &key, index); if (njs_slow_path(ret != NJS_OK)) { return ret; @@ -1123,9 +1115,6 @@ njs_value_property_i64_delete(njs_vm_t * njs_int_t ret; njs_value_t key; - /* FIXME: False-positive in MSAN?. 
*/ - njs_msan_unpoison(&key, sizeof(njs_value_t)); - ret = njs_int64_to_string(vm, &key, index); if (njs_slow_path(ret != NJS_OK)) { return ret; From xeioex at nginx.com Wed Jan 12 18:00:09 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 12 Jan 2022 18:00:09 +0000 Subject: [njs] Making njs_value_property_i64() and njs_value_property_i64_set() fast. Message-ID: details: https://hg.nginx.org/njs/rev/2adc0d3fc2bd branches: changeset: 1801:2adc0d3fc2bd user: Dmitry Volyntsev date: Wed Jan 12 17:58:19 2022 +0000 description: Making njs_value_property_i64() and njs_value_property_i64_set() fast. Since f5afb325896f (0.3.9) njs_value_property() and njs_value_property_set() have fast paths when key is a number. Passing key as a number eliminates conversion index to string and back. diffstat: src/njs_value.h | 12 ++---------- 1 files changed, 2 insertions(+), 10 deletions(-) diffs (33 lines): diff -r c786ef848004 -r 2adc0d3fc2bd src/njs_value.h --- a/src/njs_value.h Wed Jan 12 17:58:18 2022 +0000 +++ b/src/njs_value.h Wed Jan 12 17:58:19 2022 +0000 @@ -1080,13 +1080,9 @@ njs_inline njs_int_t njs_value_property_i64(njs_vm_t *vm, njs_value_t *value, int64_t index, njs_value_t *retval) { - njs_int_t ret; njs_value_t key; - ret = njs_int64_to_string(vm, &key, index); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } + njs_set_number(&key, index); return njs_value_property(vm, value, &key, retval); } @@ -1096,13 +1092,9 @@ njs_inline njs_int_t njs_value_property_i64_set(njs_vm_t *vm, njs_value_t *value, int64_t index, njs_value_t *setval) { - njs_int_t ret; njs_value_t key; - ret = njs_int64_to_string(vm, &key, index); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } + njs_set_number(&key, index); return njs_value_property_set(vm, value, &key, setval); } From xeioex at nginx.com Wed Jan 12 18:00:11 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 12 Jan 2022 18:00:11 +0000 Subject: [njs] Fixed Array.prototype.join() when array is changed 
while iterating. Message-ID: details: https://hg.nginx.org/njs/rev/9578cc729205 branches: changeset: 1802:9578cc729205 user: Dmitry Volyntsev date: Wed Jan 12 17:59:42 2022 +0000 description: Fixed Array.prototype.join() when array is changed while iterating. Previously, the function used optimization for ordinary arrays with no gaps (so called fast arrays). For a fast array code took elements directly from internal flat C array. The direct pointer may become invalid as side-effect of custom toString() method for an element. Specifically, the pointer was passed directly to njs_value_to_primitive() which attempts to call toString() followed by valueOf(). When the array size is changed as a side-effect of toString() and not a string value is returned by toString() the pointer becomes invalid and is passed to valueOf() which causes use-after-free. The fix is to eliminate the micro-optimization which uses direct pointers. Found by PolyGlot fuzzing framework. This closes #444 issue on Github. diffstat: src/njs_array.c | 34 +++++++++------------------------- src/test/njs_unit_test.c | 8 ++++++++ 2 files changed, 17 insertions(+), 25 deletions(-) diffs (83 lines): diff -r 2adc0d3fc2bd -r 9578cc729205 src/njs_array.c --- a/src/njs_array.c Wed Jan 12 17:58:19 2022 +0000 +++ b/src/njs_array.c Wed Jan 12 17:59:42 2022 +0000 @@ -1573,7 +1573,6 @@ njs_array_prototype_join(njs_vm_t *vm, n njs_int_t ret; njs_chb_t chain; njs_utf8_t utf8; - njs_array_t *array; njs_value_t *value, *this, entry; njs_string_prop_t separator, string; @@ -1606,18 +1605,11 @@ njs_array_prototype_join(njs_vm_t *vm, n } length = 0; - array = NULL; utf8 = njs_is_byte_string(&separator) ? 
NJS_STRING_BYTE : NJS_STRING_UTF8; - if (njs_is_fast_array(this)) { - array = njs_array(this); - len = array->length; - - } else { - ret = njs_object_length(vm, this, &len); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } + ret = njs_object_length(vm, this, &len); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; } if (njs_slow_path(len == 0)) { @@ -1625,25 +1617,17 @@ njs_array_prototype_join(njs_vm_t *vm, n return NJS_OK; } + value = &entry; + njs_chb_init(&chain, vm->mem_pool); for (i = 0; i < len; i++) { - if (njs_fast_path(array != NULL - && array->object.fast_array - && njs_is_valid(&array->start[i]))) - { - value = &array->start[i]; - - } else { - ret = njs_value_property_i64(vm, this, i, &entry); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } - - value = &entry; + ret = njs_value_property_i64(vm, this, i, value); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; } - if (njs_is_valid(value) && !njs_is_null_or_undefined(value)) { + if (!njs_is_null_or_undefined(value)) { if (!njs_is_string(value)) { last = njs_chb_current(&chain); diff -r 2adc0d3fc2bd -r 9578cc729205 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Jan 12 17:58:19 2022 +0000 +++ b/src/test/njs_unit_test.c Wed Jan 12 17:59:42 2022 +0000 @@ -4801,6 +4801,14 @@ static njs_unit_test_t njs_test[] = ".map(v=>v.join(''))"), njs_str(",1345,,1,13,13,13") }, + { njs_str("var o = { toString: () => {" + " for (var i = 0; i < 0x10; i++) {a.push(1)};" + " return {};" + "}};" + "var a = [o];" + "a.join()"), + njs_str("TypeError: Cannot convert object to primitive value") }, + { njs_str("Array.prototype.splice.call({0:0,1:1,2:2,3:3,length:4},0,3,4,5)"), njs_str("0,1,2") }, From richagaur586 at gmail.com Wed Jan 12 18:19:58 2022 From: richagaur586 at gmail.com (Richa Gaur) Date: Wed, 12 Jan 2022 23:49:58 +0530 Subject: Stream metrics Message-ID: Hello, I am using nginx as an L4 load balancer. I need to expose some stream level metrics like throughput etc. 
I am using nginx-module-stream-sts for calculating metrics. However, this module operates at nginx-stream-log-phase, which is called just before closing the connection. If the underlying connection is persistent and several requests are made on the same connection, the metrics are calculated only at the end, which does not depict the true picture.

I tried looking at the stream-module code and based on preliminary observations it looks like the stream phases are called for each stream session and not for each tcp request (analogous to http request). So, even if I register the handler at an earlier phase, let's say nginx-stream-content-phase, it would be called only once in the connection lifecycle instead of being called for every request.

Please let me know if my understanding is correct and if there is any mechanism by which I can calculate metrics for each TCP request and show real time data. Basically, I want to collect a metric for each read/write event from the client.

Any help would be appreciated.

Thanks and Regards,
Richa Gaur
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From devashi.tandon at appsentinels.ai  Thu Jan 13 07:26:16 2022
From: devashi.tandon at appsentinels.ai (Devashi Tandon)
Date: Thu, 13 Jan 2022 07:26:16 +0000
Subject: Using single persistent socket to send subrequests
In-Reply-To: 
References: 
Message-ID: 

Hi,

I am trying to implement HTTP pipelining through our module. But I am unable to figure out where the source port is allocated. The only function I saw that allocates the port is: ngx_http_upstream_create_round_robin_peer. However, it doesn't get called in the path of ngx_http_run_posted_requests. Further, ngx_http_upstream_get_round_robin_peer allocates the peer, but every time in my case the peer pointer is pointing to the same address. However, in tcpdump I always see a different source port from which the packet is sent out, indicating that a new socket connection is created.
Can anyone help me understand where we get the source port from the OS and create/update the peer before sending out the packets from the socket connection? My purpose is to reuse the same socket connection without changing the source port and use a persistent connection. Any help is greatly appreciated. Thanks, Devashi ________________________________ From: nginx-devel on behalf of Maxim Dounin Sent: Thursday, December 30, 2021 1:47 PM To: nginx-devel at nginx.org Subject: Re: Using single persistent socket to send subrequests Hello! On Thu, Dec 30, 2021 at 07:58:33AM +0000, Devashi Tandon wrote: > upstream ext-authz-upstream-server { > server 172.20.10.6:9006; > keepalive 4; > } [...] > However, when I create 100 simultaneous connections, they are > all sent via a different source port which means that a new > socket connection is created everytime. That's expected behaviour: the keepalive directive specifies the number of connections to cache, not the limit on the number of connections to the upstream server. With many simultaneous requests nginx will open additional connections as needed. > How can I pipeline requests over 4 connections with keepalive > configuration set to 4? You cannot, pipelining is not supported by the proxy module. If the goal is not pipelining but to limit the number of connections to upstream servers, the "server ... max_conns=..." and the "queue" directive as available in nginx-plus might be what you want, see here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_conns http://nginx.org/en/docs/http/ngx_http_upstream_module.html#queue Note well that such questions do not look like something related to nginx development. A better mailing list for user-level question would be nginx at nginx.org, see here: http://nginx.org/en/support.html Hope this helps. 
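Just in case, connection caching (as opposed to pipelining) with the upstream from your configuration can be sketched as follows; note that keepalive connections to the upstream require HTTP/1.1 and a cleared Connection header on the proxied requests:

```nginx
upstream ext-authz-upstream-server {
    server 172.20.10.6:9006;

    # Cache up to 4 idle connections per worker process; this is
    # not a cap: extra connections are still opened under load.
    keepalive 4;
}

server {
    listen 80;

    location / {
        proxy_pass http://ext-authz-upstream-server;

        # Both settings are required for upstream keepalive:
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```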
--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sb at nginx.com  Thu Jan 13 11:52:35 2022
From: sb at nginx.com (Sergey Budnevitch)
Date: Thu, 13 Jan 2022 14:52:35 +0300
Subject: Mailing list migration to mailman3
Message-ID: 

Hello,

As you may have noticed already, the mailing list was migrated to mailman3. It differs significantly from mailman2 we used previously. Please pay attention to a few noticeable changes:

* Mailman3 does not add the X-BeenThere header to outbound emails anymore. If you used this header for your filters, you should switch to the List-Id header (or the List-Post header).

* Old archives are available on https://mailman.nginx.org/pipermail/. New archives started on Jan 1 2020 and can be found on https://mailman.nginx.org/mailman3/lists/

* The mail interface for subscribing/unsubscribing works as before, but the web interface and authorisation have changed. To get access to the web interface you need to "sign up" with your email address; "reset password" will not work as technically there is no web user yet.

From xeioex at nginx.com  Fri Jan 14 14:57:21 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Fri, 14 Jan 2022 14:57:21 +0000
Subject: [njs] Fixed Array.prototype.slice() when array is changed while iterating.
Message-ID: 

details:   https://hg.nginx.org/njs/rev/d940c6aaec5d
branches:
changeset: 1803:d940c6aaec5d
user:      Dmitry Volyntsev
date:      Thu Jan 13 15:59:08 2022 +0000
description:
Fixed Array.prototype.slice() when array is changed while iterating.

Previously, the flat array may be converted to a slow one as a side-effect of a custom getter invocation for a proto array object. The function erroneously assumed that the this array remains flat while iterating.

The fix is to eliminate the micro-optimization which uses direct pointers.
The problem is similar to the previous (9578cc729205) commit. This closes #445 issue on Github. diffstat: src/njs_array.c | 49 +++++++++++------------------------------------ src/test/njs_unit_test.c | 10 +++++++++ 2 files changed, 22 insertions(+), 37 deletions(-) diffs (95 lines): diff -r 9578cc729205 -r d940c6aaec5d src/njs_array.c --- a/src/njs_array.c Wed Jan 12 17:59:42 2022 +0000 +++ b/src/njs_array.c Thu Jan 13 15:59:08 2022 +0000 @@ -729,7 +729,7 @@ njs_array_prototype_slice_copy(njs_vm_t uint32_t n; njs_int_t ret; njs_array_t *array, *keys; - njs_value_t *value, retval, self; + njs_value_t *value, *last, retval, self; const u_char *src, *end; njs_slice_prop_t string_slice; njs_string_prop_t string; @@ -748,34 +748,7 @@ njs_array_prototype_slice_copy(njs_vm_t n = 0; if (njs_fast_path(array->object.fast_array)) { - if (njs_fast_path(njs_is_fast_array(this))) { - value = njs_array_start(this); - - do { - if (njs_fast_path(njs_is_valid(&value[start]))) { - array->start[n++] = value[start++]; - - } else { - - /* src value may be in Array.prototype object. 
*/ - - ret = njs_value_property_i64(vm, this, start++, - &array->start[n]); - if (njs_slow_path(ret == NJS_ERROR)) { - return NJS_ERROR; - } - - if (ret != NJS_OK) { - njs_set_invalid(&array->start[n]); - } - - n++; - } - - length--; - } while (length != 0); - - } else if (njs_is_string(this) || njs_is_object_string(this)) { + if (njs_is_string(this) || njs_is_object_string(this)) { if (njs_is_object_string(this)) { this = njs_object_value(this); @@ -816,16 +789,18 @@ njs_array_prototype_slice_copy(njs_vm_t } else if (njs_is_object(this)) { - do { - value = &array->start[n++]; - ret = njs_value_property_i64(vm, this, start++, value); - - if (ret != NJS_OK) { + last = &array->start[length]; + + for (value = array->start; value < last; value++, start++) { + ret = njs_value_property_i64(vm, this, start, value); + if (njs_slow_path(ret != NJS_OK)) { + if (ret == NJS_ERROR) { + return NJS_ERROR; + } + njs_set_invalid(value); } - - length--; - } while (length != 0); + } } else { diff -r 9578cc729205 -r d940c6aaec5d src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Jan 12 17:59:42 2022 +0000 +++ b/src/test/njs_unit_test.c Thu Jan 13 15:59:08 2022 +0000 @@ -4525,6 +4525,16 @@ static njs_unit_test_t njs_test[] = "Array.prototype.slice.call(1, 0, 2)"), njs_str(",") }, + { njs_str("var a = [1, /**/, 3, 4];" + "Object.defineProperty(a.__proto__, 1, {" + " get: () => {" + " a.length = 10**6;" + " return 2;" + " }" + "});" + "a.slice(1)"), + njs_str("2,3,4") }, + { njs_str("Array.prototype.pop()"), njs_str("undefined") }, From xeioex at nginx.com Fri Jan 14 14:57:23 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 14 Jan 2022 14:57:23 +0000 Subject: [njs] Fixed Array.prototype.concat() when array is changed while iterating. 
Message-ID: details: https://hg.nginx.org/njs/rev/2c1382bab643 branches: changeset: 1804:2c1382bab643 user: Dmitry Volyntsev date: Thu Jan 13 16:20:58 2022 +0000 description: Fixed Array.prototype.concat() when array is changed while iterating. Previously, the flat array may be converted to a slow one as a side-effect of a custom getter invocation for a proto array object. The function erroneously assumed that the this array remains flat while iterating. The fix is to eliminate the micro-optimization which uses direct pointers. The problem is similar to the previous commits. diffstat: src/njs_array.c | 63 +++++++++-------------------------------------- src/test/njs_unit_test.c | 11 ++++++++ 2 files changed, 24 insertions(+), 50 deletions(-) diffs (128 lines): diff -r d940c6aaec5d -r 2c1382bab643 src/njs_array.c --- a/src/njs_array.c Thu Jan 13 15:59:08 2022 +0000 +++ b/src/njs_array.c Thu Jan 13 16:20:58 2022 +0000 @@ -1780,7 +1780,7 @@ njs_array_prototype_concat(njs_vm_t *vm, int64_t k, len, length; njs_int_t ret; njs_uint_t i; - njs_value_t this, retval, *value, *e; + njs_value_t this, retval, *e; njs_array_t *array, *keys; ret = njs_value_to_object(vm, &args[0]); @@ -1819,26 +1819,18 @@ njs_array_prototype_concat(njs_vm_t *vm, return NJS_ERROR; } - if (njs_is_fast_array(&this) && njs_is_fast_array(e) - && (length + len) <= NJS_ARRAY_LARGE_OBJECT_LENGTH) - { + if (njs_is_fast_array(e)) { for (k = 0; k < len; k++, length++) { - value = &njs_array_start(e)[k]; - - if (njs_slow_path(!njs_is_valid(value))) { - ret = njs_value_property_i64(vm, e, k, &retval); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; + ret = njs_value_property_i64(vm, e, k, &retval); + if (njs_slow_path(ret != NJS_OK)) { + if (ret == NJS_ERROR) { + return NJS_ERROR; } - if (ret == NJS_DECLINED) { - njs_set_invalid(&retval); - } - - value = &retval; + njs_set_invalid(&retval); } - ret = njs_array_add(vm, array, value); + ret = njs_array_add(vm, array, &retval); if (njs_slow_path(ret != 
NJS_OK)) { return NJS_ERROR; } @@ -1847,27 +1839,6 @@ njs_array_prototype_concat(njs_vm_t *vm, continue; } - if (njs_fast_object(len)) { - for (k = 0; k < len; k++, length++) { - ret = njs_value_property_i64(vm, e, k, &retval); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } - - if (ret != NJS_OK) { - continue; - } - - ret = njs_value_property_i64_set(vm, &this, length, - &retval); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } - } - - continue; - } - keys = njs_array_indices(vm, e); if (njs_slow_path(keys == NULL)) { return NJS_ERROR; @@ -1879,9 +1850,9 @@ njs_array_prototype_concat(njs_vm_t *vm, return ret; } - idx = njs_string_to_index(&keys->start[k]) + length; - if (ret == NJS_OK) { + idx = njs_string_to_index(&keys->start[k]) + length; + ret = njs_value_property_i64_set(vm, &this, idx, &retval); if (njs_slow_path(ret == NJS_ERROR)) { njs_array_destroy(vm, keys); @@ -1902,17 +1873,9 @@ njs_array_prototype_concat(njs_vm_t *vm, return NJS_ERROR; } - if (njs_is_fast_array(&this)) { - ret = njs_array_add(vm, array, e); - if (njs_slow_path(ret != NJS_OK)) { - return ret; - } - - } else { - ret = njs_value_property_i64_set(vm, &this, length, e); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } + ret = njs_value_property_i64_set(vm, &this, length, e); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; } length++; diff -r d940c6aaec5d -r 2c1382bab643 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Thu Jan 13 15:59:08 2022 +0000 +++ b/src/test/njs_unit_test.c Thu Jan 13 16:20:58 2022 +0000 @@ -12834,6 +12834,17 @@ static njs_unit_test_t njs_test[] = "Object.getOwnPropertyDescriptor(o, Symbol.isConcatSpreadable).value"), njs_str("true") }, + { njs_str("var a = [1];" + "var b = [2, /**/, 4, 5];" + "Object.defineProperty(b.__proto__, 1, {" + " get: () => {" + " b.length = 10**6;" + " return 3;" + " }" + "});" + "a.concat(b)"), + njs_str("1,2,3,4,5") }, + { njs_str("var o = {}, n = 5381 /* NJS_DJB_HASH_INIT */;" "while(n--) 
o[Symbol()] = 'test'; o[''];"), njs_str("undefined") }, From xeioex at nginx.com Fri Jan 14 14:57:25 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 14 Jan 2022 14:57:25 +0000 Subject: [njs] Simplified element access in Array.prototype.pop(). Message-ID: details: https://hg.nginx.org/njs/rev/762774041f05 branches: changeset: 1805:762774041f05 user: Dmitry Volyntsev date: Thu Jan 13 18:30:31 2022 +0000 description: Simplified element access in Array.prototype.pop(). The change is similar to the previous commits. diffstat: src/njs_array.c | 60 +++++++++++++++++++------------------------------------- 1 files changed, 21 insertions(+), 39 deletions(-) diffs (91 lines): diff -r 2c1382bab643 -r 762774041f05 src/njs_array.c --- a/src/njs_array.c Thu Jan 13 16:20:58 2022 +0000 +++ b/src/njs_array.c Thu Jan 13 18:30:31 2022 +0000 @@ -944,8 +944,7 @@ njs_array_prototype_pop(njs_vm_t *vm, nj { int64_t length; njs_int_t ret; - njs_array_t *array; - njs_value_t *this, *entry; + njs_value_t *this; this = njs_argument(args, 0); @@ -954,40 +953,20 @@ njs_array_prototype_pop(njs_vm_t *vm, nj return ret; } - njs_set_undefined(&vm->retval); - - if (njs_is_fast_array(this)) { - array = njs_array(this); - - if (array->length != 0) { - array->length--; - entry = &array->start[array->length]; - - if (njs_is_valid(entry)) { - vm->retval = *entry; - - } else { - /* src value may be in Array.prototype object. 
*/ - - ret = njs_value_property_i64(vm, this, array->length, - &vm->retval); - if (njs_slow_path(ret == NJS_ERROR)) { - return NJS_ERROR; - } - } - } - - return NJS_OK; - } - ret = njs_object_length(vm, this, &length); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } if (length == 0) { + ret = njs_object_length_set(vm, this, length); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; + } + njs_set_undefined(&vm->retval); - goto done; + + return NJS_OK; } ret = njs_value_property_i64(vm, this, --length, &vm->retval); @@ -995,16 +974,19 @@ njs_array_prototype_pop(njs_vm_t *vm, nj return ret; } - ret = njs_value_property_i64_delete(vm, this, length, NULL); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } - -done: - - ret = njs_object_length_set(vm, this, length); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; + if (njs_is_fast_array(this)) { + njs_array(this)->length--; + + } else { + ret = njs_value_property_i64_delete(vm, this, length, NULL); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; + } + + ret = njs_object_length_set(vm, this, length); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; + } } return NJS_OK; From xeioex at nginx.com Fri Jan 14 14:57:27 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 14 Jan 2022 14:57:27 +0000 Subject: [njs] Simplified element access in Array.prototype.shift(). Message-ID: details: https://hg.nginx.org/njs/rev/dfbde7620ced branches: changeset: 1806:dfbde7620ced user: Dmitry Volyntsev date: Thu Jan 13 18:30:31 2022 +0000 description: Simplified element access in Array.prototype.shift(). Previously, array structure may be left in inconsistent state when a custom getter in a proto array changes array size. The change is similar to the previous commits. 
diffstat: src/njs_array.c | 89 ++++++++++++++++++++++++-------------------------------- 1 files changed, 39 insertions(+), 50 deletions(-) diffs (121 lines): diff -r 762774041f05 -r dfbde7620ced src/njs_array.c --- a/src/njs_array.c Thu Jan 13 18:30:31 2022 +0000 +++ b/src/njs_array.c Thu Jan 13 18:30:31 2022 +0000 @@ -1135,78 +1135,67 @@ njs_array_prototype_shift(njs_vm_t *vm, int64_t i, length; njs_int_t ret; njs_array_t *array; - njs_value_t *this, *item, entry; + njs_value_t *this, entry; this = njs_argument(args, 0); - length = 0; ret = njs_value_to_object(vm, this); if (njs_slow_path(ret != NJS_OK)) { return ret; } - njs_set_undefined(&vm->retval); - - if (njs_is_fast_array(this)) { - array = njs_array(this); - - if (array->length != 0) { - array->length--; - item = &array->start[0]; - - if (njs_is_valid(item)) { - vm->retval = *item; - - } else { - /* src value may be in Array.prototype object. */ - - ret = njs_value_property_i64(vm, this, 0, &vm->retval); - if (njs_slow_path(ret == NJS_ERROR)) { - return NJS_ERROR; - } - } - - array->start++; - } - - return NJS_OK; - } - ret = njs_object_length(vm, this, &length); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } if (length == 0) { - goto done; - } - - ret = njs_value_property_i64_delete(vm, this, 0, &vm->retval); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; - } - - for (i = 1; i < length; i++) { - ret = njs_value_property_i64_delete(vm, this, i, &entry); + ret = njs_object_length_set(vm, this, length); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } - if (ret == NJS_OK) { - ret = njs_value_property_i64_set(vm, this, i - 1, &entry); + njs_set_undefined(&vm->retval); + + return NJS_OK; + } + + ret = njs_value_property_i64(vm, this, 0, &vm->retval); + if (njs_slow_path(ret == NJS_ERROR)) { + return NJS_ERROR; + } + + if (njs_is_fast_array(this)) { + array = njs_array(this); + + array->start++; + array->length--; + + } else { + + ret = njs_value_property_i64_delete(vm, this, 0, &vm->retval); 
+ if (njs_slow_path(ret == NJS_ERROR)) { + return ret; + } + + for (i = 1; i < length; i++) { + ret = njs_value_property_i64_delete(vm, this, i, &entry); if (njs_slow_path(ret == NJS_ERROR)) { return ret; } + + if (ret == NJS_OK) { + ret = njs_value_property_i64_set(vm, this, i - 1, &entry); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; + } + } } - } - - length--; - -done: - - ret = njs_object_length_set(vm, this, length); - if (njs_slow_path(ret == NJS_ERROR)) { - return ret; + + ret = njs_object_length_set(vm, this, length - 1); + if (njs_slow_path(ret == NJS_ERROR)) { + return ret; + } } return NJS_OK; From xeioex at nginx.com Fri Jan 14 14:57:29 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 14 Jan 2022 14:57:29 +0000 Subject: [njs] Fixed Array.prototype.reverse() when array is changed while iterating. Message-ID: details: https://hg.nginx.org/njs/rev/7467158d9f37 branches: changeset: 1807:7467158d9f37 user: Dmitry Volyntsev date: Fri Jan 14 14:40:27 2022 +0000 description: Fixed Array.prototype.reverse() when array is changed while iterating. Previously, a flat array might be converted to a slow one as a side effect of invoking a custom getter defined on its prototype object. The function erroneously assumed that the `this` array remains flat while iterating. The fix is to eliminate the micro-optimization which uses direct pointers. The problem is similar to the previous commits.
diffstat: src/njs_array.c | 47 ----------------------------------------------- src/test/njs_unit_test.c | 12 ++++++++++++ 2 files changed, 12 insertions(+), 47 deletions(-) diffs (86 lines): diff -r dfbde7620ced -r 7467158d9f37 src/njs_array.c --- a/src/njs_array.c Thu Jan 13 18:30:31 2022 +0000 +++ b/src/njs_array.c Fri Jan 14 14:40:27 2022 +0000 @@ -1367,7 +1367,6 @@ njs_array_prototype_reverse(njs_vm_t *vm int64_t length, l, h; njs_int_t ret, lret, hret; njs_value_t value, lvalue, hvalue, *this; - njs_array_t *array; this = njs_argument(args, 0); @@ -1386,52 +1385,6 @@ njs_array_prototype_reverse(njs_vm_t *vm return NJS_OK; } - if (njs_is_fast_array(this)) { - array = njs_array(this); - - for (l = 0, h = length - 1; l < h; l++, h--) { - if (njs_fast_path(njs_is_valid(&array->start[l]))) { - lvalue = array->start[l]; - lret = NJS_OK; - - } else { - lret = njs_value_property_i64(vm, this, l, &lvalue); - if (njs_slow_path(lret == NJS_ERROR)) { - return NJS_ERROR; - } - } - - if (njs_fast_path(njs_is_valid(&array->start[h]))) { - hvalue = array->start[h]; - hret = NJS_OK; - - } else { - hret = njs_value_property_i64(vm, this, h, &hvalue); - if (njs_slow_path(hret == NJS_ERROR)) { - return NJS_ERROR; - } - } - - if (lret == NJS_OK) { - array->start[h] = lvalue; - - if (hret == NJS_OK) { - array->start[l] = hvalue; - - } else { - array->start[l] = njs_value_invalid; - } - - } else if (hret == NJS_OK) { - array->start[l] = hvalue; - array->start[h] = njs_value_invalid; - } - } - - njs_set_array(&vm->retval, array); - return NJS_OK; - } - for (l = 0, h = length - 1; l < h; l++, h--) { lret = njs_value_property_i64(vm, this, l, &lvalue); if (njs_slow_path(lret == NJS_ERROR)) { diff -r dfbde7620ced -r 7467158d9f37 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Thu Jan 13 18:30:31 2022 +0000 +++ b/src/test/njs_unit_test.c Fri Jan 14 14:40:27 2022 +0000 @@ -4883,6 +4883,18 @@ static njs_unit_test_t njs_test[] = { njs_str("[,,,3,2,1].reverse()"), njs_str("1,2,3,,,") 
}, + { njs_str("var a = [,,2,1];" + "Object.defineProperty(a.__proto__, 0, {" + " get: () => {" + " a.length = 10**6;" + " return 4;" + " }," + " set: (setval) => { Object.defineProperty(a, 0, { value: setval }); }," + "});" + "a.reverse();" + "a.slice(0, 4)"), + njs_str("1,2,,4") }, + { njs_str("var o = {1:true, 2:'', length:-2}; Array.prototype.reverse.call(o) === o"), njs_str("true") }, From hle at owl.eu.com Fri Jan 14 15:28:20 2022 From: hle at owl.eu.com (Hugo Lefeuvre) Date: Fri, 14 Jan 2022 15:28:20 +0000 Subject: test suite failure with 1.20.1 In-Reply-To: References: <20210703081853.xhj6huxl2tytvsih@behemoth.owl.eu.com.local> <20220110132852.r6d5xzaib4ztk5m5@behemoth.owl.eu.com.local> Message-ID: <20220114152820.56ax73p2askv56kd@behemoth.owl.eu.com.local> Hello! On Tue, Jan 11, 2022 at 02:05:31AM +0300, Maxim Dounin wrote: > The test suite can be run as root, but, given that nginx switches > to a non-privileged user by default (https://nginx.org/r/user), > and temporary directory is only readable by the owner, running > test suite as root requires some additional tuning for most of the > tests to work, e.g.: > > # TEST_NGINX_GLOBALS="user root wheel;" prove ssi_waited.t > > The fact that the test suite by default is expected to be run > under a normal user is already in the README, though may be in > somewhat obscure form: note the "$ " prompt in the usage example. That makes sense, thanks for the explanation! I wish this had been spelled out in the README somewhere; it would have spared me some time though :-) Best, Hugo -- Hugo Lefeuvre (hle) | www.owl.eu.com RSA4096_ 360B 03B3 BF27 4F4D 7A3F D5E8 14AA 1EB8 A247 3DFD ed25519_ 37B2 6D38 0B25 B8A2 6B9F 3A65 A36F 5357 5F2D DC4C -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: not available URL: From hle at owl.eu.com Fri Jan 14 15:45:31 2022 From: hle at owl.eu.com (Hugo Lefeuvre) Date: Fri, 14 Jan 2022 15:45:31 +0000 Subject: On test-suite coverage Message-ID: <20220114154531.cq2be6hvnpt6ytnx@behemoth.owl.eu.com.local> Hello! As part of a research project, I measured the coverage achieved by Nginx's test-suite using gcov. Taking a look at the results, my colleagues and I were somewhat surprised to realize that the coverage capped at about 70% line coverage and 81% function coverage; we expected something closer to 90% line coverage. core/nginx.c for example only gets 268 lines / 584 covered. Similarly, more than half of core/ngx_resolver.c (1042 lines / 2138) is not covered. I was wondering if I did something wrong in my measurements, if this is a known weakness of the test-suite, and in the latter case, if this is something that the Nginx project is open to receiving contributions on. For the record, I built the Nginx server (version 1.20.1) with the following configure: ./configure --sbin-path=$(pwd)/nginx --conf-path=$(pwd)/conf/nginx.conf --pid-path=$(pwd)/nginx.pid --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-cc-opt="-fprofile-arcs -ftest-coverage" --with-ld-opt="-lgcov" --with-http_auth_request_module --with-http_v2_module Thanks for your work! Cheers, Hugo -- Hugo Lefeuvre (hle) | www.owl.eu.com RSA4096_ 360B 03B3 BF27 4F4D 7A3F D5E8 14AA 1EB8 A247 3DFD ed25519_ 37B2 6D38 0B25 B8A2 6B9F 3A65 A36F 5357 5F2D DC4C -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: not available URL: From maxim at nginx.com Fri Jan 14 16:33:20 2022 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 14 Jan 2022 19:33:20 +0300 Subject: On test-suite coverage In-Reply-To: <20220114154531.cq2be6hvnpt6ytnx@behemoth.owl.eu.com.local> References: <20220114154531.cq2be6hvnpt6ytnx@behemoth.owl.eu.com.local> Message-ID: <3bb00e67-4ce8-7750-3e26-d0817d0c47a8@nginx.com> Hi Hugo, It is great to see your interest in nginx tests. On 14.01.2022 18:45, Hugo Lefeuvre wrote: > Hello! > > As part of a research project, I measured the coverage achieved by Nginx's > test-suite using gcov. Taking a look at the results, my colleagues and > I were somewhat surprised to realize that the coverage capped at about > 70% line coverage and 81% function coverage; we expected something closer > to 90% line coverage. core/nginx.c for example only gets 268 lines / 584 This is more or less in line with our numbers. Just from today's report for mainline: 77.8% line coverage and 93.1% function coverage. These figures include njs code though. > covered. Similarly, more than half of core/ngx_resolver.c (1042 lines / > 2138) is not covered. > > I was wondering if I did something wrong in my measurements, if this is a > known weakness of the test-suite, and in the latter case, if this is > something that the Nginx project is open to receiving contributions on. > Probably not a weakness but unjustified expectations? :-) I think the biggest non-covered part comes from various error paths. Some of them could be hard to trigger. For example, for memory allocation failures we use separate nodes with a modified ngx_palloc.c which enables random memory allocation errors. This is not part of the standard test suite, though. Anyway, any meaningful contributions in this area will be highly appreciated. 
http://nginx.org/en/docs/contributing_changes.html There are several things to keep in mind: we want to keep the suite compact as it is now and be able to have it integrated into CI/CD systems easily. Thanks, Maxim -- Maxim Konovalov From hle at owl.eu.com Sat Jan 15 09:50:34 2022 From: hle at owl.eu.com (Hugo Lefeuvre) Date: Sat, 15 Jan 2022 09:50:34 +0000 Subject: On test-suite coverage In-Reply-To: <3bb00e67-4ce8-7750-3e26-d0817d0c47a8@nginx.com> References: <20220114154531.cq2be6hvnpt6ytnx@behemoth.owl.eu.com.local> <3bb00e67-4ce8-7750-3e26-d0817d0c47a8@nginx.com> Message-ID: <20220115095034.ahhfuxzokwjzeyik@behemoth.owl.eu.com.local> Hi Maxim, On Fri, Jan 14, 2022 at 07:33:20PM +0300, Maxim Konovalov wrote: > This is more or less in line with our numbers. Just from today report for > mainline: 77.8% for lines coverage and 93.1% for functions. These figures > include njs code though. > > > I was wondering if I did something wrong in my measurements, if this is a > > known weakness of the test-suite, and in the latter case, if this is > > something that the Nginx project is open to receiving contributions on. > > Probably not a weakness but unjustified expectations? :-) > > I think the biggest non-covered part comes from various errors paths. > Some of them could be hard to trigger. For example, for memory > allocation failures we use separate nodes with modified ngx_palloc.c > which enables random memory allocation errors. This is not a part of > the standard test suite though. Thanks for confirming the numbers! Makes sense. We made the same observation with error paths, and it's nice to confirm it. I suppose that stimulating some of these error paths would require approaches (syscall interception, some degree of mocking) that might be in contradiction with the requirements you listed below (compactness, CI/CD). > Anyway, any meaningful contributions in this area will be highly > appreciated. 
> > http://nginx.org/en/docs/contributing_changes.html > > There are several things to keep in mind: we want to keep the suite compact > as it is now and be able to have it integrated into CI/CD systems easily. Alright, we will definitely attempt to upstream any test that we might have to write for this project. :-) Best, Hugo -- Hugo Lefeuvre (hle) | www.owl.eu.com RSA4096_ 360B 03B3 BF27 4F4D 7A3F D5E8 14AA 1EB8 A247 3DFD ed25519_ 37B2 6D38 0B25 B8A2 6B9F 3A65 A36F 5357 5F2D DC4C -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: not available URL: From pluknet at nginx.com Mon Jan 17 14:27:03 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 17 Jan 2022 14:27:03 +0000 Subject: [nginx] SSL: free pkey on SSL_CTX_set0_tmp_dh_pkey() failure. Message-ID: details: https://hg.nginx.org/nginx/rev/aeab41dfd260 branches: changeset: 7994:aeab41dfd260 user: Sergey Kandaurov date: Mon Jan 17 17:05:12 2022 +0300 description: SSL: free pkey on SSL_CTX_set0_tmp_dh_pkey() failure. 
The behaviour was changed in OpenSSL 3.0.1: https://git.openssl.org/?p=openssl.git;a=commitdiff;h=bf17b7b diffstat: src/event/ngx_event_openssl.c | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diffs (13 lines): diff -r 96ae8e57b3dd -r aeab41dfd260 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Tue Jan 11 02:23:49 2022 +0300 +++ b/src/event/ngx_event_openssl.c Mon Jan 17 17:05:12 2022 +0300 @@ -1383,6 +1383,9 @@ ngx_ssl_dhparam(ngx_conf_t *cf, ngx_ssl_ if (SSL_CTX_set0_tmp_dh_pkey(ssl->ctx, dh) != 1) { ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0, "SSL_CTX_set0_tmp_dh_pkey(\"%s\") failed", file->data); +#if (OPENSSL_VERSION_NUMBER >= 0x3000001fL) + EVP_PKEY_free(dh); +#endif BIO_free(bio); return NGX_ERROR; } From xeioex at nginx.com Tue Jan 18 15:35:22 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 18 Jan 2022 15:35:22 +0000 Subject: [njs] Fixed Array.prototype.concat() with exotic argument object. Message-ID: details: https://hg.nginx.org/njs/rev/3e86977c253d branches: changeset: 1808:3e86977c253d user: Dmitry Volyntsev date: Tue Jan 18 15:35:00 2022 +0000 description: Fixed Array.prototype.concat() with exotic argument object. The issue was introduced in 2c1382bab643. 
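The behaviour being restored can be checked against any spec-compliant engine; the snippet below mirrors the unit test added by this changeset. Any object whose @@isConcatSpreadable property is truthy must be spread element by element, reading "length" and then each index through the prototype chain:

```javascript
// Make Boolean wrappers pretend to be array-likes: concat() must then
// spread them by reading "length" and each index in turn, walking the
// prototype chain for every lookup.
Boolean.prototype.length = 2;
Boolean.prototype[0] = 'a';
Boolean.prototype[1] = 'b';
Boolean.prototype[Symbol.isConcatSpreadable] = true;

console.log(String([].concat(new Boolean(true))));   // "a,b"
```

The wrapper itself has no own indexed properties; both elements come from Boolean.prototype, which is the exotic-object case the previous fast path skipped.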
diffstat: src/njs_array.c | 2 +- src/test/njs_unit_test.c | 7 +++++++ 2 files changed, 8 insertions(+), 1 deletions(-) diffs (29 lines): diff -r 7467158d9f37 -r 3e86977c253d src/njs_array.c --- a/src/njs_array.c Fri Jan 14 14:40:27 2022 +0000 +++ b/src/njs_array.c Tue Jan 18 15:35:00 2022 +0000 @@ -1743,7 +1743,7 @@ njs_array_prototype_concat(njs_vm_t *vm, return NJS_ERROR; } - if (njs_is_fast_array(e)) { + if (njs_is_fast_array(e) || njs_fast_object(len)) { for (k = 0; k < len; k++, length++) { ret = njs_value_property_i64(vm, e, k, &retval); if (njs_slow_path(ret != NJS_OK)) { diff -r 7467158d9f37 -r 3e86977c253d src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Fri Jan 14 14:40:27 2022 +0000 +++ b/src/test/njs_unit_test.c Tue Jan 18 15:35:00 2022 +0000 @@ -12857,6 +12857,13 @@ static njs_unit_test_t njs_test[] = "a.concat(b)"), njs_str("1,2,3,4,5") }, + { njs_str("Boolean.prototype.length = 2;" + "Boolean.prototype[0] = 'a';" + "Boolean.prototype[1] = 'b';" + "Boolean.prototype[Symbol.isConcatSpreadable] = true;" + "[].concat(new Boolean(true))"), + njs_str("a,b") }, + { njs_str("var o = {}, n = 5381 /* NJS_DJB_HASH_INIT */;" "while(n--) o[Symbol()] = 'test'; o[''];"), njs_str("undefined") }, From xeioex at nginx.com Wed Jan 19 13:12:48 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 19 Jan 2022 13:12:48 +0000 Subject: [njs] 2022 year. Message-ID: details: https://hg.nginx.org/njs/rev/1634e2b7a9e7 branches: changeset: 1809:1634e2b7a9e7 user: Dmitry Volyntsev date: Tue Jan 18 15:36:40 2022 +0000 description: 2022 year. diffstat: LICENSE | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (15 lines): diff -r 3e86977c253d -r 1634e2b7a9e7 LICENSE --- a/LICENSE Tue Jan 18 15:35:00 2022 +0000 +++ b/LICENSE Tue Jan 18 15:36:40 2022 +0000 @@ -1,8 +1,8 @@ /* - * Copyright (C) 2015-2021 NGINX, Inc. + * Copyright (C) 2015-2022 NGINX, Inc. 
* Copyright (C) 2015-2021 Igor Sysoev - * Copyright (C) 2017-2021 Dmitry Volyntsev - * Copyright (C) 2019-2021 Alexander Borisov + * Copyright (C) 2017-2022 Dmitry Volyntsev + * Copyright (C) 2019-2022 Alexander Borisov * All rights reserved. * * Redistribution and use in source and binary forms, with or without From xeioex at nginx.com Wed Jan 19 13:12:50 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 19 Jan 2022 13:12:50 +0000 Subject: [njs] Fixed Buffer.concat() with subarrays. Message-ID: details: https://hg.nginx.org/njs/rev/30865ae17149 branches: changeset: 1810:30865ae17149 user: Sylvain Etienne date: Tue Jan 18 08:37:09 2022 +0100 description: Fixed Buffer.concat() with subarrays. This closes #458 issue on Github. diffstat: src/njs_buffer.c | 12 +++++++----- src/test/njs_unit_test.c | 9 +++++++++ 2 files changed, 16 insertions(+), 5 deletions(-) diffs (55 lines): diff -r 1634e2b7a9e7 -r 30865ae17149 src/njs_buffer.c --- a/src/njs_buffer.c Tue Jan 18 15:36:40 2022 +0000 +++ b/src/njs_buffer.c Tue Jan 18 08:37:09 2022 +0100 @@ -764,7 +764,7 @@ static njs_int_t njs_buffer_concat(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused) { - u_char *p; + u_char *p, *src; size_t n; int64_t i, len, list_len; njs_int_t ret; @@ -866,8 +866,9 @@ njs_buffer_concat(njs_vm_t *vm, njs_valu for (i = 0; len != 0 && i < list_len; i++) { arr = njs_typed_array(&array->start[i]); n = njs_min((size_t) len, arr->byte_length); - - p = njs_cpymem(p, njs_typed_array_buffer(arr)->u.u8, n); + src = &njs_typed_array_buffer(arr)->u.u8[arr->offset]; + + p = njs_cpymem(p, src, n); len -= n; } @@ -881,8 +882,9 @@ njs_buffer_concat(njs_vm_t *vm, njs_valu arr = njs_typed_array(&retval); n = njs_min((size_t) len, arr->byte_length); - - p = njs_cpymem(p, njs_typed_array_buffer(arr)->u.u8, n); + src = &njs_typed_array_buffer(arr)->u.u8[arr->offset]; + + p = njs_cpymem(p, src, n); len -= n; } diff -r 1634e2b7a9e7 -r 30865ae17149 src/test/njs_unit_test.c --- 
a/src/test/njs_unit_test.c Tue Jan 18 15:36:40 2022 +0000 +++ b/src/test/njs_unit_test.c Tue Jan 18 08:37:09 2022 +0100 @@ -20109,6 +20109,15 @@ static njs_unit_test_t njs_buffer_modul { njs_str("Buffer.concat([new Uint8Array(2), new Uint8Array(1)], 6).fill('abc')"), njs_str("abcabc") }, + { njs_str("Buffer.concat([Buffer.from('ABCD').slice(2,4), Buffer.from('ABCD').slice(0,2)])"), + njs_str("CDAB") }, + + { njs_str(njs_declare_sparse_array("list", 2) + "list[0] = Buffer.from('ABCD').slice(2,4);" + "list[1] = Buffer.from('ABCD').slice(0,2);" + "Buffer.concat(list);"), + njs_str("CDAB") }, + { njs_str(njs_declare_sparse_array("list", 2) "list[0] = new Uint8Array(2); list[1] = new Uint8Array(3);" "Buffer.concat(list).fill('ab');"), From xeioex at nginx.com Wed Jan 19 13:12:52 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 19 Jan 2022 13:12:52 +0000 Subject: [njs] Improved access to this argument. Message-ID: details: https://hg.nginx.org/njs/rev/79b109076c13 branches: changeset: 1811:79b109076c13 user: Dmitry Volyntsev date: Tue Jan 18 15:37:11 2022 +0000 description: Improved access to this argument. The 'this' argument is always present, so it may be accessed without checking the number of provided arguments. 
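For reference, a minimal sketch of the difference between the two accessors. The types here are simplified stand-ins, not the real njs definitions: njs_arg() bounds-checks the index against nargs and substitutes undefined for missing optional arguments, while njs_argument() is a plain index that is safe only for arguments guaranteed to be present, such as 'this' at index 0.

```c
#include <assert.h>

/* Simplified stand-ins -- not the real njs definitions. */
typedef struct { int type; } njs_value_t;

static njs_value_t njs_value_undefined = { 0 };

/* Bounds-checked access: a missing optional argument reads as undefined. */
static njs_value_t *
njs_arg(njs_value_t *args, unsigned nargs, unsigned n)
{
    return (n < nargs) ? &args[n] : &njs_value_undefined;
}

/* Direct access: valid only for arguments known to be present,
 * such as "this", which the VM always passes at index 0. */
static njs_value_t *
njs_argument(njs_value_t *args, unsigned n)
{
    return &args[n];
}
```

With nargs == 1 (only 'this' passed), njs_arg(args, 1, 1) yields undefined, while njs_argument(args, 0) still resolves directly; that is why the commit can drop the bounds check for 'this'.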
diffstat: external/njs_query_string_module.c | 4 ++-- src/njs_array_buffer.c | 4 ++-- src/njs_function.c | 2 +- src/njs_object.c | 6 +++--- src/njs_promise.c | 12 ++++++------ src/njs_string.c | 32 ++++++++++++++++---------------- 6 files changed, 30 insertions(+), 30 deletions(-) diffs (288 lines): diff -r 30865ae17149 -r 79b109076c13 external/njs_query_string_module.c --- a/external/njs_query_string_module.c Tue Jan 18 08:37:09 2022 +0100 +++ b/external/njs_query_string_module.c Tue Jan 18 15:37:11 2022 +0000 @@ -391,7 +391,7 @@ njs_query_string_parse(njs_vm_t *vm, njs njs_set_object(&obj, object); - this = njs_arg(args, nargs, 0); + this = njs_argument(args, 0); string = njs_arg(args, nargs, 1); if (njs_slow_path(!njs_is_string(string) @@ -703,7 +703,7 @@ njs_query_string_stringify(njs_vm_t *vm, (void) njs_string_prop(&eq, &val_eq); encode = NULL; - this = njs_arg(args, nargs, 0); + this = njs_argument(args, 0); object = njs_arg(args, nargs, 1); if (njs_slow_path(!njs_is_object(object))) { diff -r 30865ae17149 -r 79b109076c13 src/njs_array_buffer.c --- a/src/njs_array_buffer.c Tue Jan 18 08:37:09 2022 +0100 +++ b/src/njs_array_buffer.c Tue Jan 18 15:37:11 2022 +0000 @@ -195,7 +195,7 @@ njs_array_buffer_prototype_byte_length(n njs_value_t *value; njs_array_buffer_t *array; - value = njs_arg(args, nargs, 0); + value = njs_argument(args, 0); if (!njs_is_array_buffer(value)) { njs_type_error(vm, "Method ArrayBuffer.prototype.byteLength called " @@ -224,7 +224,7 @@ njs_array_buffer_prototype_slice(njs_vm_ njs_value_t *value; njs_array_buffer_t *this, *buffer; - value = njs_arg(args, nargs, 0); + value = njs_argument(args, 0); if (!njs_is_array_buffer(value)) { njs_type_error(vm, "Method ArrayBuffer.prototype.slice called " diff -r 30865ae17149 -r 79b109076c13 src/njs_function.c --- a/src/njs_function.c Tue Jan 18 08:37:09 2022 +0100 +++ b/src/njs_function.c Tue Jan 18 15:37:11 2022 +0000 @@ -1374,7 +1374,7 @@ njs_function_prototype_apply(njs_vm_t *v njs_array_t *arr; 
njs_function_t *func; - if (!njs_is_function(njs_arg(args, nargs, 0))) { + if (!njs_is_function(njs_argument(args, 0))) { njs_type_error(vm, "\"this\" argument is not a function"); return NJS_ERROR; } diff -r 30865ae17149 -r 79b109076c13 src/njs_object.c --- a/src/njs_object.c Tue Jan 18 08:37:09 2022 +0100 +++ b/src/njs_object.c Tue Jan 18 15:37:11 2022 +0000 @@ -2437,7 +2437,7 @@ njs_object_prototype_has_own_property(nj njs_value_t *value, *property; njs_property_query_t pq; - value = njs_arg(args, nargs, 0); + value = njs_argument(args, 0); if (njs_is_null_or_undefined(value)) { njs_type_error(vm, "cannot convert %s argument to object", @@ -2477,7 +2477,7 @@ njs_object_prototype_prop_is_enumerable( njs_object_prop_t *prop; njs_property_query_t pq; - value = njs_arg(args, nargs, 0); + value = njs_argument(args, 0); if (njs_is_null_or_undefined(value)) { njs_type_error(vm, "cannot convert %s argument to object", @@ -2520,7 +2520,7 @@ njs_object_prototype_is_prototype_of(njs njs_object_t *object, *proto; const njs_value_t *retval; - if (njs_slow_path(njs_is_null_or_undefined(njs_arg(args, nargs, 0)))) { + if (njs_slow_path(njs_is_null_or_undefined(njs_argument(args, 0)))) { njs_type_error(vm, "cannot convert undefined to object"); return NJS_ERROR; } diff -r 30865ae17149 -r 79b109076c13 src/njs_promise.c --- a/src/njs_promise.c Tue Jan 18 08:37:09 2022 +0100 +++ b/src/njs_promise.c Tue Jan 18 15:37:11 2022 +0000 @@ -749,7 +749,7 @@ njs_promise_object_resolve(njs_vm_t *vm, { njs_promise_t *promise; - if (njs_slow_path(!njs_is_object(njs_arg(args, nargs, 0)))) { + if (njs_slow_path(!njs_is_object(njs_argument(args, 0)))) { njs_type_error(vm, "this value is not an object"); return NJS_ERROR; } @@ -846,7 +846,7 @@ njs_promise_object_reject(njs_vm_t *vm, njs_value_t value; njs_promise_capability_t *capability; - if (njs_slow_path(!njs_is_object(njs_arg(args, nargs, 0)))) { + if (njs_slow_path(!njs_is_object(njs_argument(args, 0)))) { njs_type_error(vm, "this value is 
not an object"); return NJS_ERROR; } @@ -879,7 +879,7 @@ njs_promise_prototype_then(njs_vm_t *vm, njs_function_t *function; njs_promise_capability_t *capability; - promise = njs_arg(args, nargs, 0); + promise = njs_argument(args, 0); if (njs_slow_path(!njs_is_object(promise))) { goto failed; @@ -1018,7 +1018,7 @@ njs_promise_prototype_catch(njs_vm_t *vm arguments[0] = njs_value_undefined; arguments[1] = *njs_arg(args, nargs, 1); - return njs_promise_invoke_then(vm, njs_arg(args, nargs, 0), arguments, 2); + return njs_promise_invoke_then(vm, njs_argument(args, 0), arguments, 2); } @@ -1031,7 +1031,7 @@ njs_promise_prototype_finally(njs_vm_t * njs_function_t *function; njs_promise_context_t *context; - promise = njs_arg(args, nargs, 0); + promise = njs_argument(args, 0); if (njs_slow_path(!njs_is_object(promise))) { njs_type_error(vm, "required a object"); @@ -1779,7 +1779,7 @@ static njs_int_t njs_promise_species(njs_vm_t *vm, njs_value_t *args, njs_uint_t nargs, njs_index_t unused) { - njs_vm_retval_set(vm, njs_arg(args, nargs, 0)); + njs_vm_retval_set(vm, njs_argument(args, 0)); return NJS_OK; } diff -r 30865ae17149 -r 79b109076c13 src/njs_string.c --- a/src/njs_string.c Tue Jan 18 08:37:09 2022 +0100 +++ b/src/njs_string.c Tue Jan 18 15:37:11 2022 +0000 @@ -986,7 +986,7 @@ njs_string_prototype_from_utf8(njs_vm_t njs_slice_prop_t slice; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1029,7 +1029,7 @@ njs_string_prototype_to_utf8(njs_vm_t *v njs_slice_prop_t slice; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1063,7 +1063,7 @@ njs_string_prototype_from_bytes(njs_vm_t njs_slice_prop_t slice; njs_string_prop_t string; - ret = 
njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1129,7 +1129,7 @@ njs_string_prototype_to_bytes(njs_vm_t * njs_string_prop_t string; njs_unicode_decode_t ctx; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1192,7 +1192,7 @@ njs_string_prototype_slice(njs_vm_t *vm, njs_slice_prop_t slice; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1216,7 +1216,7 @@ njs_string_prototype_substring(njs_vm_t njs_slice_prop_t slice; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1294,7 +1294,7 @@ njs_string_prototype_substr(njs_vm_t *vm njs_slice_prop_t slice; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1372,7 +1372,7 @@ njs_string_prototype_char_at(njs_vm_t *v njs_slice_prop_t slice; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -1566,7 +1566,7 @@ njs_string_prototype_char_code_at(njs_vm njs_string_prop_t string; njs_unicode_decode_t ctx; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2332,7 +2332,7 @@ njs_string_prototype_includes(njs_vm_t * const njs_value_t 
*retval; njs_string_prop_t string, search; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2425,7 +2425,7 @@ njs_string_prototype_starts_or_ends_with retval = &njs_value_true; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2635,7 +2635,7 @@ njs_string_prototype_to_lower_case(njs_v const u_char *s, *end; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2707,7 +2707,7 @@ njs_string_prototype_to_upper_case(njs_v const u_char *s, *end; njs_string_prop_t string; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -2973,7 +2973,7 @@ njs_string_prototype_pad(njs_vm_t *vm, n static const njs_value_t string_space = njs_string(" "); - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -3093,7 +3093,7 @@ njs_string_prototype_search(njs_vm_t *vm njs_string_prop_t string; njs_regexp_pattern_t *pattern; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } @@ -3171,7 +3171,7 @@ njs_string_prototype_match(njs_vm_t *vm, njs_value_t arguments[2]; njs_regexp_pattern_t *pattern; - ret = njs_string_object_validate(vm, njs_arg(args, nargs, 0)); + ret = njs_string_object_validate(vm, njs_argument(args, 0)); if (njs_slow_path(ret != NJS_OK)) { return ret; } From xeioex at nginx.com Wed Jan 19 
13:12:54 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 19 Jan 2022 13:12:54 +0000 Subject: [njs] Fixed type confusion bug while resolving promises. Message-ID: details: https://hg.nginx.org/njs/rev/c419a4e34998 branches: changeset: 1812:c419a4e34998 user: Dmitry Volyntsev date: Wed Jan 19 13:12:09 2022 +0000 description: Fixed type confusion bug while resolving promises. Previously, the internal function njs_promise_perform_then(), which implements PerformPromiseThen(), expected its first argument to always be a promise instance. This assertion could be violated because the functions corresponding to Promise.prototype.then() and Promise.resolve() incorrectly verified their arguments. Specifically, the functions recognized their first argument as a promise if it was an object which either was a Promise or had a Promise object in its prototype chain. The latter condition is not correct because internal slots are not inherited according to the spec. This closes #447 issue on Github. diffstat: src/njs_promise.c | 33 ++++++------------- src/njs_vmcode.c | 2 +- test/js/promise_prototype_reject_type_confusion.t.js | 11 ++++++ test/js/promise_prototype_then_type_confusion.t.js | 11 ++++++ 4 files changed, 34 insertions(+), 23 deletions(-) diffs (109 lines): diff -r 79b109076c13 -r c419a4e34998 src/njs_promise.c --- a/src/njs_promise.c Tue Jan 18 15:37:11 2022 +0000 +++ b/src/njs_promise.c Wed Jan 19 13:12:09 2022 +0000 @@ -771,25 +771,19 @@ njs_promise_resolve(njs_vm_t *vm, njs_va { njs_int_t ret; njs_value_t value; - njs_object_t *object; njs_promise_capability_t *capability; static const njs_value_t string_constructor = njs_string("constructor"); - if (njs_is_object(x)) { - object = njs_object_proto_lookup(njs_object(x), NJS_PROMISE, - njs_object_t); + if (njs_is_promise(x)) { + ret = njs_value_property(vm, x, njs_value_arg(&string_constructor), + &value); + if (njs_slow_path(ret == NJS_ERROR)) { + return NULL; + } - if (object != NULL) { - ret = njs_value_property(vm, 
x, njs_value_arg(&string_constructor), - &value); - if (njs_slow_path(ret == NJS_ERROR)) { - return NULL; - } - - if (njs_values_same(&value, constructor)) { - return njs_promise(x); - } + if (njs_values_same(&value, constructor)) { + return njs_promise(x); } } @@ -875,19 +869,12 @@ njs_promise_prototype_then(njs_vm_t *vm, { njs_int_t ret; njs_value_t *promise, *fulfilled, *rejected, constructor; - njs_object_t *object; njs_function_t *function; njs_promise_capability_t *capability; promise = njs_argument(args, 0); - if (njs_slow_path(!njs_is_object(promise))) { - goto failed; - } - - object = njs_object_proto_lookup(njs_object(promise), NJS_PROMISE, - njs_object_t); - if (njs_slow_path(object == NULL)) { + if (njs_slow_path(!njs_is_promise(promise))) { goto failed; } @@ -933,6 +920,8 @@ njs_promise_perform_then(njs_vm_t *vm, n njs_promise_data_t *data; njs_promise_reaction_t *fulfilled_reaction, *rejected_reaction; + njs_assert(njs_is_promise(value)); + if (!njs_is_function(fulfilled)) { fulfilled = njs_value_arg(&njs_value_undefined); } diff -r 79b109076c13 -r c419a4e34998 src/njs_vmcode.c --- a/src/njs_vmcode.c Tue Jan 18 15:37:11 2022 +0000 +++ b/src/njs_vmcode.c Wed Jan 19 13:12:09 2022 +0000 @@ -1895,7 +1895,7 @@ njs_vmcode_await(njs_vm_t *vm, njs_vmcod rejected->args_count = 1; rejected->u.native = njs_await_rejected; - njs_set_object(&val, &promise->object); + njs_set_promise(&val, promise); njs_set_function(&on_fulfilled, fulfilled); njs_set_function(&on_rejected, rejected); diff -r 79b109076c13 -r c419a4e34998 test/js/promise_prototype_reject_type_confusion.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/promise_prototype_reject_type_confusion.t.js Wed Jan 19 13:12:09 2022 +0000 @@ -0,0 +1,11 @@ +/*--- +includes: [] +flags: [async] +---*/ + +Symbol.__proto__ = new Promise(()=>{}); + +Promise.reject(Symbol) +.then(v => $DONOTEVALUATE()) +.catch(err => assert.sameValue(err.name, 'Symbol')) +.then($DONE, $DONE); diff -r 79b109076c13 -r 
c419a4e34998 test/js/promise_prototype_then_type_confusion.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/promise_prototype_then_type_confusion.t.js Wed Jan 19 13:12:09 2022 +0000 @@ -0,0 +1,11 @@ +/*--- +includes: [] +flags: [async] +---*/ + +Symbol.__proto__ = new Promise(()=>{}); + +Promise.resolve(Symbol) +.then(v => $DONOTEVALUATE()) +.catch(err => assert.sameValue(err.name, 'TypeError')) +.then($DONE, $DONE); From lukas at lihotzki.de Wed Jan 19 15:08:32 2022 From: lukas at lihotzki.de (=?iso-8859-1?q?Lukas_Lihotzki?=) Date: Wed, 19 Jan 2022 16:08:32 +0100 Subject: [PATCH] Prefer address family matching to local address in resolver (ticket #1535) Message-ID: # HG changeset patch # User Lukas Lihotzki # Date 1642576371 -3600 # Wed Jan 19 08:12:51 2022 +0100 # Node ID f922980f06b1162ae933c99c03bef09cfc12582f # Parent aeab41dfd2606dd36cabbf01f1472726e27e8aea Prefer address family matching to local address in resolver (ticket #1535). Without this change, upstream connections fail randomly for dual-stack host names when specifying a proxy_bind address (ticket #1535). This changeset adds two flags for avoiding resolving to either IPv4 or IPv6 addresses. stream and http set these flags based on the proxy_bind address. Avoided addresses are still returned if there are none of the other family, so the same error message as before is produced when connecting is impossible. 
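The failure mode this patch addresses can be illustrated with a minimal, purely hypothetical configuration (names and addresses are examples, not from the patch). With an IPv4 proxy_bind address and a dual-stack upstream name resolved at run time, the pre-patch resolver could hand back an IPv6 address, and the upstream connection would then fail intermittently:

```nginx
# Hypothetical illustration of ticket #1535.
resolver 192.0.2.53;                      # name server returning both A and AAAA records

server {
    listen 8080;

    location / {
        # Outgoing upstream connections are bound to a local IPv4 address.
        # Before this patch, whenever the resolver answered with an IPv6
        # address, the upstream connect() failed due to the address family
        # mismatch; with the patch, matching-family (IPv4) answers are
        # preferred. Note the variable in proxy_pass: without it the name
        # would be resolved once at configuration time, not via "resolver".
        proxy_bind 192.0.2.10;
        proxy_pass http://backend.example.com$request_uri;
    }
}
```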
diff -r aeab41dfd260 -r f922980f06b1 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Mon Jan 17 17:05:12 2022 +0300 +++ b/src/core/ngx_resolver.c Wed Jan 19 08:12:51 2022 +0100 @@ -113,7 +113,7 @@ static void ngx_resolver_free_locked(ngx_resolver_t *r, void *p); static void *ngx_resolver_dup(ngx_resolver_t *r, void *src, size_t size); static ngx_resolver_addr_t *ngx_resolver_export(ngx_resolver_t *r, - ngx_resolver_node_t *rn, ngx_uint_t rotate); + ngx_resolver_node_t *rn, ngx_uint_t rotate, sa_family_t af, ngx_uint_t *na); static void ngx_resolver_report_srv(ngx_resolver_t *r, ngx_resolver_ctx_t *ctx); static u_char *ngx_resolver_log_error(ngx_log_t *log, u_char *buf, size_t len); static void ngx_resolver_resolve_srv_names(ngx_resolver_ctx_t *ctx, @@ -580,6 +580,7 @@ ngx_str_t *name) { uint32_t hash; + sa_family_t sa; ngx_int_t rc; ngx_str_t cname; ngx_uint_t i, naddrs; @@ -634,7 +635,15 @@ addrs = NULL; } else { - addrs = ngx_resolver_export(r, rn, 1); + sa = AF_UNSPEC; +#if (NGX_HAVE_INET6) + if (ctx->avoid_ipv4 && rn->naddrs6) { + sa = AF_INET6; + } else if (ctx->avoid_ipv6 && rn->naddrs) { + sa = AF_INET; + } +#endif + addrs = ngx_resolver_export(r, rn, 1, sa, &naddrs); if (addrs == NULL) { return NGX_ERROR; } @@ -2403,7 +2412,7 @@ addrs = NULL; } else { - addrs = ngx_resolver_export(r, rn, 0); + addrs = ngx_resolver_export(r, rn, 0, AF_UNSPEC, &naddrs); if (addrs == NULL) { goto failed; } @@ -2425,7 +2434,7 @@ ctx = next; ctx->state = NGX_OK; ctx->valid = rn->valid; - ctx->naddrs = naddrs; + ctx->naddrs = (ctx->avoid_ipv6 && rn->naddrs) ? 
rn->naddrs : naddrs; if (addrs == NULL) { ctx->addrs = &ctx->addr; @@ -2437,6 +2446,12 @@ } else { ctx->addrs = addrs; +#if (NGX_HAVE_INET6) + if (ctx->avoid_ipv4 && rn->naddrs6) { + ctx->addrs = addrs + rn->naddrs; + ctx->naddrs = rn->naddrs6; + } +#endif } next = ctx->next; @@ -4183,7 +4198,7 @@ static ngx_resolver_addr_t * ngx_resolver_export(ngx_resolver_t *r, ngx_resolver_node_t *rn, - ngx_uint_t rotate) + ngx_uint_t rotate, sa_family_t af, ngx_uint_t *na) { ngx_uint_t d, i, j, n; in_addr_t *addr; @@ -4197,7 +4212,11 @@ n = rn->naddrs; #if (NGX_HAVE_INET6) - n += rn->naddrs6; + if (af == AF_INET6) { + n = rn->naddrs6; + } else if (af != AF_INET) { + n += rn->naddrs6; + } #endif dst = ngx_resolver_calloc(r, n * sizeof(ngx_resolver_addr_t)); @@ -4214,7 +4233,11 @@ i = 0; d = rotate ? ngx_random() % n : 0; - if (rn->naddrs) { + if (rn->naddrs +#if (NGX_HAVE_INET6) + && af != AF_INET6 +#endif + ) { j = rotate ? ngx_random() % rn->naddrs : 0; addr = (rn->naddrs == 1) ? &rn->u.addr : rn->u.addrs; @@ -4237,7 +4260,7 @@ } #if (NGX_HAVE_INET6) - if (rn->naddrs6) { + if (rn->naddrs6 && af != AF_INET) { j = rotate ? ngx_random() % rn->naddrs6 : 0; addr6 = (rn->naddrs6 == 1) ? 
&rn->u6.addr6 : rn->u6.addrs6; @@ -4260,6 +4283,7 @@ } #endif + *na = n; return dst; } diff -r aeab41dfd260 -r f922980f06b1 src/core/ngx_resolver.h --- a/src/core/ngx_resolver.h Mon Jan 17 17:05:12 2022 +0300 +++ b/src/core/ngx_resolver.h Wed Jan 19 08:12:51 2022 +0100 @@ -221,6 +221,8 @@ unsigned quick:1; unsigned async:1; unsigned cancelable:1; + unsigned avoid_ipv4:1; + unsigned avoid_ipv6:1; ngx_uint_t recursion; ngx_event_t *event; }; diff -r aeab41dfd260 -r f922980f06b1 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Jan 17 17:05:12 2022 +0300 +++ b/src/http/ngx_http_upstream.c Wed Jan 19 08:12:51 2022 +0100 @@ -770,6 +770,18 @@ ctx->handler = ngx_http_upstream_resolve_handler; ctx->data = r; ctx->timeout = clcf->resolver_timeout; +#if (NGX_HAVE_INET6) + if (u->peer.local) { + switch (u->peer.local->sockaddr->sa_family) { + case AF_INET: + ctx->avoid_ipv6 = 1; + break; + case AF_INET6: + ctx->avoid_ipv4 = 1; + break; + } + } +#endif u->resolved->ctx = ctx; diff -r aeab41dfd260 -r f922980f06b1 src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c Mon Jan 17 17:05:12 2022 +0300 +++ b/src/stream/ngx_stream_proxy_module.c Wed Jan 19 08:12:51 2022 +0100 @@ -547,6 +547,18 @@ ctx->handler = ngx_stream_proxy_resolve_handler; ctx->data = s; ctx->timeout = cscf->resolver_timeout; +#if (NGX_HAVE_INET6) + if (u->peer.local) { + switch (u->peer.local->sockaddr->sa_family) { + case AF_INET: + ctx->avoid_ipv6 = 1; + break; + case AF_INET6: + ctx->avoid_ipv4 = 1; + break; + } + } +#endif u->resolved->ctx = ctx; From mdounin at mdounin.ru Wed Jan 19 16:22:09 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Jan 2022 19:22:09 +0300 Subject: [PATCH] Prefer address family matching to local address in resolver (ticket #1535) In-Reply-To: References: Message-ID: Hello! 
On Wed, Jan 19, 2022 at 04:08:32PM +0100, Lukas Lihotzki via nginx-devel wrote: > # HG changeset patch > # User Lukas Lihotzki > # Date 1642576371 -3600 > # Wed Jan 19 08:12:51 2022 +0100 > # Node ID f922980f06b1162ae933c99c03bef09cfc12582f > # Parent aeab41dfd2606dd36cabbf01f1472726e27e8aea > Prefer address family matching to local address in resolver (ticket #1535). > > Without this change, upstream connections fail randomly for dual-stack > host names when specifying a proxy_bind address (ticket #1535). > > This changeset adds two flags for avoiding resolving to either IPv4 or IPv6 > addresses. stream and http set these flags based on the proxy_bind address. > Avoided addresses are still returned if there are none of the other family, so > the same error message as before is produced when connecting is impossible. Thank you for the patch. Suggested change looks wrong to me though, as it only addresses a particular use case when the name is resolved dynamically at run time using resolver. In particular, it won't address the exact configuration listed in ticket #1535. If the goal is to only address dynamic resolution, a better idea might be to use "resolver ... ipv6=off;" configured explicitly. Or, if you want to use only IPv6 addresses instead, introduce a similar option to only resolve IPv4 addresses: as previously discussed in ticket #1330, it might be beneficial for IPv6-only hosts in other cases as well. -- Maxim Dounin http://mdounin.ru/ From xeioex at nginx.com Wed Jan 19 17:18:03 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 19 Jan 2022 17:18:03 +0000 Subject: [njs] Fixed Function.prototype.apply() with slow arrays. Message-ID: details: https://hg.nginx.org/njs/rev/620418b1a641 branches: changeset: 1813:620418b1a641 user: Dmitry Volyntsev date: Wed Jan 19 14:03:49 2022 +0000 description: Fixed Function.prototype.apply() with slow arrays. 
Previously, the function had two issues: * array->start was referenced without checking for fast array flag * the created arguments list was not sanity-checked for its length, which can be very large. The fix is to remove micro-optimization for arrays and introduce limit size for arguments list. This closes #449 issue in Github. diffstat: src/njs_function.c | 17 +++++++---------- src/test/njs_unit_test.c | 4 ++++ 2 files changed, 11 insertions(+), 10 deletions(-) diffs (50 lines): diff -r c419a4e34998 -r 620418b1a641 src/njs_function.c --- a/src/njs_function.c Wed Jan 19 13:12:09 2022 +0000 +++ b/src/njs_function.c Wed Jan 19 14:03:49 2022 +0000 @@ -1385,18 +1385,10 @@ njs_function_prototype_apply(njs_vm_t *v if (njs_is_null_or_undefined(arr_like)) { length = 0; - goto activate; - - } else if (njs_is_array(arr_like)) { - arr = arr_like->data.u.array; + } - args = arr->start; - length = arr->length; - - goto activate; - - } else if (njs_slow_path(!njs_is_object(arr_like))) { + if (njs_slow_path(!njs_is_object(arr_like))) { njs_type_error(vm, "second argument is not an array-like object"); return NJS_ERROR; } @@ -1406,6 +1398,11 @@ njs_function_prototype_apply(njs_vm_t *v return ret; } + if (njs_slow_path(length > 1024)) { + njs_internal_error(vm, "argument list is too long"); + return NJS_ERROR; + } + arr = njs_array_alloc(vm, 1, length, NJS_ARRAY_SPARE); if (njs_slow_path(arr == NULL)) { return NJS_ERROR; diff -r c419a4e34998 -r 620418b1a641 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Wed Jan 19 13:12:09 2022 +0000 +++ b/src/test/njs_unit_test.c Wed Jan 19 14:03:49 2022 +0000 @@ -10063,6 +10063,10 @@ static njs_unit_test_t njs_test[] = "f.apply(123, {})"), njs_str("123") }, + { njs_str("(function(index, ...rest){ return rest[index];})" + ".apply({}, [1022].concat(Array(1023).fill(1).map((v,i)=>i.toString(16))))"), + njs_str("3fe") }, + { njs_str("String.prototype.concat.apply('a', " "{length:2, 0:{toString:function() {return 'b'}}, 1:'c'})"), 
njs_str("abc") }, From lukas at lihotzki.de Wed Jan 19 18:47:44 2022 From: lukas at lihotzki.de (=?iso-8859-1?q?Lukas_Lihotzki?=) Date: Wed, 19 Jan 2022 19:47:44 +0100 Subject: [PATCH] Add ipv4=off option in resolver like ipv6=off (ticket #1330) Message-ID: # HG changeset patch # User Lukas Lihotzki # Date 1642618053 -3600 # Wed Jan 19 19:47:33 2022 +0100 # Node ID e9f06dc2d6a4a1aa61c15009b84ceedcaf5983b2 # Parent aeab41dfd2606dd36cabbf01f1472726e27e8aea Add ipv4=off option in resolver like ipv6=off (ticket #1330). IPv6-only hosts (ticket #1330) and upstreams with IPv6 bind address (ticket #1535) need to disable resolving to IPv4 addresses. Ticket #1330 mentions ipv4=off is the proper fix. diff -r aeab41dfd260 -r e9f06dc2d6a4 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Mon Jan 17 17:05:12 2022 +0300 +++ b/src/core/ngx_resolver.c Wed Jan 19 19:47:33 2022 +0100 @@ -122,10 +122,13 @@ static ngx_int_t ngx_resolver_cmp_srvs(const void *one, const void *two); #if (NGX_HAVE_INET6) +#define ngx_ipv4_enabled(r) r->ipv4 static void ngx_resolver_rbtree_insert_addr6_value(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); static ngx_resolver_node_t *ngx_resolver_lookup_addr6(ngx_resolver_t *r, struct in6_addr *addr, uint32_t hash); +#else +#define ngx_ipv4_enabled(r) 1 #endif @@ -175,6 +178,7 @@ ngx_queue_init(&r->addr_expire_queue); #if (NGX_HAVE_INET6) + r->ipv4 = 1; r->ipv6 = 1; ngx_rbtree_init(&r->addr6_rbtree, &r->addr6_sentinel, @@ -225,6 +229,23 @@ } #if (NGX_HAVE_INET6) + if (ngx_strncmp(names[i].data, "ipv4=", 5) == 0) { + + if (ngx_strcmp(&names[i].data[5], "on") == 0) { + r->ipv4 = 1; + + } else if (ngx_strcmp(&names[i].data[5], "off") == 0) { + r->ipv4 = 0; + + } else { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid parameter: %V", &names[i]); + return NULL; + } + + continue; + } + if (ngx_strncmp(names[i].data, "ipv6=", 5) == 0) { if (ngx_strcmp(&names[i].data[5], "on") == 0) { @@ -273,6 +294,13 @@ } } +#if 
(NGX_HAVE_INET6) + if (!r->ipv4 && !r->ipv6) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "both ipv4 and ipv6 disabled"); + return NULL; + } +#endif + if (n && r->connections.nelts == 0) { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "no name servers defined"); return NULL; @@ -836,7 +864,7 @@ r->last_connection = 0; } - rn->naddrs = (u_short) -1; + rn->naddrs = ngx_ipv4_enabled(r) ? (u_short) -1 : 0; rn->tcp = 0; #if (NGX_HAVE_INET6) rn->naddrs6 = r->ipv6 ? (u_short) -1 : 0; @@ -3644,7 +3672,7 @@ len = sizeof(ngx_resolver_hdr_t) + nlen + sizeof(ngx_resolver_qs_t); #if (NGX_HAVE_INET6) - p = ngx_resolver_alloc(r, r->ipv6 ? len * 2 : len); + p = ngx_resolver_alloc(r, len * (r->ipv4 + r->ipv6)); #else p = ngx_resolver_alloc(r, len); #endif @@ -3657,19 +3685,21 @@ #if (NGX_HAVE_INET6) if (r->ipv6) { - rn->query6 = p + len; + rn->query6 = p + len * r->ipv4; } #endif query = (ngx_resolver_hdr_t *) p; - ident = ngx_random(); - - ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, - "resolve: \"%V\" A %i", name, ident & 0xffff); - - query->ident_hi = (u_char) ((ident >> 8) & 0xff); - query->ident_lo = (u_char) (ident & 0xff); + if (ngx_ipv4_enabled(r)) { + ident = ngx_random(); + + ngx_log_debug2(NGX_LOG_DEBUG_CORE, r->log, 0, + "resolve: \"%V\" A %i", name, ident & 0xffff); + + query->ident_hi = (u_char) ((ident >> 8) & 0xff); + query->ident_lo = (u_char) (ident & 0xff); + } /* recursion query */ query->flags_hi = 1; query->flags_lo = 0; diff -r aeab41dfd260 -r e9f06dc2d6a4 src/core/ngx_resolver.h --- a/src/core/ngx_resolver.h Mon Jan 17 17:05:12 2022 +0300 +++ b/src/core/ngx_resolver.h Wed Jan 19 19:47:33 2022 +0100 @@ -176,7 +176,8 @@ ngx_queue_t addr_expire_queue; #if (NGX_HAVE_INET6) - ngx_uint_t ipv6; /* unsigned ipv6:1; */ + unsigned ipv4:1; + unsigned ipv6:1; ngx_rbtree_t addr6_rbtree; ngx_rbtree_node_t addr6_sentinel; ngx_queue_t addr6_resend_queue; From devnexen at gmail.com Thu Jan 20 21:06:10 2022 From: devnexen at gmail.com (David CARLIER) Date: Thu, 20 Jan 2022 
21:06:10 +0000 Subject: [PATCH] workers process making them traceable on FreeBSD 11.x and above Message-ID: From 7f3194c36a9788d9b98630773ab907adb110cf6f Mon Sep 17 00:00:00 2001 From: David CARLIER Date: Thu, 20 Jan 2022 20:56:49 +0000 Subject: [PATCH] process worker, enabling process tracing and core dumping on FreeBSD 11.x and above using the procctl native API. Checking the version is enough as the functions and the flag we're interested in are available right off the bat. Signed-off-by: David CARLIER --- auto/os/freebsd | 11 +++++++++++ src/os/unix/ngx_freebsd_config.h | 4 ++++ src/os/unix/ngx_process_cycle.c | 10 ++++++++++ 3 files changed, 25 insertions(+) diff --git a/auto/os/freebsd b/auto/os/freebsd index 870bac4b..8bb086f0 100644 --- a/auto/os/freebsd +++ b/auto/os/freebsd @@ -103,3 +103,14 @@ if [ $version -ge 701000 ]; then echo " + cpuset_setaffinity() found" have=NGX_HAVE_CPUSET_SETAFFINITY . auto/have fi + +# procctl + + +# the procctl api and its PROC_TRACE_CTL* flags exists from +# FreeBSD 11.x + +if [ $version -ge 1100000 ]; then + echo " + procctl() found" + have=NGX_HAVE_PROCCTL . 
auto/have +fi diff --git a/src/os/unix/ngx_freebsd_config.h b/src/os/unix/ngx_freebsd_config.h index c641108b..04ed19ca 100644 --- a/src/os/unix/ngx_freebsd_config.h +++ b/src/os/unix/ngx_freebsd_config.h @@ -87,6 +87,10 @@ #include #endif +#if (NGX_HAVE_PROCCTL) +#include +#endif + #if (NGX_HAVE_FILE_AIO) diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c index 07cd05e8..c0cf052f 100644 --- a/src/os/unix/ngx_process_cycle.c +++ b/src/os/unix/ngx_process_cycle.c @@ -869,6 +869,16 @@ ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker) #endif +#if (NGX_HAVE_PROCCTL) + /* allow the process being traceable and producing a coredump in FreeBSD 11.x */ + ngx_int_t ctl = PROC_TRACE_CTL_ENABLE; + + if (procctl(P_PID, 0, PROC_TRACE_CTL, &ctl) == -1) { + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, + "procctl(PROC_TRACE_CTL_ENABLE) failed"); + } +#endif + if (ccf->working_directory.len) { if (chdir((char *) ccf->working_directory.data) == -1) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, -- 2.34.1 From p.pautov at f5.com Thu Jan 20 23:59:54 2022 From: p.pautov at f5.com (Pavel Pautov) Date: Thu, 20 Jan 2022 23:59:54 +0000 Subject: [nginx] Core: simplify reader lock release. Message-ID: details: https://hg.nginx.org/nginx/rev/7752d8523066 branches: changeset: 7995:7752d8523066 user: Pavel Pautov date: Wed Jan 19 17:37:34 2022 -0800 description: Core: simplify reader lock release. 
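The idea behind the simplification can be sketched outside of nginx with C11 atomics (a standalone illustration, not the nginx source — nginx uses its own ngx_atomic_* wrappers): the lock word holds either the writer sentinel or a reader count, so releasing a read lock never needs a compare-and-swap loop; an atomic decrement is enough.

```c
#include <stdatomic.h>
#include <assert.h>

#define WLOCK ((unsigned) -1)               /* writer-held sentinel */

static atomic_uint lock;

static void read_lock(void)
{
    unsigned readers;

    for ( ;; ) {
        readers = atomic_load(&lock);
        if (readers != WLOCK
            && atomic_compare_exchange_weak(&lock, &readers, readers + 1))
        {
            return;                          /* reader count incremented */
        }
    }
}

static void write_lock(void)
{
    unsigned expected;

    for ( ;; ) {
        expected = 0;                        /* only lockable when free */
        if (atomic_compare_exchange_weak(&lock, &expected, WLOCK)) {
            return;
        }
    }
}

static void rw_unlock(void)
{
    unsigned v = atomic_load(&lock);

    if (v == WLOCK) {
        /* writer release: reset the sentinel */
        atomic_compare_exchange_strong(&lock, &v, 0);
    } else {
        /* reader release: one atomic decrement, no CAS retry loop */
        atomic_fetch_sub(&lock, 1);
    }
}
```

The pre-patch reader release looped on a compare-and-swap of the observed reader count; since concurrent readers only ever add or subtract one, a plain fetch-and-sub reaches the same state with less contention.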
diffstat: src/core/ngx_rwlock.c | 18 +++--------------- 1 files changed, 3 insertions(+), 15 deletions(-) diffs (29 lines): diff -r aeab41dfd260 -r 7752d8523066 src/core/ngx_rwlock.c --- a/src/core/ngx_rwlock.c Mon Jan 17 17:05:12 2022 +0300 +++ b/src/core/ngx_rwlock.c Wed Jan 19 17:37:34 2022 -0800 @@ -89,22 +89,10 @@ ngx_rwlock_rlock(ngx_atomic_t *lock) void ngx_rwlock_unlock(ngx_atomic_t *lock) { - ngx_atomic_uint_t readers; - - readers = *lock; - - if (readers == NGX_RWLOCK_WLOCK) { + if (*lock == NGX_RWLOCK_WLOCK) { (void) ngx_atomic_cmp_set(lock, NGX_RWLOCK_WLOCK, 0); - return; - } - - for ( ;; ) { - - if (ngx_atomic_cmp_set(lock, readers, readers - 1)) { - return; - } - - readers = *lock; + } else { + (void) ngx_atomic_fetch_add(lock, -1); } } From mdounin at mdounin.ru Fri Jan 21 03:57:42 2022 From: mdounin at mdounin.ru (=?utf-8?q?Maxim_Dounin?=) Date: Fri, 21 Jan 2022 06:57:42 +0300 Subject: [PATCH] SSL: always renewing tickets with TLSv1.3 (ticket #1892) Message-ID: # HG changeset patch # User Maxim Dounin # Date 1642737110 -10800 # Fri Jan 21 06:51:50 2022 +0300 # Node ID cff51689a4a182cb11cba2eb9303e2bc21815432 # Parent 96ae8e57b3dd1b10f29d3060bbad93b7f9357b92 SSL: always renewing tickets with TLSv1.3 (ticket #1892). Chrome only uses TLS session tickets once with TLS 1.3, likely following the RFC 8446 Appendix C.4 recommendation. With OpenSSL, this works fine with built-in session tickets, since these are explicitly renewed in case of TLS 1.3 on each session reuse, but results in only two connections being reused after an initial handshake when using ssl_session_ticket_key. The fix is to always renew TLS session tickets in case of TLS 1.3 when using ssl_session_ticket_key, similarly to how it is done by OpenSSL internally. 
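OpenSSL's ticket key callback (SSL_CTX_set_tlsext_ticket_key_cb) signals its decision through its return value: on decryption, -1 means error, 0 means the ticket is unusable, 1 means reuse the ticket as-is, and 2 means reuse it but issue a fresh one. The patched decision can be restated as a small pure function; this is an illustrative sketch with hypothetical parameters, not the nginx code:

```c
/* Disposition for a successfully decrypted session ticket:
 * 1 = reuse the ticket as-is, 2 = reuse it but issue a fresh ticket.
 * is_tls13:  the negotiated protocol is TLS 1.3
 * key_index: 0 means the ticket was encrypted with the current
 *            (default) key, nonzero means an older key was used
 */
static int
ticket_disposition(int is_tls13, int key_index)
{
    if (is_tls13) {
        return 2;    /* TLS 1.3 clients treat tickets as single-use: renew */
    }

    return (key_index == 0) ? 1 : 2;    /* renew only for non-default keys */
}
```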
diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -4448,7 +4448,21 @@ ngx_ssl_session_ticket_key_callback(ngx_ return -1; } - return (i == 0) ? 1 : 2 /* renew */; + /* renew if TLSv1.3 */ + +#ifdef TLS1_3_VERSION + if (SSL_version(ssl_conn) == TLS1_3_VERSION) { + return 2; + } +#endif + + /* renew if non-default key */ + + if (i != 0) { + return 2; + } + + return 1; } } From xeioex at nginx.com Fri Jan 21 14:33:26 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Fri, 21 Jan 2022 14:33:26 +0000 Subject: [njs] Fixed recursive async function calls. Message-ID: details: https://hg.nginx.org/njs/rev/d776b59196c5 branches: changeset: 1814:d776b59196c5 user: Dmitry Volyntsev date: Fri Jan 21 14:31:30 2022 +0000 description: Fixed recursive async function calls. Previously, the PromiseCapability record was stored (function->context) directly in the function object during a function invocation. This is not correct, because the PromiseCapability record should be linked to the current execution context. As a result, function->context is overwritten by consecutive recursive calls, which results in a use-after-free. This closes issue #451 on GitHub. 
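The bug class is easy to reproduce in miniature: any per-invocation state kept on a shared object instead of in the call frame gets clobbered by recursion. A hedged C sketch of the pattern (illustrative only — njs stores a PromiseCapability pointer, not an int):

```c
/* Buggy pattern: per-call context kept on the shared "function object". */
static int shared_context;

static int
call_buggy(int depth, int my_context)
{
    shared_context = my_context;      /* like function->context = capability */

    if (depth > 0) {
        call_buggy(depth - 1, my_context + 1);   /* inner call overwrites it */
    }

    return shared_context;     /* the outer call now observes the inner value */
}

/* Fixed pattern: context travels with the execution context (call frame),
 * which is what passing promise_cap down through the interpreter achieves. */
static int
call_fixed(int depth, int my_context)
{
    int context = my_context;         /* lives in this invocation's frame */

    if (depth > 0) {
        call_fixed(depth - 1, my_context + 1);
    }

    return context;
}
```

In the buggy variant, `call_buggy(1, 0)` returns the inner call's context (1) instead of its own (0); in njs the analogous clobbering freed the capability early, hence the use-after-free.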
diffstat: src/njs_async.c | 15 ++------------- src/njs_function.c | 8 +++++--- src/njs_function.h | 3 ++- src/njs_value.h | 1 - src/njs_vm.c | 2 +- src/njs_vmcode.c | 18 ++++++++---------- src/njs_vmcode.h | 3 ++- test/js/async_recursive_last.t.js | 26 ++++++++++++++++++++++++++ test/js/async_recursive_mid.t.js | 26 ++++++++++++++++++++++++++ 9 files changed, 72 insertions(+), 30 deletions(-) diffs (264 lines): diff -r 620418b1a641 -r d776b59196c5 src/njs_async.c --- a/src/njs_async.c Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_async.c Fri Jan 21 14:31:30 2022 +0000 @@ -29,9 +29,7 @@ njs_async_function_frame_invoke(njs_vm_t return NJS_ERROR; } - frame->function->context = capability; - - ret = njs_function_lambda_call(vm); + ret = njs_function_lambda_call(vm, capability, NULL); if (ret == NJS_OK) { ret = njs_function_call(vm, njs_function(&capability->resolve), @@ -63,7 +61,6 @@ njs_await_fulfilled(njs_vm_t *vm, njs_va njs_int_t ret; njs_value_t **cur_local, **cur_closures, **cur_temp, *value; njs_frame_t *frame, *async_frame; - njs_function_t *function; njs_async_ctx_t *ctx; njs_native_frame_t *top, *async; @@ -78,8 +75,6 @@ njs_await_fulfilled(njs_vm_t *vm, njs_va async = &async_frame->native; async->previous = vm->top_frame; - function = async->function; - cur_local = vm->levels[NJS_LEVEL_LOCAL]; cur_closures = vm->levels[NJS_LEVEL_CLOSURE]; cur_temp = vm->levels[NJS_LEVEL_TEMP]; @@ -98,13 +93,7 @@ njs_await_fulfilled(njs_vm_t *vm, njs_va vm->top_frame->retval = &vm->retval; - function->context = ctx->capability; - function->await = ctx; - - ret = njs_vmcode_interpreter(vm, ctx->pc); - - function->context = NULL; - function->await = NULL; + ret = njs_vmcode_interpreter(vm, ctx->pc, ctx->capability, ctx); vm->levels[NJS_LEVEL_LOCAL] = cur_local; vm->levels[NJS_LEVEL_CLOSURE] = cur_closures; diff -r 620418b1a641 -r d776b59196c5 src/njs_function.c --- a/src/njs_function.c Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_function.c Fri Jan 21 14:31:30 2022 +0000 @@ 
-608,7 +608,7 @@ njs_function_call2(njs_vm_t *vm, njs_fun njs_int_t -njs_function_lambda_call(njs_vm_t *vm) +njs_function_lambda_call(njs_vm_t *vm, void *promise_cap, void *async_ctx) { uint32_t n; njs_int_t ret; @@ -622,6 +622,8 @@ njs_function_lambda_call(njs_vm_t *vm) frame = (njs_frame_t *) vm->top_frame; function = frame->native.function; + njs_assert(function->context == NULL); + if (function->global && !function->closure_copied) { ret = njs_function_capture_global_closures(vm, function); if (njs_slow_path(ret != NJS_OK)) { @@ -698,7 +700,7 @@ njs_function_lambda_call(njs_vm_t *vm) } } - ret = njs_vmcode_interpreter(vm, lambda->start); + ret = njs_vmcode_interpreter(vm, lambda->start, promise_cap, async_ctx); /* Restore current level. */ vm->levels[NJS_LEVEL_LOCAL] = cur_local; @@ -775,7 +777,7 @@ njs_function_frame_invoke(njs_vm_t *vm, return njs_function_native_call(vm); } else { - return njs_function_lambda_call(vm); + return njs_function_lambda_call(vm, NULL, NULL); } } diff -r 620418b1a641 -r d776b59196c5 src/njs_function.h --- a/src/njs_function.h Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_function.h Fri Jan 21 14:31:30 2022 +0000 @@ -112,7 +112,8 @@ njs_int_t njs_function_lambda_frame(njs_ njs_int_t njs_function_call2(njs_vm_t *vm, njs_function_t *function, const njs_value_t *this, const njs_value_t *args, njs_uint_t nargs, njs_value_t *retval, njs_bool_t ctor); -njs_int_t njs_function_lambda_call(njs_vm_t *vm); +njs_int_t njs_function_lambda_call(njs_vm_t *vm, void *promise_cap, + void *async_ctx); njs_int_t njs_function_native_call(njs_vm_t *vm); njs_native_frame_t *njs_function_frame_alloc(njs_vm_t *vm, size_t size); void njs_function_frame_free(njs_vm_t *vm, njs_native_frame_t *frame); diff -r 620418b1a641 -r d776b59196c5 src/njs_value.h --- a/src/njs_value.h Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_value.h Fri Jan 21 14:31:30 2022 +0000 @@ -270,7 +270,6 @@ struct njs_function_s { } u; void *context; - void *await; njs_value_t *bound; }; diff 
-r 620418b1a641 -r d776b59196c5 src/njs_vm.c --- a/src/njs_vm.c Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_vm.c Fri Jan 21 14:31:30 2022 +0000 @@ -490,7 +490,7 @@ njs_vm_start(njs_vm_t *vm) return ret; } - ret = njs_vmcode_interpreter(vm, vm->start); + ret = njs_vmcode_interpreter(vm, vm->start, NULL, NULL); return (ret == NJS_ERROR) ? NJS_ERROR : NJS_OK; } diff -r 620418b1a641 -r d776b59196c5 src/njs_vmcode.c --- a/src/njs_vmcode.c Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_vmcode.c Fri Jan 21 14:31:30 2022 +0000 @@ -42,7 +42,8 @@ static njs_jump_off_t njs_vmcode_debugge static njs_jump_off_t njs_vmcode_return(njs_vm_t *vm, njs_value_t *invld, njs_value_t *retval); -static njs_jump_off_t njs_vmcode_await(njs_vm_t *vm, njs_vmcode_await_t *await); +static njs_jump_off_t njs_vmcode_await(njs_vm_t *vm, njs_vmcode_await_t *await, + njs_promise_capability_t *pcap, njs_async_ctx_t *actx); static njs_jump_off_t njs_vmcode_try_start(njs_vm_t *vm, njs_value_t *value, njs_value_t *offset, u_char *pc); @@ -77,7 +78,8 @@ static njs_jump_off_t njs_function_frame njs_int_t -njs_vmcode_interpreter(njs_vm_t *vm, u_char *pc) +njs_vmcode_interpreter(njs_vm_t *vm, u_char *pc, void *promise_cap, + void *async_ctx) { u_char *catch; double num, exponent; @@ -826,7 +828,7 @@ next: case NJS_VMCODE_AWAIT: await = (njs_vmcode_await_t *) pc; - return njs_vmcode_await(vm, await); + return njs_vmcode_await(vm, await, promise_cap, async_ctx); case NJS_VMCODE_TRY_START: ret = njs_vmcode_try_start(vm, value1, value2, pc); @@ -1812,7 +1814,8 @@ njs_vmcode_return(njs_vm_t *vm, njs_valu static njs_jump_off_t -njs_vmcode_await(njs_vm_t *vm, njs_vmcode_await_t *await) +njs_vmcode_await(njs_vm_t *vm, njs_vmcode_await_t *await, + njs_promise_capability_t *pcap, njs_async_ctx_t *ctx) { size_t size; njs_int_t ret; @@ -1820,7 +1823,6 @@ njs_vmcode_await(njs_vm_t *vm, njs_vmcod njs_value_t ctor, val, on_fulfilled, on_rejected, *value; njs_promise_t *promise; njs_function_t *fulfilled, *rejected; - 
njs_async_ctx_t *ctx; njs_native_frame_t *active; active = &vm->active_frame->native; @@ -1837,8 +1839,6 @@ njs_vmcode_await(njs_vm_t *vm, njs_vmcod return NJS_ERROR; } - ctx = active->function->await; - if (ctx == NULL) { ctx = njs_mp_alloc(vm->mem_pool, sizeof(njs_async_ctx_t)); if (njs_slow_path(ctx == NULL)) { @@ -1854,9 +1854,7 @@ njs_vmcode_await(njs_vm_t *vm, njs_vmcod } ctx->await = fulfilled->context; - ctx->capability = active->function->context; - - active->function->context = NULL; + ctx->capability = pcap; ret = njs_function_frame_save(vm, ctx->await, NULL); if (njs_slow_path(ret != NJS_OK)) { diff -r 620418b1a641 -r d776b59196c5 src/njs_vmcode.h --- a/src/njs_vmcode.h Wed Jan 19 14:03:49 2022 +0000 +++ b/src/njs_vmcode.h Fri Jan 21 14:31:30 2022 +0000 @@ -437,7 +437,8 @@ typedef struct { } njs_vmcode_await_t; -njs_int_t njs_vmcode_interpreter(njs_vm_t *vm, u_char *pc); +njs_int_t njs_vmcode_interpreter(njs_vm_t *vm, u_char *pc, + void *promise_cap, void *async_ctx); njs_object_t *njs_function_new_object(njs_vm_t *vm, njs_value_t *constructor); diff -r 620418b1a641 -r d776b59196c5 test/js/async_recursive_last.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/async_recursive_last.t.js Fri Jan 21 14:31:30 2022 +0000 @@ -0,0 +1,26 @@ +/*--- +includes: [compareArray.js] +flags: [async] +---*/ + +let stages = []; + +async function f(v) { + if (v == 3) { + return; + } + + stages.push(`f>${v}`); + + f(v + 1); + + stages.push(`f<${v}`); + + await "X"; +} + +f(0) +.then(v => { + assert.compareArray(stages, ['f>0', 'f>1', 'f>2', 'f<2', 'f<1', 'f<0']); +}) +.then($DONE, $DONE); diff -r 620418b1a641 -r d776b59196c5 test/js/async_recursive_mid.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/async_recursive_mid.t.js Fri Jan 21 14:31:30 2022 +0000 @@ -0,0 +1,26 @@ +/*--- +includes: [compareArray.js] +flags: [async] +---*/ + +let stages = []; + +async function f(v) { + if (v == 3) { + return; + } + + stages.push(`f>${v}`); + + await "X"; 
+ + f(v + 1); + + stages.push(`f<${v}`); +} + +f(0) +.then(v => { + assert.compareArray(stages, ['f>0','f>1','f<0','f>2','f<1']); +}) +.then($DONE, $DONE); From pluknet at nginx.com Fri Jan 21 16:20:16 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Fri, 21 Jan 2022 19:20:16 +0300 Subject: [PATCH] workers process making them traceable on FreeBSD 11.x and above In-Reply-To: References: Message-ID: <0924ECCC-A1F0-42A5-BDD6-8CA76F84242A@nginx.com> > On 21 Jan 2022, at 00:06, David CARLIER wrote: > > From 7f3194c36a9788d9b98630773ab907adb110cf6f Mon Sep 17 00:00:00 2001 > From: David CARLIER > Date: Thu, 20 Jan 2022 20:56:49 +0000 > Subject: [PATCH] process worker, enabling process tracing and core dumping on > FreeBSD 11.x and above using the procctl native API. Checking the version is > enough as the functions and the flag we re interested in are available right > off the bat. > > Signed-off-by: David CARLIER Can you please elaborate, what is the purpose of the proposed change? In FreeBSD, tracing is enabled by default by means of PROC_TRACE_CTL (yet, it could be denied by other means, see p_candebug() impl. in kernel sources). Thus, enabling tracing explicitly makes no sense unless it was previously disabled, such as with proccontrol(1). That is a P2_NOTRACE kernel process flag, it controls. (In particular, the kern.coredump sysctl used to enable/disable coredump systemwide has a weak connection to PROC_TRACE_CTL; it is a distinct way to specifically deny coredump). Note that while API indeed first appeared in FreeBSD 11, it was merged in some form to stable/10, back to FreeBSD 10.2. OTOH, the patch uses semantics not originally present in 11. P_PID:0 is a shortcut added rather recently in f833ab9dd187, MFCed to stable/13 post 13.0. So, it doesn't present in neither of the released versions. In particular, on 13.0 the proposed call results in [EPERM], since procctl() naturally searches for PID 0, which doesn't match. 
With these thoughts, the patch doesn't look useful. See below for more. > --- > auto/os/freebsd | 11 +++++++++++ > src/os/unix/ngx_freebsd_config.h | 4 ++++ > src/os/unix/ngx_process_cycle.c | 10 ++++++++++ > 3 files changed, 25 insertions(+) > > diff --git a/auto/os/freebsd b/auto/os/freebsd > index 870bac4b..8bb086f0 100644 > --- a/auto/os/freebsd > +++ b/auto/os/freebsd > @@ -103,3 +103,14 @@ if [ $version -ge 701000 ]; then > echo " + cpuset_setaffinity() found" > have=NGX_HAVE_CPUSET_SETAFFINITY . auto/have > fi > + > +# procctl > + > + > +# the procctl api and its PROC_TRACE_CTL* flags exists from > +# FreeBSD 11.x > + > +if [ $version -ge 1100000 ]; then > + echo " + procctl() found" > + have=NGX_HAVE_PROCCTL . auto/have > +fi > diff --git a/src/os/unix/ngx_freebsd_config.h b/src/os/unix/ngx_freebsd_config.h > index c641108b..04ed19ca 100644 > --- a/src/os/unix/ngx_freebsd_config.h > +++ b/src/os/unix/ngx_freebsd_config.h > @@ -87,6 +87,10 @@ > #include > #endif > > +#if (NGX_HAVE_PROCCTL) > +#include > +#endif > + > > #if (NGX_HAVE_FILE_AIO) > > diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c > index 07cd05e8..c0cf052f 100644 > --- a/src/os/unix/ngx_process_cycle.c > +++ b/src/os/unix/ngx_process_cycle.c > @@ -869,6 +869,16 @@ ngx_worker_process_init(ngx_cycle_t *cycle, > ngx_int_t worker) > > #endif > > +#if (NGX_HAVE_PROCCTL) > + /* allow the process being traceable and producing a coredump in > FreeBSD 11.x */ > + ngx_int_t ctl = PROC_TRACE_CTL_ENABLE; > + > + if (procctl(P_PID, 0, PROC_TRACE_CTL, &ctl) == -1) { > + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > + "procctl(PROC_TRACE_CTL_ENABLE) failed"); > + } > +#endif > + > if (ccf->working_directory.len) { > if (chdir((char *) ccf->working_directory.data) == -1) { > ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, Given the place for patching, I can try to guess that the actual intention is to be on par with Linux'ish PR_SET_DUMPABLE, that is, to allow 
tracing/coredump with UID/GID set. Indeed, ability to trace a process is denied with UID/GID set. This is controlled with a kernel process flag P_SUGID. But, P_SUGID has a higher precedence over P2_NOTRACE. -- Sergey Kandaurov From devnexen at gmail.com Fri Jan 21 16:29:16 2022 From: devnexen at gmail.com (David CARLIER) Date: Fri, 21 Jan 2022 16:29:16 +0000 Subject: [PATCH] workers process making them traceable on FreeBSD 11.x and above In-Reply-To: <0924ECCC-A1F0-42A5-BDD6-8CA76F84242A@nginx.com> References: <0924ECCC-A1F0-42A5-BDD6-8CA76F84242A@nginx.com> Message-ID: Hi and thanks for your review and time and understand your stance. On Fri, 21 Jan 2022 at 16:20, Sergey Kandaurov wrote: > > > > On 21 Jan 2022, at 00:06, David CARLIER wrote: > > > > From 7f3194c36a9788d9b98630773ab907adb110cf6f Mon Sep 17 00:00:00 2001 > > From: David CARLIER > > Date: Thu, 20 Jan 2022 20:56:49 +0000 > > Subject: [PATCH] process worker, enabling process tracing and core dumping on > > FreeBSD 11.x and above using the procctl native API. Checking the version is > > enough as the functions and the flag we re interested in are available right > > off the bat. > > > > Signed-off-by: David CARLIER > > Can you please elaborate, > what is the purpose of the proposed change? > > In FreeBSD, tracing is enabled by default by means of PROC_TRACE_CTL > (yet, it could be denied by other means, see p_candebug() impl. in > kernel sources). Thus, enabling tracing explicitly makes no sense > unless it was previously disabled, such as with proccontrol(1). > That is a P2_NOTRACE kernel process flag, it controls. > (In particular, the kern.coredump sysctl used to enable/disable > coredump systemwide has a weak connection to PROC_TRACE_CTL; > it is a distinct way to specifically deny coredump). > > Note that while API indeed first appeared in FreeBSD 11, > it was merged in some form to stable/10, back to FreeBSD 10.2. > OTOH, the patch uses semantics not originally present in 11. 
> P_PID:0 is a shortcut added rather recently in f833ab9dd187, > MFCed to stable/13 post 13.0. So, it doesn't present in > neither of the released versions. > In particular, on 13.0 the proposed call results in [EPERM], > since procctl() naturally searches for PID 0, which doesn't match. Ah, good point; I did the dev on a FreeBSD 14 snapshot. > > With these thoughts, the patch doesn't look useful. > See below for more. > > > --- > > auto/os/freebsd | 11 +++++++++++ > > src/os/unix/ngx_freebsd_config.h | 4 ++++ > > src/os/unix/ngx_process_cycle.c | 10 ++++++++++ > > 3 files changed, 25 insertions(+) > > > > diff --git a/auto/os/freebsd b/auto/os/freebsd > > index 870bac4b..8bb086f0 100644 > > --- a/auto/os/freebsd > > +++ b/auto/os/freebsd > > @@ -103,3 +103,14 @@ if [ $version -ge 701000 ]; then > > echo " + cpuset_setaffinity() found" > > have=NGX_HAVE_CPUSET_SETAFFINITY . auto/have > > fi > > + > > +# procctl > > + > > + > > +# the procctl api and its PROC_TRACE_CTL* flags exists from > > +# FreeBSD 11.x > > + > > +if [ $version -ge 1100000 ]; then > > + echo " + procctl() found" > > + have=NGX_HAVE_PROCCTL .
auto/have > > +fi > > diff --git a/src/os/unix/ngx_freebsd_config.h b/src/os/unix/ngx_freebsd_config.h > > index c641108b..04ed19ca 100644 > > --- a/src/os/unix/ngx_freebsd_config.h > > +++ b/src/os/unix/ngx_freebsd_config.h > > @@ -87,6 +87,10 @@ > > #include > > #endif > > > > +#if (NGX_HAVE_PROCCTL) > > +#include > > +#endif > > + > > > > #if (NGX_HAVE_FILE_AIO) > > > > diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c > > index 07cd05e8..c0cf052f 100644 > > --- a/src/os/unix/ngx_process_cycle.c > > +++ b/src/os/unix/ngx_process_cycle.c > > @@ -869,6 +869,16 @@ ngx_worker_process_init(ngx_cycle_t *cycle, > > ngx_int_t worker) > > > > #endif > > > > +#if (NGX_HAVE_PROCCTL) > > + /* allow the process being traceable and producing a coredump in > > FreeBSD 11.x */ > > + ngx_int_t ctl = PROC_TRACE_CTL_ENABLE; > > + > > + if (procctl(P_PID, 0, PROC_TRACE_CTL, &ctl) == -1) { > > + ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > > + "procctl(PROC_TRACE_CTL_ENABLE) failed"); > > + } > > +#endif > > + > > if (ccf->working_directory.len) { > > if (chdir((char *) ccf->working_directory.data) == -1) { > > ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, > > Given the place for patching, I can try to guess that the actual > intention is to be on par with Linux'ish PR_SET_DUMPABLE, > that is, to allow tracing/coredump with UID/GID set. > > Indeed, ability to trace a process is denied with UID/GID set. > This is controlled with a kernel process flag P_SUGID. > But, P_SUGID has a higher precedence over P2_NOTRACE. > > -- > Sergey Kandaurov > > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org From mdounin at mdounin.ru Fri Jan 21 21:30:22 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Jan 2022 21:30:22 +0000 Subject: [nginx] Contrib: vim syntax adjusted to save cpoptions (ticket #2276). 
Message-ID: details: https://hg.nginx.org/nginx/rev/5d88e2bf92b3 branches: changeset: 7996:5d88e2bf92b3 user: Maxim Dounin date: Sat Jan 22 00:28:51 2022 +0300 description: Contrib: vim syntax adjusted to save cpoptions (ticket #2276). Line continuation as used in the syntax file might be broken if "compatible" is set or "C" is added to cpoptions. Fix is to set the "cpoptions" option to vim default value at script start and restore it later, see ":help use-cpo-save". diffstat: contrib/vim/syntax/nginx.vim | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diffs (21 lines): diff -r 7752d8523066 -r 5d88e2bf92b3 contrib/vim/syntax/nginx.vim --- a/contrib/vim/syntax/nginx.vim Wed Jan 19 17:37:34 2022 -0800 +++ b/contrib/vim/syntax/nginx.vim Sat Jan 22 00:28:51 2022 +0300 @@ -5,6 +5,9 @@ if exists("b:current_syntax") finish end +let s:save_cpo = &cpo +set cpo&vim + " general syntax if has("patch-7.4.1142") @@ -2485,4 +2488,7 @@ hi def link ngxDirectiveThirdPartyDeprec hi def link ngxListenOptions Keyword hi def link ngxListenOptionsDeprecated Error +let &cpo = s:save_cpo +unlet s:save_cpo + let b:current_syntax = "nginx" From dnj0496 at gmail.com Sat Jan 22 00:07:26 2022 From: dnj0496 at gmail.com (Dk Jack) Date: Fri, 21 Jan 2022 16:07:26 -0800 Subject: internal redirect and module context Message-ID: Hi, I have a question related to internal redirect, I am hoping someone from this forum can clarify. The email is a bit long since I wanted to provide enough background for my situation. In my module, I am creating my module context and saving some state. Some of this state is allocated using ngx_palloc. I am releasing this memory by adding a pool clean up handler. In my module, for certain requests I am doing an internal redirect. 
My code for redirect looks something like this: ngx_http_internal_redirect(r, &new_uri, &r->args); ngx_http_finalize_request(r, NGX_DONE); According to the documentation http://nginx.org/en/docs/dev/development_guide.html#http_request_redirection it says, on calling internal_redirect, the module context will be erased to avoid inconsistencies. It also says, the processing returns to the NGX_HTTP_SERVER_REWRITE_PHASE. To understand the behavior better, I attached a debugger and added a breakpoint after the above two lines. When the debugger stopped at my breakpoint, my module context still seems to be valid. I added a second breakpoint in my rewrite-handler and allowed the debugger to continue. Now when the debugger stopped at the second breakpoint, my module context was erased which seems consistent with the documentation. So my question is, if my context is erased, what happens to the memory I allocated before my module invoked the internal redirect call? I put a log statement in my cleanup function and I observed that it is getting invoked only once at the end of the request processing. It is not getting called when my context is erased after an internal redirect. Since I need my context data after redirection, do I reallocate and recreate it? Since my clean up code is getting called only once. Would this lead to a memory leak if I reallocated after the internal redirect call because I'd be allocated once before redirect and once after redirect. Any help or clarification in this regard is greatly appreciated. Dk. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Jan 22 05:20:20 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 22 Jan 2022 08:20:20 +0300 Subject: internal redirect and module context In-Reply-To: References: Message-ID: Hello! 
On Fri, Jan 21, 2022 at 04:07:26PM -0800, Dk Jack wrote: > I have a question related to internal redirect, I am hoping someone from > this forum can clarify. The email is a bit long since I wanted to provide > enough background for my situation. > > In my module, I am creating my module context and saving some state. Some > of this state is allocated using ngx_palloc. I am releasing this memory by > adding a pool clean up handler. > > In my module, for certain requests I am doing an internal redirect. My code > for redirect looks something like this: > > ngx_http_internal_redirect(r, &new_uri, &r->args); > ngx_http_finalize_request(r, NGX_DONE); > > According to the documentation > http://nginx.org/en/docs/dev/development_guide.html#http_request_redirection > > it says, on calling internal_redirect, the module context will be erased to > avoid inconsistencies. It also says, the processing returns to the > NGX_HTTP_SERVER_REWRITE_PHASE. To understand the behavior better, I > attached a debugger and added a breakpoint after the above two lines. When > the debugger stopped at my breakpoint, my module context still seems to be > valid. I added a second breakpoint in my rewrite-handler and allowed the > debugger to continue. Now when the debugger stopped at the second > breakpoint, my module context was erased which seems consistent with the > documentation. > > So my question is, if my context is erased, what happens to the memory I > allocated before my module invoked the internal redirect call? I put a log > statement in my cleanup function and I observed that it is getting invoked > only once at the end of the request processing. It is not getting called > when my context is erased after an internal redirect. Since I need my > context data after redirection, do I reallocate and recreate it? Since my > clean up code is getting called only once. 
Would this lead to a memory leak > if I reallocated after the internal redirect call because I'd be allocated > once before redirect and once after redirect. Any help or clarification in > this regard is greatly appreciated. By saying "all request contexts are erased" the development guide means that module contexts will no longer be available via the ngx_http_get_module_ctx() macro. The ngx_http_internal_redirect() clears the r->ctx[] array of pointers: /* clear the modules contexts */ ngx_memzero(r->ctx, sizeof(void *) * ngx_http_max_module); That is, the actual memory you've used for your module context will be intact, but the pointer you've saved into the request with the ngx_http_set_module_ctx() macro will no longer be returned by the ngx_http_get_module_ctx() macro. Usually, modules will simply re-allocate the context as needed if it's not present. As long as context memory is allocated from the request pool, this won't cause a memory leak: all memory allocated from the request pool is automatically released when the request pool is destroyed. If you are allocating some external resources, such as open file descriptors or memory allocated directly from the OS, you'll have to use a cleanup handler to free these resources. In this case, you have to make sure that the cleanup handler you've installed will free all the external resources you've allocated. Usually this means that you'll have to add a cleanup handler per resource: for example, nginx adds a cleanup handler for each file it opens, see ngx_open_cached_file(). Or, if you are keeping pointers to the allocated resources in your module context, a cleanup handler per allocated context might be a good option. Note well that in some cases it might not be possible to re-create the module context, for example, if some information is no longer available.
In this case it is possible to preserve the module context by saving the pointer elsewhere, and restoring it if possible instead of re-allocating if ngx_http_get_module_ctx() returns NULL. For example, the realip module uses request pool cleanup handlers to save and restore its context when needed, see ngx_http_realip_get_module_ctx(). Hope this helps. -- Maxim Dounin http://mdounin.ru/ From dnj0496 at gmail.com Sat Jan 22 10:19:05 2022 From: dnj0496 at gmail.com (Dk Jack) Date: Sat, 22 Jan 2022 02:19:05 -0800 Subject: internal redirect and module context In-Reply-To: References: Message-ID: Maxim, Thanks for responding to my query. I am passing the original context pointer to the clean up handler. When my cleanup handler is called I am retrieving the context pointer to clean up external resources. Based on your response, the pointer saved in the cleanup handler should still be valid and should be still safe to use and no memory/resources will be leaked if I use that pointer to cleanup old allocations. This seems to be in agreement with what I observed in my debugging. A follow up question. After the redirect call, I am recreating the context and restoring some of the data. However, like you mentioned I cannot restore all the data. Currently, I am not accessing inaccessible data, it seems to be working fine. However, in case I need to access the lost data, is there another area in the request that is not disturbed by the redirect call where I can save the context data? Regards, Dk. --------------------------------------------------------------------------------------------- By saying "all request contexts are erased" the development guide means that module contexts will no longer be available via the ngx_http_get_module_ctx() macro. 
The ngx_http_internal_redirect() clears the r->ctx[] array of pointers: /* clear the modules contexts */ ngx_memzero(r->ctx, sizeof(void *) * ngx_http_max_module); That is, the actual memory you've used for your module context will be intact, but the pointer you've saved into the request with the ngx_http_set_module_ctx() macro will no longer be returned by the ngx_http_get_module_ctx() macro. Usually, modules will simply re-allocate the context as needed if it's not present. As long as context memory is allocated from the request pool, this won't cause a memory leak: all memory allocated from the request pool is automatically released when the request pool is destroyed. If you are allocating some external resources, such as open file descriptors or memory allocated directly from the OS, you'll have to use a cleanup handler to free these resources. In this case, you have to make sure that the cleanup handler you've installed will free all the external resources you've allocated. Usually this means that you'll have to add a cleanup handler per resource: for example, nginx adds a cleanup handler for each file it opens, see ngx_open_cached_file(). Or, if you are keeping pointers to the allocated resources in your module context, a cleanup handler per allocated context might be a good option. Note well that in some cases it might not be possible to re-create the module context, for example, if some information is no longer available. In this case it is possible to preserve the module context by saving the pointer elsewhere, and restoring it if possible instead of re-allocating if ngx_http_get_module_ctx() returns NULL. For example, the realip module uses request pool cleanup handlers to save and restore its context when needed, see ngx_http_realip_get_module_ctx(). Hope this helps.
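The behaviour described above can be modelled outside of nginx. The following self-contained toy (plain C, not nginx code; `toy_request` is a made-up stand-in for ngx_http_request_t) shows why zeroing the ctx slot array does not invalidate a pointer that a cleanup handler captured earlier:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TOY_MAX_MODULE 4

/* Stand-in for ngx_http_request_t: a ctx[] slot array plus one
 * pointer captured earlier by a pool cleanup handler. */
typedef struct {
    void *ctx[TOY_MAX_MODULE];
    void *cleanup_data;
} toy_request;

/* Mirrors what ngx_http_internal_redirect() does:
 * ngx_memzero(r->ctx, sizeof(void *) * ngx_http_max_module);
 * Only the pointer slots are zeroed; the allocations they pointed
 * at are untouched. */
void
toy_internal_redirect(toy_request *r)
{
    memset(r->ctx, 0, sizeof(void *) * TOY_MAX_MODULE);
}
```

After a "redirect", looking the context up through the slot array fails, but the copy of the pointer held by the cleanup record still refers to the same live allocation, which is exactly why freeing external resources through it remains safe.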
-- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list -- nginx-devel at nginx.org To unsubscribe send an email to nginx-devel-leave at nginx.org On Fri, Jan 21, 2022 at 4:07 PM Dk Jack wrote: > Hi, > I have a question related to internal redirect, I am hoping someone from > this forum can clarify. The email is a bit long since I wanted to provide > enough background for my situation. > > In my module, I am creating my module context and saving some state. Some > of this state is allocated using ngx_palloc. I am releasing this memory by > adding a pool clean up handler. > > In my module, for certain requests I am doing an internal redirect. My > code for redirect looks something like this: > > ngx_http_internal_redirect(r, &new_uri, &r->args); > ngx_http_finalize_request(r, NGX_DONE); > > According to the documentation > > http://nginx.org/en/docs/dev/development_guide.html#http_request_redirection > > it says, on calling internal_redirect, the module context will be erased > to avoid inconsistencies. It also says, the processing returns to the > NGX_HTTP_SERVER_REWRITE_PHASE. To understand the behavior better, I > attached a debugger and added a breakpoint after the above two lines. When > the debugger stopped at my breakpoint, my module context still seems to be > valid. I added a second breakpoint in my rewrite-handler and allowed the > debugger to continue. Now when the debugger stopped at the second > breakpoint, my module context was erased which seems consistent with the > documentation. > > So my question is, if my context is erased, what happens to the memory I > allocated before my module invoked the internal redirect call? I put a log > statement in my cleanup function and I observed that it is getting invoked > only once at the end of the request processing. It is not getting called > when my context is erased after an internal redirect. 
Since I need my > context data after redirection, do I reallocate and recreate it? Since my > clean up code is getting called only once. Would this lead to a memory leak > if I reallocated after the internal redirect call because I'd be allocated > once before redirect and once after redirect. Any help or clarification in > this regard is greatly appreciated. > > Dk. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sat Jan 22 19:08:21 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 22 Jan 2022 22:08:21 +0300 Subject: internal redirect and module context In-Reply-To: References: Message-ID: Hello! On Sat, Jan 22, 2022 at 02:19:05AM -0800, Dk Jack wrote: > Maxim, > Thanks for responding to my query. I am passing the original context > pointer to the clean up handler. When my cleanup handler is called I am > retrieving the context pointer to clean up external resources. Based on > your response, the pointer saved in the cleanup handler should still be > valid and should be still safe to use and no memory/resources will be > leaked if I use that pointer to cleanup old allocations. This seems to be > in agreement with what I observed in my debugging. Yes, that looks correct. Note that if you create a new context with new resources, you have to add another cleanup handler to free these new resources as well. > A follow up question. After the redirect call, I am recreating the context > and restoring some of the data. However, like you mentioned I cannot > restore all the data. Currently, I am not accessing inaccessible data, > it seems to be working fine. However, in case I need to access the lost > data, is there another area in the request that is not disturbed by the > redirect call where I can save the context data? Cleanup handlers are the best way to go; check the realip module and the ngx_http_realip_get_module_ctx() function I've mentioned in the previous message.
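The save-and-restore pattern pointed at here can be sketched in plain C. This is a toy, not the actual nginx API: the `rtoy_*` names are invented, and a single cleanup record stands in for walking the request pool's cleanup chain the way ngx_http_realip_get_module_ctx() does.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define RTOY_MAX_MODULE 4

/* Toy cleanup record: in nginx this would be an ngx_pool_cleanup_t,
 * and the handler field identifies which module installed it. */
typedef struct {
    void (*handler)(void *data);
    void  *data;
} rtoy_cleanup;

typedef struct {
    void         *ctx[RTOY_MAX_MODULE];
    rtoy_cleanup  cleanup;            /* one record, for brevity */
} rtoy_request;

void rtoy_module_cleanup(void *data) { (void) data; }

/* Modelled on the realip trick: if the ctx slot was zeroed by an
 * internal redirect, recover the pointer from the cleanup record
 * that was installed when the context was first created. */
void *
rtoy_get_module_ctx(rtoy_request *r, int module)
{
    void *ctx = r->ctx[module];

    if (ctx == NULL && r->cleanup.handler == rtoy_module_cleanup) {
        ctx = r->cleanup.data;        /* restore the saved pointer */
    }

    return ctx;
}
```

The design point is that the cleanup chain survives ngx_http_internal_redirect() while the ctx slots do not, so it doubles as an out-of-band place to stash the context pointer.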
-- Maxim Dounin http://mdounin.ru/ From dnj0496 at gmail.com Sat Jan 22 19:40:10 2022 From: dnj0496 at gmail.com (Dk Jack) Date: Sat, 22 Jan 2022 11:40:10 -0800 Subject: internal redirect and module context In-Reply-To: References: Message-ID: That is exactly what I needed. Thank you. Dk. On Sat, Jan 22, 2022 at 11:08 AM Maxim Dounin wrote: > Hello! > > On Sat, Jan 22, 2022 at 02:19:05AM -0800, Dk Jack wrote: > > > Maxim, > > Thanks for responding to my query. I am passing the original context > > pointer to the clean up handler. When my cleanup handler is called I am > > retrieving the context pointer to clean up external resources. Based on > > your response, the pointer saved in the cleanup handler should still be > > valid and should be still safe to use and no memory/resources will be > > leaked if I use that pointer to cleanup old allocations. This seems to be > > in agreement with what I observed in my debugging. > > Yes, that's look correct. Note that if you create a new > context with new resources, you have to add another cleanup > handler to free these new resources as well. > > > A follow up question. After the redirect call, I am recreating the > context > > and restoring some of the data. However, like you mentioned I cannot > > restore all the data. Currently, I am not accessing inaccessible data, > > it seems to be working fine. However, in case I need to access the lost > > data, is there another area in the request that is not disturbed by the > > redirect call where I can save the context data? > > Cleanup handlers is the best way go, check the realip module and > the ngx_http_realip_get_module_ctx() function I've mentioned in > the previous message. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pluknet at nginx.com Mon Jan 24 12:35:18 2022 From: pluknet at nginx.com (Sergey Kandaurov) Date: Mon, 24 Jan 2022 15:35:18 +0300 Subject: [PATCH] SSL: always renewing tickets with TLSv1.3 (ticket #1892) In-Reply-To: References: Message-ID: <89A0D935-4844-423C-83F1-EAED14C04483@nginx.com> > On 21 Jan 2022, at 06:57, Maxim Dounin wrote: > > # HG changeset patch > # User Maxim Dounin > # Date 1642737110 -10800 > # Fri Jan 21 06:51:50 2022 +0300 > # Node ID cff51689a4a182cb11cba2eb9303e2bc21815432 > # Parent 96ae8e57b3dd1b10f29d3060bbad93b7f9357b92 > SSL: always renewing tickets with TLSv1.3 (ticket #1892). > > Chrome only use TLS session tickets once with TLS 1.3, likely following uses ? > RFC 8446 Appendix C.4 recommendation. Besides that, there's a study [1] that discusses 3rd-party tracking via session resumption. Although improvements in TLS 1.3 that provide different PSK identities in session tickets are used to protect against correlation by a passive observer, the study suggests to completely deactivate TLS 1.3 session resumption for privacy reasons. This might be also due to 0-RTT Anti-Replay guidance in case the selection from available tickets is agnostic to 0-RTT. Practical analysis in [2] demonstrates that Chrome(ium) indeed selects among tickets never used before. It doesn't make clear separation, though, whether this depends on sending 0-RTT. [1] https://arxiv.org/abs/1810.07304 [2] "A Survey of TLS 1.3 0-RTT Usage", Mihael Liskij > With OpenSSL, this works fine with > built-in session tickets, since these are explicitly renewed in case of > TLS 1.3 on each session reuse, but results in only two connections being > reused after an initial handshake when using ssl_session_ticket_key. > > Fix is to always renew TLS session tickets in case of TLS 1.3 when using > ssl_session_ticket_key, similarly to how it is done by OpenSSL internally. 
> > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -4448,7 +4448,21 @@ ngx_ssl_session_ticket_key_callback(ngx_ > return -1; > } > > - return (i == 0) ? 1 : 2 /* renew */; > + /* renew if TLSv1.3 */ > + > +#ifdef TLS1_3_VERSION > + if (SSL_version(ssl_conn) == TLS1_3_VERSION) { > + return 2; > + } > +#endif > + > + /* renew if non-default key */ > + > + if (i != 0) { > + return 2; > + } > + > + return 1; > } > } > Looks good. -- Sergey Kandaurov From mdounin at mdounin.ru Mon Jan 24 14:24:01 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Jan 2022 17:24:01 +0300 Subject: [PATCH] SSL: always renewing tickets with TLSv1.3 (ticket #1892) In-Reply-To: <89A0D935-4844-423C-83F1-EAED14C04483@nginx.com> References: <89A0D935-4844-423C-83F1-EAED14C04483@nginx.com> Message-ID: Hello! On Mon, Jan 24, 2022 at 03:35:18PM +0300, Sergey Kandaurov wrote: > > On 21 Jan 2022, at 06:57, Maxim Dounin wrote: > > > > # HG changeset patch > > # User Maxim Dounin > > # Date 1642737110 -10800 > > # Fri Jan 21 06:51:50 2022 +0300 > > # Node ID cff51689a4a182cb11cba2eb9303e2bc21815432 > > # Parent 96ae8e57b3dd1b10f29d3060bbad93b7f9357b92 > > SSL: always renewing tickets with TLSv1.3 (ticket #1892). > > > > Chrome only use TLS session tickets once with TLS 1.3, likely following > > uses ? Fixed, thnx. > > RFC 8446 Appendix C.4 recommendation. > > Besides that, there's a study [1] that discusses 3rd-party > tracking via session resumption. Although improvements > in TLS 1.3 that provide different PSK identities in session > tickets are used to protect against correlation by a passive > observer, the study suggests to completely deactivate TLS 1.3 > session resumption for privacy reasons. Sure, but this is certainly unrelated to Chrome behaviour, since it does accept and use new session tickets, thus allowing infinite tracking. 
> This might be also due to 0-RTT Anti-Replay guidance in case > the selection from available tickets is agnostic to 0-RTT. > Practical analysis in [2] demonstrates that Chrome(ium) indeed > selects among tickets never used before. It doesn't make clear > separation, though, whether this depends on sending 0-RTT. The particular behaviour was observed with 0-RTT disabled on the server ("ssl_early_data off;", the default), so browser knows in advance that 0-RTT is not going to be used. While it might be the reason, this would be suboptimal behaviour. > [1] https://arxiv.org/abs/1810.07304 > [2] "A Survey of TLS 1.3 0-RTT Usage", Mihael Liskij > > > With OpenSSL, this works fine with > > built-in session tickets, since these are explicitly renewed in case of > > TLS 1.3 on each session reuse, but results in only two connections being > > reused after an initial handshake when using ssl_session_ticket_key. > > > > Fix is to always renew TLS session tickets in case of TLS 1.3 when using > > ssl_session_ticket_key, similarly to how it is done by OpenSSL internally. > > > > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > > --- a/src/event/ngx_event_openssl.c > > +++ b/src/event/ngx_event_openssl.c > > @@ -4448,7 +4448,21 @@ ngx_ssl_session_ticket_key_callback(ngx_ > > return -1; > > } > > > > - return (i == 0) ? 1 : 2 /* renew */; > > + /* renew if TLSv1.3 */ > > + > > +#ifdef TLS1_3_VERSION > > + if (SSL_version(ssl_conn) == TLS1_3_VERSION) { > > + return 2; > > + } > > +#endif > > + > > + /* renew if non-default key */ > > + > > + if (i != 0) { > > + return 2; > > + } > > + > > + return 1; > > } > > } > > > > Looks good. 
-- Maxim Dounin http://mdounin.ru/ From yugo-horie at jocdn.co.jp Tue Jan 25 03:27:58 2022 From: yugo-horie at jocdn.co.jp (Yugo Horie) Date: Tue, 25 Jan 2022 12:27:58 +0900 Subject: Prioritize `X-Accel-Expires` over `Cache-Control` and `Expires` (#964) Message-ID: changeset: 7997:86f70e48a64a branch: issue-964 tag: tip user: Yugo Horie date: Tue Jan 25 12:16:05 2022 +0900 files: src/http/ngx_http_upstream.c src/http/ngx_http_upstream.h description: Prioritize `X-Accel-Expires` over `Cache-Control` and `Expires` (#964) We introduce 3 flags that indicate the cache control behavior is being overridden. * The `overwrite_noncache` flag is switched on when processing the `Cache-Control` and `Expires` headers from upstream marks the response as not cacheable. * The `overwrite_stale_xxx` flags are switched on when processing those headers enables stale-cache behavior. * `process_accel_expires` watches these flags and reverts the non-cache and stale behavior set by the other headers, to prioritize `X-Accel-Expires`.
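In essence the patch makes the processing order of the upstream cache headers irrelevant. A reduced model of the flag logic (plain C with invented toy types, not the actual ngx_http_upstream_t fields) behaves like this:

```c
#include <assert.h>

/* Toy model of the patched flags: processing Cache-Control may mark
 * the response uncacheable and record that it did so; processing
 * X-Accel-Expires later undoes that decision, so X-Accel-Expires
 * wins regardless of which header was seen first. */
typedef struct {
    unsigned cacheable:1;
    unsigned overwrite_noncache:1;
} toy_upstream;

void
toy_process_cache_control_nocache(toy_upstream *u)
{
    u->cacheable = 0;
    u->overwrite_noncache = 1;    /* remember who disabled caching */
}

void
toy_process_accel_expires(toy_upstream *u)
{
    if (u->overwrite_noncache) {
        u->cacheable = 1;         /* X-Accel-Expires takes priority */
    }
}
```

If `Cache-Control: no-cache` alone disabled caching, the response stays uncacheable; only when an `X-Accel-Expires` header is also processed does the earlier decision get reverted.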
user: Yugo Horie changed src/http/ngx_http_upstream.c changed src/http/ngx_http_upstream.h diff -r 5d88e2bf92b3 -r 86f70e48a64a src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Jan 22 00:28:51 2022 +0300 +++ b/src/http/ngx_http_upstream.c Tue Jan 25 12:16:05 2022 +0900 @@ -4747,6 +4747,7 @@ || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != NULL) { u->cacheable = 0; + u->overwrite_noncache = 1; return NGX_OK; } @@ -4772,11 +4773,13 @@ } u->cacheable = 0; + u->overwrite_noncache = 1; return NGX_OK; } if (n == 0) { u->cacheable = 0; + u->overwrite_noncache = 1; return NGX_OK; } @@ -4800,9 +4803,12 @@ } u->cacheable = 0; + u->overwrite_noncache = 1; return NGX_OK; } + u->overwrite_stale_updating = 1; + u->overwrite_stale_error = 1; r->cache->updating_sec = n; r->cache->error_sec = n; } @@ -4822,10 +4828,12 @@ continue; } + u->overwrite_noncache = 1; u->cacheable = 0; return NGX_OK; } + u->overwrite_stale_error = 1; r->cache->error_sec = n; } } @@ -4863,6 +4871,7 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); if (expires == NGX_ERROR || expires < ngx_time()) { + u->overwrite_noncache = 1; u->cacheable = 0; return NGX_OK; } @@ -4897,6 +4906,15 @@ if (r->cache == NULL) { return NGX_OK; } + if (u->overwrite_noncache) { + u->cacheable = 1; + } + if (u->overwrite_stale_updating) { + r->cache->updating_sec = 0; + } + if (u->overwrite_stale_error) { + r->cache->error_sec = 0; + } len = h->value.len; p = h->value.data; diff -r 5d88e2bf92b3 -r 86f70e48a64a src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Sat Jan 22 00:28:51 2022 +0300 +++ b/src/http/ngx_http_upstream.h Tue Jan 25 12:16:05 2022 +0900 @@ -386,6 +386,9 @@ unsigned store:1; unsigned cacheable:1; + unsigned overwrite_noncache:1; + unsigned overwrite_stale_updating:1; + unsigned overwrite_stale_error:1; unsigned accel:1; unsigned ssl:1; #if (NGX_HTTP_CACHE) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gaoyan09 at baidu.com Tue Jan 25 08:49:52 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Tue, 25 Jan 2022 08:49:52 +0000 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null Message-ID: Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497 497 src/event/quic/ngx_event_quic.c: No such file or directory. (gdb) bt #0 0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497 #1 0x00000000004b011e in ngx_epoll_process_events (cycle=0x17011ab0, timer=, flags=) at src/event/modules/ngx_epoll_module.c:928 #2 0x00000000004a6ab1 in ngx_process_events_and_timers (cycle=cycle at entry=0x17011ab0) at src/event/ngx_event.c:262 #3 0x00000000004ae487 in ngx_worker_process_cycle (cycle=0x17011ab0, data=) at src/os/unix/ngx_process_cycle.c:727 #4 0x00000000004acc01 in ngx_spawn_process (cycle=cycle at entry=0x17011ab0, proc=proc at entry=0x4ae397 , data=data at entry=0x3, name=name at entry=0x9386ee "worker process", respawn=respawn at entry=-4) at src/os/unix/ngx_process.c:199 #5 0x00000000004ad723 in ngx_start_worker_processes (cycle=cycle at entry=0x17011ab0, n=16, type=type at entry=-4) at src/os/unix/ngx_process_cycle.c:350 #6 0x00000000004aefc0 in ngx_master_process_cycle (cycle=0x17011ab0, cycle at entry=0x289e7a0) at src/os/unix/ngx_process_cycle.c:235 #7 0x00000000004878e8 in main (argc=3, argv=) at src/core/nginx.c:397 (gdb) p c->udp->dgram $1 = (ngx_udp_dgram_t *) 0x0 Gao,Yan(ACG VCP) From gaoyan09 at baidu.com Tue Jan 25 09:20:25 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Tue, 25 Jan 2022 09:20:25 +0000 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null Message-ID: <2FB833E0-5F4A-49C1-971C-0B85DB11787B@baidu.com> Program terminated with signal SIGSEGV, Segmentation fault. 
#0 0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497 497 src/event/quic/ngx_event_quic.c: No such file or directory. (gdb) bt #0 0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497 #1 0x00000000004b011e in ngx_epoll_process_events (cycle=0x17011ab0, timer=, flags=) at src/event/modules/ngx_epoll_module.c:928 #2 0x00000000004a6ab1 in ngx_process_events_and_timers (cycle=cycle at entry=0x17011ab0) at src/event/ngx_event.c:262 #3 0x00000000004ae487 in ngx_worker_process_cycle (cycle=0x17011ab0, data=) at src/os/unix/ngx_process_cycle.c:727 #4 0x00000000004acc01 in ngx_spawn_process (cycle=cycle at entry=0x17011ab0, proc=proc at entry=0x4ae397 , data=data at entry=0x3, name=name at entry=0x9386ee "worker process", respawn=respawn at entry=-4) at src/os/unix/ngx_process.c:199 #5 0x00000000004ad723 in ngx_start_worker_processes (cycle=cycle at entry=0x17011ab0, n=16, type=type at entry=-4) at src/os/unix/ngx_process_cycle.c:350 #6 0x00000000004aefc0 in ngx_master_process_cycle (cycle=0x17011ab0, cycle at entry=0x289e7a0) at src/os/unix/ngx_process_cycle.c:235 #7 0x00000000004878e8 in main (argc=3, argv=) at src/core/nginx.c:397 (gdb) p c->udp->dgram $1 = (ngx_udp_dgram_t *) 0x0 Gao,Yan(ACG VCP) From gaoyan09 at baidu.com Tue Jan 25 10:05:55 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Tue, 25 Jan 2022 10:05:55 +0000 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: <2FB833E0-5F4A-49C1-971C-0B85DB11787B@baidu.com> References: <2FB833E0-5F4A-49C1-971C-0B85DB11787B@baidu.com> Message-ID: <919D3AB3-E292-45E7-B754-9A6321903E69@baidu.com> loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic unknown transport param id:0x20, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232 loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 
quic unknown transport param id:0x3127, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232
loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic unknown transport param id:0x4752, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232
loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic reserved transport param id:0x3a86dd60d110621a, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232

Gao,Yan(ACG VCP)

On 2022/1/25 at 5:20 PM, "Gao,Yan(媒体云)" wrote:

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497
497     src/event/quic/ngx_event_quic.c: No such file or directory.
(gdb) bt
#0  0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497
#1  0x00000000004b011e in ngx_epoll_process_events (cycle=0x17011ab0, timer=, flags=) at src/event/modules/ngx_epoll_module.c:928
#2  0x00000000004a6ab1 in ngx_process_events_and_timers (cycle=cycle at entry=0x17011ab0) at src/event/ngx_event.c:262
#3  0x00000000004ae487 in ngx_worker_process_cycle (cycle=0x17011ab0, data=) at src/os/unix/ngx_process_cycle.c:727
#4  0x00000000004acc01 in ngx_spawn_process (cycle=cycle at entry=0x17011ab0, proc=proc at entry=0x4ae397 , data=data at entry=0x3, name=name at entry=0x9386ee "worker process", respawn=respawn at entry=-4) at src/os/unix/ngx_process.c:199
#5  0x00000000004ad723 in ngx_start_worker_processes (cycle=cycle at entry=0x17011ab0, n=16, type=type at entry=-4) at src/os/unix/ngx_process_cycle.c:350
#6  0x00000000004aefc0 in ngx_master_process_cycle (cycle=0x17011ab0, cycle at entry=0x289e7a0) at src/os/unix/ngx_process_cycle.c:235
#7  0x00000000004878e8 in main (argc=3, argv=) at src/core/nginx.c:397
(gdb) p c->udp->dgram
$1 = (ngx_udp_dgram_t *) 0x0

Gao,Yan(ACG VCP)

From vl at nginx.com Tue Jan 25 10:11:57 2022
From: vl at nginx.com
(Vladimir Homutov) Date: Tue, 25 Jan 2022 13:11:57 +0300 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: <919D3AB3-E292-45E7-B754-9A6321903E69@baidu.com> References: <2FB833E0-5F4A-49C1-971C-0B85DB11787B@baidu.com> <919D3AB3-E292-45E7-B754-9A6321903E69@baidu.com> Message-ID: <917eb6cd-e508-f3c6-7094-1be3fbf7cec1@nginx.com> On 1/25/22 13:05, Gao,Yan(媒体云) wrote: > loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic unknown transport param id:0x20, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232 > loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic unknown transport param id:0x3127, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232 > loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic unknown transport param id:0x4752, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232 > loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic reserved transport param id:0x3a86dd60d110621a, skipped while SSL handshaking, client: 223.90.188.154, server: 0.0.0.0:7232 > > Gao,Yan(ACG VCP) > > 在 2022/1/25 下午5:20,“Gao,Yan(媒体云)” 写入: > > Program terminated with signal SIGSEGV, Segmentation fault. > #0 0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497 > 497 src/event/quic/ngx_event_quic.c: No such file or directory. 
> (gdb) bt
> #0  0x00000000004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at src/event/quic/ngx_event_quic.c:497
> #1  0x00000000004b011e in ngx_epoll_process_events (cycle=0x17011ab0, timer=, flags=) at src/event/modules/ngx_epoll_module.c:928
> #2  0x00000000004a6ab1 in ngx_process_events_and_timers (cycle=cycle at entry=0x17011ab0) at src/event/ngx_event.c:262
> #3  0x00000000004ae487 in ngx_worker_process_cycle (cycle=0x17011ab0, data=) at src/os/unix/ngx_process_cycle.c:727
> #4  0x00000000004acc01 in ngx_spawn_process (cycle=cycle at entry=0x17011ab0, proc=proc at entry=0x4ae397 , data=data at entry=0x3, name=name at entry=0x9386ee "worker process", respawn=respawn at entry=-4) at src/os/unix/ngx_process.c:199
> #5  0x00000000004ad723 in ngx_start_worker_processes (cycle=cycle at entry=0x17011ab0, n=16, type=type at entry=-4) at src/os/unix/ngx_process_cycle.c:350
> #6  0x00000000004aefc0 in ngx_master_process_cycle (cycle=0x17011ab0, cycle at entry=0x289e7a0) at src/os/unix/ngx_process_cycle.c:235
> #7  0x00000000004878e8 in main (argc=3, argv=) at src/core/nginx.c:397
> (gdb) p c->udp->dgram
> $1 = (ngx_udp_dgram_t *) 0x0
>
> Gao,Yan(ACG VCP)
>

Thank you for the report!
Can you please enable debug and provide a debug log?

From xeioex at nginx.com Tue Jan 25 13:38:41 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 25 Jan 2022 13:38:41 +0000
Subject: [njs] Fixed function redeclaration.
Message-ID:

details:   https://hg.nginx.org/njs/rev/d29cddd07a32
branches:
changeset: 1815:d29cddd07a32
user:      Dmitry Volyntsev
date:      Tue Jan 25 13:18:20 2022 +0000
description:
Fixed function redeclaration.

Previously, the existing lambda structure was reused, resulting in the
properties of the previously defined function being merged into the new
one. The bug was introduced in 66bd2cc7fd87 (0.7.0).
diffstat: src/njs_generator.c | 8 +++----- src/njs_variable.c | 41 ++++++++++++++++++++--------------------- src/test/njs_unit_test.c | 10 ++++++++++ 3 files changed, 33 insertions(+), 26 deletions(-) diffs (96 lines): diff -r d776b59196c5 -r d29cddd07a32 src/njs_generator.c --- a/src/njs_generator.c Fri Jan 21 14:31:30 2022 +0000 +++ b/src/njs_generator.c Tue Jan 25 13:18:20 2022 +0000 @@ -3684,11 +3684,9 @@ njs_generate_function_scope(njs_vm_t *vm lambda->nlocal = node->scope->items; lambda->temp = node->scope->temp; - if (node->scope->declarations != NULL) { - arr = node->scope->declarations; - lambda->declarations = arr->start; - lambda->ndeclarations = arr->items; - } + arr = node->scope->declarations; + lambda->declarations = (arr != NULL) ? arr->start : NULL; + lambda->ndeclarations = (arr != NULL) ? arr->items : 0; return NJS_OK; } diff -r d776b59196c5 -r d29cddd07a32 src/njs_variable.c --- a/src/njs_variable.c Fri Jan 21 14:31:30 2022 +0000 +++ b/src/njs_variable.c Tue Jan 25 13:18:20 2022 +0000 @@ -56,34 +56,33 @@ njs_variable_function_add(njs_parser_t * return NULL; } - if (var->index == NJS_INDEX_ERROR || !var->function) { - root = njs_function_scope(scope); - if (njs_slow_path(scope == NULL)) { - return NULL; - } + root = njs_function_scope(scope); + if (njs_slow_path(scope == NULL)) { + return NULL; + } - ctor = parser->node->token_type != NJS_TOKEN_ASYNC_FUNCTION_DECLARATION; + ctor = parser->node->token_type != NJS_TOKEN_ASYNC_FUNCTION_DECLARATION; - lambda = njs_function_lambda_alloc(parser->vm, ctor); - if (lambda == NULL) { - return NULL; - } + lambda = njs_function_lambda_alloc(parser->vm, ctor); + if (lambda == NULL) { + return NULL; + } - var->value.data.u.lambda = lambda; + njs_set_invalid(&var->value); + var->value.data.u.lambda = lambda; - declr = njs_variable_scope_function_add(parser, root); - if (njs_slow_path(declr == NULL)) { - return NULL; - } + declr = njs_variable_scope_function_add(parser, root); + if (njs_slow_path(declr == NULL)) 
{ + return NULL; + } - var->index = njs_scope_index(root->type, root->items, NJS_LEVEL_LOCAL, - type); + var->index = njs_scope_index(root->type, root->items, NJS_LEVEL_LOCAL, + type); - declr->value = &var->value; - declr->index = var->index; + declr->value = &var->value; + declr->index = var->index; - root->items++; - } + root->items++; var->type = NJS_VARIABLE_FUNCTION; var->function = 1; diff -r d776b59196c5 -r d29cddd07a32 src/test/njs_unit_test.c --- a/src/test/njs_unit_test.c Fri Jan 21 14:31:30 2022 +0000 +++ b/src/test/njs_unit_test.c Tue Jan 25 13:18:20 2022 +0000 @@ -10568,6 +10568,16 @@ static njs_unit_test_t njs_test[] = "myFoo(1,2);" ), njs_str("") }, + { njs_str("function f(...rest) {};" + "function f(a, b) {return a + b};" + "f(1,2)"), + njs_str("3") }, + + { njs_str("function f() { function q() {} };" + "function f() { };" + "f()"), + njs_str("undefined") }, + /* arrow functions. */ { njs_str("()"), From xeioex at nginx.com Tue Jan 25 13:38:43 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 25 Jan 2022 13:38:43 +0000 Subject: [njs] Version 0.7.2. Message-ID: details: https://hg.nginx.org/njs/rev/3dd315b80bab branches: changeset: 1816:3dd315b80bab user: Dmitry Volyntsev date: Tue Jan 25 13:37:45 2022 +0000 description: Version 0.7.2. diffstat: CHANGES | 29 +++++++++++++++++++++++++++++ 1 files changed, 29 insertions(+), 0 deletions(-) diffs (36 lines): diff -r d29cddd07a32 -r 3dd315b80bab CHANGES --- a/CHANGES Tue Jan 25 13:18:20 2022 +0000 +++ b/CHANGES Tue Jan 25 13:37:45 2022 +0000 @@ -1,3 +1,32 @@ +Changes with njs 0.7.2 25 Jan 2022 + + Core: + + *) Bugfix: fixed Array.prototype.join() when array is changed + while iterating. + + *) Bugfix: fixed Array.prototype.slice() when array is changed + while iterating. + + *) Bugfix: fixed Array.prototype.concat() when array is changed + while iterating. + + *) Bugfix: fixed Array.prototype.reverse() when array is changed + while iterating. 
+ + *) Bugfix: fixed Buffer.concat() with subarrays. + Thanks to Sylvain Etienne. + + *) Bugfix: fixed type confusion bug while resolving promises. + + *) Bugfix: fixed Function.prototype.apply() with large array + arguments. + + *) Bugfix: fixed recursive async function calls. + + *) Bugfix: fixed function redeclaration. The bug was introduced + in 0.7.0. + Changes with njs 0.7.1 28 Dec 2021 nginx modules: From xeioex at nginx.com Tue Jan 25 13:38:44 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 25 Jan 2022 13:38:44 +0000 Subject: [njs] Added tag 0.7.2 for changeset 3dd315b80bab Message-ID: details: https://hg.nginx.org/njs/rev/d73e9c106a97 branches: changeset: 1817:d73e9c106a97 user: Dmitry Volyntsev date: Tue Jan 25 13:38:25 2022 +0000 description: Added tag 0.7.2 for changeset 3dd315b80bab diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 3dd315b80bab -r d73e9c106a97 .hgtags --- a/.hgtags Tue Jan 25 13:37:45 2022 +0000 +++ b/.hgtags Tue Jan 25 13:38:25 2022 +0000 @@ -47,3 +47,4 @@ 4adbe67b292af2adc0a6fde4ec6cb95dbba9470a dfba7f61745c7454ffdd55303a793206d0a9a84a 0.6.2 8418bd4a4ce3114d57b4d75f913e8c4912bf4b5d 0.7.0 35aca5cc5ea7582b80947caa1e3f4a4fb8ee232d 0.7.1 +3dd315b80bab10b6ac475ee25dd207d2eb759881 0.7.2 From mdounin at mdounin.ru Tue Jan 25 15:08:00 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Jan 2022 15:08:00 +0000 Subject: [nginx] SSL: always renewing tickets with TLSv1.3 (ticket #1892). Message-ID: details: https://hg.nginx.org/nginx/rev/e30f7dc7f143 branches: changeset: 7997:e30f7dc7f143 user: Maxim Dounin date: Mon Jan 24 17:18:50 2022 +0300 description: SSL: always renewing tickets with TLSv1.3 (ticket #1892). Chrome only uses TLS session tickets once with TLS 1.3, likely following RFC 8446 Appendix C.4 recommendation. 
With OpenSSL, this works fine with built-in session tickets, since these are explicitly renewed in case of TLS 1.3 on each session reuse, but results in only two connections being reused after an initial handshake when using ssl_session_ticket_key. Fix is to always renew TLS session tickets in case of TLS 1.3 when using ssl_session_ticket_key, similarly to how it is done by OpenSSL internally. diffstat: src/event/ngx_event_openssl.c | 16 +++++++++++++++- 1 files changed, 15 insertions(+), 1 deletions(-) diffs (26 lines): diff -r 5d88e2bf92b3 -r e30f7dc7f143 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Sat Jan 22 00:28:51 2022 +0300 +++ b/src/event/ngx_event_openssl.c Mon Jan 24 17:18:50 2022 +0300 @@ -4451,7 +4451,21 @@ ngx_ssl_session_ticket_key_callback(ngx_ return -1; } - return (i == 0) ? 1 : 2 /* renew */; + /* renew if TLSv1.3 */ + +#ifdef TLS1_3_VERSION + if (SSL_version(ssl_conn) == TLS1_3_VERSION) { + return 2; + } +#endif + + /* renew if non-default key */ + + if (i != 0) { + return 2; + } + + return 1; } } From mdounin at mdounin.ru Tue Jan 25 15:08:03 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Jan 2022 15:08:03 +0000 Subject: [nginx] nginx-1.21.6-RELEASE Message-ID: details: https://hg.nginx.org/nginx/rev/714eb4b2c09e branches: changeset: 7998:714eb4b2c09e user: Maxim Dounin date: Tue Jan 25 18:03:51 2022 +0300 description: nginx-1.21.6-RELEASE diffstat: docs/xml/nginx/changes.xml | 38 ++++++++++++++++++++++++++++++++++++++ 1 files changed, 38 insertions(+), 0 deletions(-) diffs (48 lines): diff -r e30f7dc7f143 -r 714eb4b2c09e docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Mon Jan 24 17:18:50 2022 +0300 +++ b/docs/xml/nginx/changes.xml Tue Jan 25 18:03:51 2022 +0300 @@ -5,6 +5,44 @@ + + + + +при использование EPOLLEXCLUSIVE на Linux +распределение клиентских соединений между рабочими процессами +было неравномерным. 
+ + +when using EPOLLEXCLUSIVE on Linux +client connections were unevenly distributed +among worker processes. + + + + + +во время плавного завершения старых рабочих процессов +nginx возвращал в ответах строку заголовка "Connection: keep-alive". + + +nginx returned the "Connection: keep-alive" header line in responses +during graceful shutdown of old worker processes. + + + + + +в директиве ssl_session_ticket_key при использовании TLSv1.3. + + +in the "ssl_session_ticket_key" when using TLSv1.3. + + + + + + From mdounin at mdounin.ru Tue Jan 25 15:08:05 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Jan 2022 15:08:05 +0000 Subject: [nginx] release-1.21.6 tag Message-ID: details: https://hg.nginx.org/nginx/rev/56ead48cfe88 branches: changeset: 7999:56ead48cfe88 user: Maxim Dounin date: Tue Jan 25 18:03:52 2022 +0300 description: release-1.21.6 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff -r 714eb4b2c09e -r 56ead48cfe88 .hgtags --- a/.hgtags Tue Jan 25 18:03:51 2022 +0300 +++ b/.hgtags Tue Jan 25 18:03:52 2022 +0300 @@ -466,3 +466,4 @@ bfbc52374adcbf2f9060afd62de940f6fab3bba5 2217a9c1d0b86026f22700b3c089545db1964f55 release-1.21.3 39be8a682c58308d9399cddd57e37f9fdb7bdf3e release-1.21.4 d986378168fd4d70e0121cabac274c560cca9bdf release-1.21.5 +714eb4b2c09e712fb2572a2164ce2bf67638ccac release-1.21.6 From gaoyan09 at baidu.com Wed Jan 26 04:56:28 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Wed, 26 Jan 2022 04:56:28 +0000 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> Message-ID: <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> > Thank you for report! > Can you please enable debug and provide debug log? 
Sorry, this is a very rare case, and I do not know how to trigger this bug reliably. Here is more data from the stack:

p *c
$1 = {data = 0x7efd695c74c0, read = 0xf2aa990, write = 0xfa72ca0, fd = 5547, recv = 0x4a7c9a , send = 0x4ab5b9 , recv_chain = 0x0, send_chain = 0x4ab7a7 , listening = 0x29cf140, sent = 0, log = 0x7efd695c73f0, pool = 0x7efd695c7330, type = 2, sockaddr = 0x7efd695c7380, socklen = 16, addr_text = {len = 15, data = 0x7efd695c74b0 "123.101.125.168.H\270(\v"}, proxy_protocol = 0x0, quic = 0x0, ssl = 0x1e491e8, udp = 0x1e49150, local_sockaddr = 0x7efd695c7440, local_socklen = 16, buffer = 0x7efd695c7450, queue = {prev = 0x0, next = 0x0}, number = 433923428, start_time = 3194843312, requests = 0, buffered = 0, log_error = 2, timedout = 0, error = 0, destroyed = 0, idle = 0, reusable = 0, close = 0, shared = 1, sendfile = 0, sndlowat = 0, tcp_nodelay = 0, tcp_nopush = 0, need_last_buf = 0}

p *c->ssl
$2 = {connection = 0x7efd708fdb00, session_ctx = 0x7efd69052970, last = 0, buf = 0x0, buffer_size = 16384, handler = 0x0, session = 0x0, save_session = 0x0, saved_read_handler = 0x0, saved_write_handler = 0x0, ocsp = 0x0, early_buf = 0 '\000', handshaked = 0, handshake_rejected = 0, renegotiation = 0, buffer = 1, sendfile = 0, no_wait_shutdown = 1, no_send_shutdown = 0, shutdown_without_free = 0, handshake_buffer_set = 0, try_early_data = 0, in_early = 0, in_ocsp = 0, early_preread = 0, write_blocked = 0}

And you can see it happened before the handshake completed.

Gao,Yan(ACG VCP)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gaoyan09 at baidu.com Wed Jan 26 06:13:54 2022
From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=)
Date: Wed, 26 Jan 2022 06:13:54 +0000
Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null
In-Reply-To: <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com>
References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com>
Message-ID:

I guess the problem is in this function call chain:
final_early_data (openssl) -> quic_set_encryption_secrets -> ngx_quic_set_encryption_secrets -> ngx_quic_init_streams -> ngx_ssl_ocsp_validate -> ngx_handle_read_event
But this connection->quic would always be null, so it cannot take the quic branch in ngx_handle_read_event.

Gao,Yan(ACG VCP)

From: "Gao,Yan(媒体云)"
Date: Wednesday, January 26, 2022, 12:56 PM
To: "nginx-devel at nginx.org"
Subject: Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

> Thank you for report!
> Can you please enable debug and provide debug log?
Sorry, this is a very rare case, and do not know how to trigger this bug steadily here is more data from the stack p *c $1 = {data = 0x7efd695c74c0, read = 0xf2aa990, write = 0xfa72ca0, fd = 5547, recv = 0x4a7c9a , send = 0x4ab5b9 , recv_chain = 0x0, send_chain = 0x4ab7a7 , listening = 0x29cf140, sent = 0, log = 0x7efd695c73f0, pool = 0x7efd695c7330, type = 2, sockaddr = 0x7efd695c7380, socklen = 16, addr_text = {len = 15, data = 0x7efd695c74b0 "123.101.125.168.H\270(\v"}, proxy_protocol = 0x0, quic = 0x0, ssl = 0x1e491e8, udp = 0x1e49150, local_sockaddr = 0x7efd695c7440, local_socklen = 16, buffer = 0x7efd695c7450, queue = {prev = 0x0, next = 0x0}, number = 433923428, start_time = 3194843312, requests = 0, buffered = 0, log_error = 2, timedout = 0, error = 0, destroyed = 0, idle = 0, reusable = 0, close = 0, shared = 1, sendfile = 0, sndlowat = 0, tcp_nodelay = 0, tcp_nopush = 0, need_last_buf = 0} p *c->ssl $2 = {connection = 0x7efd708fdb00, session_ctx = 0x7efd69052970, last = 0, buf = 0x0, buffer_size = 16384, handler = 0x0, session = 0x0, save_session = 0x0, saved_read_handler = 0x0, saved_write_handler = 0x0, ocsp = 0x0, early_buf = 0 '\000', handshaked = 0, handshake_rejected = 0, renegotiation = 0, buffer = 1, sendfile = 0, no_wait_shutdown = 1, no_send_shutdown = 0, shutdown_without_free = 0, handshake_buffer_set = 0, try_early_data = 0, in_early = 0, in_ocsp = 0, early_preread = 0, write_blocked = 0} And you can see it happened before handshaked Gao,Yan(ACG VCP) -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gaoyan09 at baidu.com Wed Jan 26 06:38:13 2022
From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=)
Date: Wed, 26 Jan 2022 06:38:13 +0000
Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null
In-Reply-To:
References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com>
Message-ID: <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com>

Why is sc->type set to SOCK_STREAM in ngx_quic_create_stream? Should it be SOCK_DGRAM?

Gao,Yan(ACG VCP)

From: "Gao,Yan(媒体云)"
Date: Wednesday, January 26, 2022, 2:13 PM
To: "nginx-devel at nginx.org"
Subject: Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

I guess the problem function call chain: final_early_data(openssl)-> quic_set_encryption_secrets-> ngx_quic_set_encryption_secrets -> ngx_quic_init_streams -> ngx_ssl_ocsp_validate-> ngx_handle_read_event
But this connection->quic would always be null, and cannot jump to quic if branch in ngx_handle_read_event

Gao,Yan(ACG VCP)

From: "Gao,Yan(媒体云)"
Date: Wednesday, January 26, 2022, 12:56 PM
To: "nginx-devel at nginx.org"
Subject: Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

> Thank you for report!
> Can you please enable debug and provide debug log?
Sorry, this is a very rare case, and do not know how to trigger this bug steadily here is more data from the stack p *c $1 = {data = 0x7efd695c74c0, read = 0xf2aa990, write = 0xfa72ca0, fd = 5547, recv = 0x4a7c9a , send = 0x4ab5b9 , recv_chain = 0x0, send_chain = 0x4ab7a7 , listening = 0x29cf140, sent = 0, log = 0x7efd695c73f0, pool = 0x7efd695c7330, type = 2, sockaddr = 0x7efd695c7380, socklen = 16, addr_text = {len = 15, data = 0x7efd695c74b0 "123.101.125.168.H\270(\v"}, proxy_protocol = 0x0, quic = 0x0, ssl = 0x1e491e8, udp = 0x1e49150, local_sockaddr = 0x7efd695c7440, local_socklen = 16, buffer = 0x7efd695c7450, queue = {prev = 0x0, next = 0x0}, number = 433923428, start_time = 3194843312, requests = 0, buffered = 0, log_error = 2, timedout = 0, error = 0, destroyed = 0, idle = 0, reusable = 0, close = 0, shared = 1, sendfile = 0, sndlowat = 0, tcp_nodelay = 0, tcp_nopush = 0, need_last_buf = 0} p *c->ssl $2 = {connection = 0x7efd708fdb00, session_ctx = 0x7efd69052970, last = 0, buf = 0x0, buffer_size = 16384, handler = 0x0, session = 0x0, save_session = 0x0, saved_read_handler = 0x0, saved_write_handler = 0x0, ocsp = 0x0, early_buf = 0 '\000', handshaked = 0, handshake_rejected = 0, renegotiation = 0, buffer = 1, sendfile = 0, no_wait_shutdown = 1, no_send_shutdown = 0, shutdown_without_free = 0, handshake_buffer_set = 0, try_early_data = 0, in_early = 0, in_ocsp = 0, early_preread = 0, write_blocked = 0} And you can see it happened before handshaked Gao,Yan(ACG VCP) -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From vl at nginx.com Wed Jan 26 08:01:52 2022
From: vl at nginx.com (Vladimir Homutov)
Date: Wed, 26 Jan 2022 11:01:52 +0300
Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null
In-Reply-To: <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com>
References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com>
Message-ID:

On Wed, Jan 26, 2022 at 06:38:13AM +0000, Gao,Yan(媒体云) wrote:
> Why sc->type = SOCK_STREAM in ngx_quic_create_stream? Should it be SOCK_DGRAM?

No, SOCK_STREAM is the correct setting for quic streams. SOCK_DGRAM is only used for the main quic connection, which actually handles UDP datagrams and deals with the QUIC protocol. Streams are an abstraction layer that utilizes ngx_connection_t with custom event handling.

> I guess the problem function call chain: final_early_data(openssl)->
> quic_set_encryption_secrets-> ngx_quic_set_encryption_secrets ->
> ngx_quic_init_streams -> ngx_ssl_ocsp_validate-> ngx_handle_read_event
> But this connection->quic would always be null, and cannot jump to
> quic if branch in ngx_handle_read_event

The case you are describing is not what we see in the backtrace. And in the described case, the connection is the main quic connection, which has the c->quic pointer set.

> > Thank you for report!
> > Can you please enable debug and provide debug log?
>
> Sorry, this is a very rare case, and do not know how to trigger this bug steadily
> here is more data from the stack

Ok, what exact code revision are you running? The line numbers (if correct) suggest that it's something quite different from the current code.

Normally, you see c->udp->dgram = NULL only in packets that were not dispatched by dcid to any existing connection, in which case the handler is ngx_quic_run(). If a packet goes to a known connection, c->udp->dgram is initialized and the handler is ngx_quic_input_handler().

Hope this helps.
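[Editor's note: the dispatch rule described above (dgram is set only when a packet is matched by DCID to an existing connection) can be captured in a tiny self-contained model. The toy_* types and names below are invented stand-ins for illustration and do not match the actual nginx sources.]

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for nginx types; names are invented
 * for illustration and do not match the real sources. */
typedef struct { void *dgram; } toy_udp_t;
typedef struct { toy_udp_t udp; } toy_connection_t;

enum { TOY_QUIC_RUN, TOY_QUIC_INPUT_HANDLER };

/* A packet whose DCID matches an existing connection has its dgram
 * pointer set before the input handler runs; an unmatched packet is
 * routed to the "run" handler with dgram left NULL. */
static int
toy_quic_dispatch(toy_connection_t *c, int dcid_matched, void *dgram)
{
    if (dcid_matched) {
        c->udp.dgram = dgram;          /* initialized before the handler */
        return TOY_QUIC_INPUT_HANDLER;
    }

    c->udp.dgram = NULL;               /* new connection: no dgram yet */
    return TOY_QUIC_RUN;
}
```

In this model, reaching the input handler with dgram still NULL cannot happen by construction, which is why the reported crash suggests an out-of-date or locally modified revision.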
> p *c > $1 = {data = 0x7efd695c74c0, read = 0xf2aa990, write = 0xfa72ca0, fd = 5547, recv = 0x4a7c9a , send = 0x4ab5b9 , recv_chain = 0x0, > send_chain = 0x4ab7a7 , listening = 0x29cf140, sent = 0, log = 0x7efd695c73f0, pool = 0x7efd695c7330, type = 2, sockaddr = 0x7efd695c7380, socklen = 16, > addr_text = {len = 15, data = 0x7efd695c74b0 "123.101.125.168.H\270(\v"}, proxy_protocol = 0x0, quic = 0x0, ssl = 0x1e491e8, udp = 0x1e49150, local_sockaddr = 0x7efd695c7440, local_socklen = 16, > buffer = 0x7efd695c7450, queue = {prev = 0x0, next = 0x0}, number = 433923428, start_time = 3194843312, requests = 0, buffered = 0, log_error = 2, timedout = 0, error = 0, > destroyed = 0, idle = 0, reusable = 0, close = 0, shared = 1, sendfile = 0, sndlowat = 0, tcp_nodelay = 0, tcp_nopush = 0, need_last_buf = 0} > > p *c->ssl > $2 = {connection = 0x7efd708fdb00, session_ctx = 0x7efd69052970, last = 0, buf = 0x0, buffer_size = 16384, > handler = 0x0, session = 0x0, save_session = 0x0, saved_read_handler = 0x0, saved_write_handler = 0x0, ocsp = 0x0, early_buf = 0 '\000', handshaked = 0, handshake_rejected = 0, renegotiation = 0, > buffer = 1, sendfile = 0, no_wait_shutdown = 1, no_send_shutdown = 0, shutdown_without_free = 0, handshake_buffer_set = 0, try_early_data = 0, in_early = 0, in_ocsp = 0, early_preread = 0, write_blocked = 0} > > And you can see it happened before handshaked > > Gao,Yan(ACG VCP) > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org From arut at nginx.com Wed Jan 26 09:06:52 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 26 Jan 2022 12:06:52 +0300 Subject: [PATCH 0 of 2] QUIC stream states and events Message-ID: - patch #1 introduces QUIC stream event states - patch #2 introduces QUIC stream event setting function From arut at nginx.com Wed Jan 26 09:06:53 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 26 Jan 2022 
12:06:53 +0300 Subject: [PATCH 1 of 2] QUIC: introduced explicit stream states In-Reply-To: References: Message-ID: <42776baa77bea8da4cc0.1643188013@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1643181129 -10800 # Wed Jan 26 10:12:09 2022 +0300 # Branch quic # Node ID 42776baa77bea8da4cc0e3a157f122f0803b1e97 # Parent 6c86685941a8fe90b7dfd41bb150abca1c345087 QUIC: introduced explicit stream states. This allows to eliminate the usage of stream connection event flags for tracking stream state. diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h --- a/src/event/quic/ngx_event_quic.h +++ b/src/event/quic/ngx_event_quic.h @@ -28,6 +28,26 @@ #define NGX_QUIC_STREAM_UNIDIRECTIONAL 0x02 +typedef enum { + NGX_QUIC_STREAM_SEND_READY = 0, + NGX_QUIC_STREAM_SEND_SEND, + NGX_QUIC_STREAM_SEND_DATA_SENT, + NGX_QUIC_STREAM_SEND_DATA_RECVD, + NGX_QUIC_STREAM_SEND_RESET_SENT, + NGX_QUIC_STREAM_SEND_RESET_RECVD +} ngx_quic_stream_send_state_e; + + +typedef enum { + NGX_QUIC_STREAM_RECV_RECV = 0, + NGX_QUIC_STREAM_RECV_SIZE_KNOWN, + NGX_QUIC_STREAM_RECV_DATA_RECVD, + NGX_QUIC_STREAM_RECV_DATA_READ, + NGX_QUIC_STREAM_RECV_RESET_RECVD, + NGX_QUIC_STREAM_RECV_RESET_READ +} ngx_quic_stream_recv_state_e; + + typedef struct { ngx_ssl_t *ssl; @@ -66,6 +86,8 @@ struct ngx_quic_stream_s { ngx_chain_t *in; ngx_chain_t *out; ngx_uint_t cancelable; /* unsigned cancelable:1; */ + ngx_quic_stream_send_state_e send_state; + ngx_quic_stream_recv_state_e recv_state; }; diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -617,10 +617,13 @@ ngx_quic_resend_frames(ngx_connection_t case NGX_QUIC_FT_STREAM: qs = ngx_quic_find_stream(&qc->streams.tree, f->u.stream.stream_id); - if (qs && qs->connection->write->error) { - /* RESET_STREAM was sent */ - ngx_quic_free_frame(c, f); - break; + if (qs) { + if (qs->send_state == 
NGX_QUIC_STREAM_SEND_RESET_SENT + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_RECVD) + { + ngx_quic_free_frame(c, f); + break; + } } /* fall through */ diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -220,19 +220,22 @@ ngx_quic_close_streams(ngx_connection_t ngx_int_t ngx_quic_reset_stream(ngx_connection_t *c, ngx_uint_t err) { - ngx_event_t *wev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - wev = c->write; + qs = c->quic; - if (wev->error) { + if (qs->send_state == NGX_QUIC_STREAM_SEND_DATA_RECVD + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_SENT + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_RECVD) + { return NGX_OK; } - qs = c->quic; + qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT; + pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -249,9 +252,6 @@ ngx_quic_reset_stream(ngx_connection_t * ngx_quic_queue_frame(qc, frame); - wev->error = 1; - wev->ready = 1; - return NGX_OK; } @@ -259,27 +259,15 @@ ngx_quic_reset_stream(ngx_connection_t * ngx_int_t ngx_quic_shutdown_stream(ngx_connection_t *c, int how) { - ngx_quic_stream_t *qs; - - qs = c->quic; - if (how == NGX_RDWR_SHUTDOWN || how == NGX_WRITE_SHUTDOWN) { - if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) - || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0) - { - if (ngx_quic_shutdown_stream_send(c) != NGX_OK) { - return NGX_ERROR; - } + if (ngx_quic_shutdown_stream_send(c) != NGX_OK) { + return NGX_ERROR; } } if (how == NGX_RDWR_SHUTDOWN || how == NGX_READ_SHUTDOWN) { - if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0 - || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0) - { - if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) { - return NGX_ERROR; - } + if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) { + return NGX_ERROR; } } @@ -290,19 +278,21 @@ ngx_quic_shutdown_stream(ngx_connection_ static 
ngx_int_t ngx_quic_shutdown_stream_send(ngx_connection_t *c) { - ngx_event_t *wev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - wev = c->write; + qs = c->quic; - if (wev->error) { + if (qs->send_state != NGX_QUIC_STREAM_SEND_READY + && qs->send_state != NGX_QUIC_STREAM_SEND_SEND) + { return NGX_OK; } - qs = c->quic; + qs->send_state = NGX_QUIC_STREAM_SEND_DATA_SENT; + pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -326,8 +316,6 @@ ngx_quic_shutdown_stream_send(ngx_connec ngx_quic_queue_frame(qc, frame); - wev->error = 1; - return NGX_OK; } @@ -335,19 +323,19 @@ ngx_quic_shutdown_stream_send(ngx_connec static ngx_int_t ngx_quic_shutdown_stream_recv(ngx_connection_t *c) { - ngx_event_t *rev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - rev = c->read; + qs = c->quic; - if (rev->pending_eof || rev->error) { + if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV + && qs->recv_state != NGX_QUIC_STREAM_RECV_SIZE_KNOWN) + { return NGX_OK; } - qs = c->quic; pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -370,8 +358,6 @@ ngx_quic_shutdown_stream_recv(ngx_connec ngx_quic_queue_frame(qc, frame); - rev->error = 1; - return NGX_OK; } @@ -689,9 +675,13 @@ ngx_quic_create_stream(ngx_connection_t if (id & NGX_QUIC_STREAM_UNIDIRECTIONAL) { if (id & NGX_QUIC_STREAM_SERVER_INITIATED) { qs->send_max_data = qc->ctp.initial_max_stream_data_uni; + qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ; + qs->send_state = NGX_QUIC_STREAM_SEND_READY; } else { qs->recv_max_data = qc->tp.initial_max_stream_data_uni; + qs->recv_state = NGX_QUIC_STREAM_RECV_RECV; + qs->send_state = NGX_QUIC_STREAM_SEND_DATA_RECVD; } } else { @@ -703,6 +693,9 @@ ngx_quic_create_stream(ngx_connection_t qs->send_max_data = qc->ctp.initial_max_stream_data_bidi_local; qs->recv_max_data = qc->tp.initial_max_stream_data_bidi_remote; } + + qs->recv_state = NGX_QUIC_STREAM_RECV_RECV; + qs->send_state = 
NGX_QUIC_STREAM_SEND_READY; } qs->recv_window = qs->recv_max_data; @@ -743,26 +736,16 @@ ngx_quic_stream_recv(ngx_connection_t *c pc = qs->parent; rev = c->read; - if (rev->error) { + if (qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_RECVD + || qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_READ) + { + qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_READ; + rev->error = 1; return NGX_ERROR; } - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic stream id:0x%xL recv eof:%d buf:%uz", - qs->id, rev->pending_eof, size); - - if (qs->in == NULL || qs->in->buf->sync) { - rev->ready = 0; - - if (qs->recv_offset == qs->final_size) { - rev->eof = 1; - return 0; - } - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic stream id:0x%xL recv() not ready", qs->id); - return NGX_AGAIN; - } + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic stream id:0x%xL recv buf:%uz", qs->id, size); in = ngx_quic_read_chain(pc, &qs->in, size); if (in == NGX_CHAIN_ERROR) { @@ -779,8 +762,23 @@ ngx_quic_stream_recv(ngx_connection_t *c ngx_quic_free_chain(pc, in); - if (qs->in == NULL) { - rev->ready = rev->pending_eof; + if (len == 0) { + rev->ready = 0; + + if (qs->recv_state == NGX_QUIC_STREAM_RECV_SIZE_KNOWN + && qs->recv_offset == qs->final_size) + { + qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ; + } + + if (qs->recv_state == NGX_QUIC_STREAM_RECV_DATA_READ) { + rev->eof = 1; + return 0; + } + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic stream id:0x%xL recv() not ready", qs->id); + return NGX_AGAIN; } ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, @@ -838,10 +836,15 @@ ngx_quic_stream_send_chain(ngx_connectio qc = ngx_quic_get_connection(pc); wev = c->write; - if (wev->error) { + if (qs->send_state != NGX_QUIC_STREAM_SEND_READY + && qs->send_state != NGX_QUIC_STREAM_SEND_SEND) + { + wev->error = 1; return NGX_CHAIN_ERROR; } + qs->send_state = NGX_QUIC_STREAM_SEND_SEND; + flow = ngx_quic_max_stream_flow(c); if (flow == 0) { wev->ready = 0; @@ -1050,9 +1053,9 @@ 
ngx_quic_handle_stream_frame(ngx_connect sc = qs->connection; - rev = sc->read; - - if (rev->error) { + if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV + && qs->recv_state != NGX_QUIC_STREAM_RECV_SIZE_KNOWN) + { return NGX_OK; } @@ -1085,8 +1088,8 @@ ngx_quic_handle_stream_frame(ngx_connect return NGX_ERROR; } - rev->pending_eof = 1; qs->final_size = last; + qs->recv_state = NGX_QUIC_STREAM_RECV_SIZE_KNOWN; } if (ngx_quic_write_chain(c, &qs->in, frame->data, f->length, @@ -1097,6 +1100,7 @@ ngx_quic_handle_stream_frame(ngx_connect } if (f->offset == qs->recv_offset) { + rev = sc->read; rev->ready = 1; if (rev->active) { @@ -1272,11 +1276,15 @@ ngx_quic_handle_reset_stream_frame(ngx_c return NGX_OK; } - sc = qs->connection; + if (qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_RECVD + || qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_READ) + { + return NGX_OK; + } - rev = sc->read; - rev->error = 1; - rev->ready = 1; + qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD; + + sc = qs->connection; if (ngx_quic_control_flow(sc, f->final_size) != NGX_OK) { return NGX_ERROR; @@ -1298,6 +1306,9 @@ ngx_quic_handle_reset_stream_frame(ngx_c return NGX_ERROR; } + rev = sc->read; + rev->ready = 1; + if (rev->active) { ngx_post_event(rev, &ngx_posted_events); } @@ -1340,6 +1351,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c wev = qs->connection->write; if (wev->active) { + wev->ready = 1; ngx_post_event(wev, &ngx_posted_events); } @@ -1412,11 +1424,9 @@ static ngx_int_t ngx_quic_control_flow(ngx_connection_t *c, uint64_t last) { uint64_t len; - ngx_event_t *rev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - rev = c->read; qs = c->quic; qc = ngx_quic_get_connection(qs->parent); @@ -1433,7 +1443,7 @@ ngx_quic_control_flow(ngx_connection_t * qs->recv_last += len; - if (!rev->error && qs->recv_last > qs->recv_max_data) { + if (qs->recv_last > qs->recv_max_data) { qc->error = NGX_QUIC_ERR_FLOW_CONTROL_ERROR; return NGX_ERROR; } @@ -1453,12 +1463,10 @@ static ngx_int_t 
ngx_quic_update_flow(ngx_connection_t *c, uint64_t last) { uint64_t len; - ngx_event_t *rev; ngx_connection_t *pc; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - rev = c->read; qs = c->quic; pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -1474,9 +1482,7 @@ ngx_quic_update_flow(ngx_connection_t *c qs->recv_offset += len; - if (!rev->pending_eof && !rev->error - && qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2) - { + if (qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2) { if (ngx_quic_update_max_stream_data(c) != NGX_OK) { return NGX_ERROR; } @@ -1509,6 +1515,10 @@ ngx_quic_update_max_stream_data(ngx_conn pc = qs->parent; qc = ngx_quic_get_connection(pc); + if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV) { + return NGX_OK; + } + recv_max_data = qs->recv_offset + qs->recv_window; if (qs->recv_max_data == recv_max_data) { From arut at nginx.com Wed Jan 26 09:06:54 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 26 Jan 2022 12:06:54 +0300 Subject: [PATCH 2 of 2] QUIC: stream event setting function In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1643187691 -10800 # Wed Jan 26 12:01:31 2022 +0300 # Branch quic # Node ID b019e99bc72b2fe0027af25c9cc1bf33c287eb22 # Parent 42776baa77bea8da4cc0e3a157f122f0803b1e97 QUIC: stream event setting function. The function ngx_quic_set_event() is now called instead of posting events directly. 
diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -34,6 +34,7 @@ static ngx_int_t ngx_quic_control_flow(n static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last); static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c); static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c); +static void ngx_quic_set_event(ngx_event_t *ev); ngx_connection_t * @@ -1022,7 +1023,6 @@ ngx_quic_handle_stream_frame(ngx_connect ngx_quic_frame_t *frame) { uint64_t last; - ngx_event_t *rev; ngx_connection_t *sc; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1100,12 +1100,7 @@ ngx_quic_handle_stream_frame(ngx_connect } if (f->offset == qs->recv_offset) { - rev = sc->read; - rev->ready = 1; - - if (rev->active) { - ngx_post_event(rev, &ngx_posted_events); - } + ngx_quic_set_event(sc->read); } return NGX_OK; @@ -1116,7 +1111,6 @@ ngx_int_t ngx_quic_handle_max_data_frame(ngx_connection_t *c, ngx_quic_max_data_frame_t *f) { - ngx_event_t *wev; ngx_rbtree_t *tree; ngx_rbtree_node_t *node; ngx_quic_stream_t *qs; @@ -1138,12 +1132,7 @@ ngx_quic_handle_max_data_frame(ngx_conne node = ngx_rbtree_next(tree, node)) { qs = (ngx_quic_stream_t *) node; - wev = qs->connection->write; - - if (wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->write); } } @@ -1204,7 +1193,6 @@ ngx_quic_handle_max_stream_data_frame(ng ngx_quic_header_t *pkt, ngx_quic_max_stream_data_frame_t *f) { uint64_t sent; - ngx_event_t *wev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1234,12 +1222,7 @@ ngx_quic_handle_max_stream_data_frame(ng sent = qs->connection->sent; if (sent >= qs->send_max_data) { - wev = qs->connection->write; - - if (wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->write); } 
qs->send_max_data = f->limit; @@ -1252,7 +1235,6 @@ ngx_int_t ngx_quic_handle_reset_stream_frame(ngx_connection_t *c, ngx_quic_header_t *pkt, ngx_quic_reset_stream_frame_t *f) { - ngx_event_t *rev; ngx_connection_t *sc; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1306,12 +1288,7 @@ ngx_quic_handle_reset_stream_frame(ngx_c return NGX_ERROR; } - rev = sc->read; - rev->ready = 1; - - if (rev->active) { - ngx_post_event(rev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->read); return NGX_OK; } @@ -1321,7 +1298,6 @@ ngx_int_t ngx_quic_handle_stop_sending_frame(ngx_connection_t *c, ngx_quic_header_t *pkt, ngx_quic_stop_sending_frame_t *f) { - ngx_event_t *wev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1348,12 +1324,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c return NGX_ERROR; } - wev = qs->connection->write; - - if (wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->write); return NGX_OK; } @@ -1392,7 +1363,6 @@ void ngx_quic_handle_stream_ack(ngx_connection_t *c, ngx_quic_frame_t *f) { uint64_t sent, unacked; - ngx_event_t *wev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1403,13 +1373,11 @@ ngx_quic_handle_stream_ack(ngx_connectio return; } - wev = qs->connection->write; sent = qs->connection->sent; unacked = sent - qs->acked; - if (unacked >= qc->conf->stream_buffer_size && wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); + if (unacked >= qc->conf->stream_buffer_size) { + ngx_quic_set_event(qs->connection->write); } qs->acked += f->u.stream.length; @@ -1581,6 +1549,17 @@ ngx_quic_update_max_data(ngx_connection_ } +static void +ngx_quic_set_event(ngx_event_t *ev) +{ + ev->ready = 1; + + if (ev->active) { + ngx_post_event(ev, &ngx_posted_events); + } +} + + ngx_int_t ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags) { From gaoyan09 at baidu.com Wed Jan 26 10:00:06 2022 From: gaoyan09 at baidu.com 
(Gao,Yan(媒体云))
Date: Wed, 26 Jan 2022 10:00:06 +0000
Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null
In-Reply-To: <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com>
References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com>
Message-ID: <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com>

> the case you are describing is not what see in backtrace. And in
> described case connection is main quic connection which has process
> c->quic pointer set.

I only find sc->quic = qs; in ngx_quic_create_stream, and this is a stream connection, not the main quic connection. How is c->quic set for the main quic connection?

And the local code at this position:
changeset: 8813:c37ea624c307
branch: quic
tag: tip
user: Roman Arutyunyan
date: Fri Jan 21 11:20:18 2022 +0300
summary: QUIC: changed debug message.

Gao,Yan(ACG VCP)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From vl at nginx.com Wed Jan 26 12:14:37 2022
From: vl at nginx.com (Vladimir Homutov)
Date: Wed, 26 Jan 2022 15:14:37 +0300
Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null
In-Reply-To: <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com>
References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com>
Message-ID:

On Wed, Jan 26, 2022 at 10:00:06AM +0000, Gao,Yan(媒体云) wrote:
> > the case you are describing is not what see in backtrace. And in
> > described case connection is main quic connection which has process
> > c->quic pointer set.
>
> I only find sc->quic = qs; in ngx_quic_create_stream,and this is stream connection, not the main quic connection.
> How the main quic connection c->quic set?
The main quic connection is created in ngx_quic_new_connection(), which calls ngx_quic_open_sockets(); this sets c->udp for the first time. When a packet arrives, c->udp is updated by ngx_lookup_udp_connection(). The main connection does not have c->quic set; this is used in stream connections. To access the main connection from a quic stream, c->quic->parent may be used.

>
> And the local code at this position:
> changeset: 8813:c37ea624c307
> branch: quic
> tag: tip
> user: Roman Arutyunyan
> date: Fri Jan 21 11:20:18 2022 +0300
> summary: QUIC: changed debug message.

Can you confirm that the problem occurred with this code and no other patches? In any case, it would be useful to enable debugging and get a debug log, or at least reproduce on a binary built without optimization to get a meaningful backtrace.

From mdounin at mdounin.ru Thu Jan 27 00:10:04 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 27 Jan 2022 03:10:04 +0300
Subject: Prioritize `X-Accel-Expires` than `Cache-Control` and `Expires` (#964)
In-Reply-To:
References:
Message-ID:

Hello!

On Tue, Jan 25, 2022 at 12:27:58PM +0900, Yugo Horie wrote:

> changeset: 7997:86f70e48a64a
> branch: issue-964
> tag: tip
> user: Yugo Horie
> date: Tue Jan 25 12:16:05 2022 +0900
> files: src/http/ngx_http_upstream.c src/http/ngx_http_upstream.h
> description:
> Prioritize `X-Accel-Expires` than `Cache-Control` and `Expires` (#964)
>
> We introduce 3 flags that indicate to be overwriting cache control behavior.
>
> * The `overwrite_noncache` switches on the case of not to be cached when
> processing `Cache-Control` and `Expires` headers from upstream.
>
> * The `overwrite_stale_xxx` flags also switch on when it's enabled to use
> stale cache behavior on processing those headers.
>
> * `process_accel_expires` watches these flags, which invalidates their non-cache
> and stale behavior which had been set in other headers to prioritize
> `X-Accel-Expires`.
> > user: Yugo Horie > changed src/http/ngx_http_upstream.c > changed src/http/ngx_http_upstream.h > > > diff -r 5d88e2bf92b3 -r 86f70e48a64a src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Sat Jan 22 00:28:51 2022 +0300 > +++ b/src/http/ngx_http_upstream.c Tue Jan 25 12:16:05 2022 +0900 > @@ -4747,6 +4747,7 @@ > || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != > NULL) > { > u->cacheable = 0; > + u->overwrite_noncache = 1; > return NGX_OK; > } > > @@ -4772,11 +4773,13 @@ > } > > u->cacheable = 0; > + u->overwrite_noncache = 1; > return NGX_OK; > } > > if (n == 0) { > u->cacheable = 0; > + u->overwrite_noncache = 1; > return NGX_OK; > } > > @@ -4800,9 +4803,12 @@ > } > > u->cacheable = 0; > + u->overwrite_noncache = 1; > return NGX_OK; > } > > + u->overwrite_stale_updating = 1; > + u->overwrite_stale_error = 1; > r->cache->updating_sec = n; > r->cache->error_sec = n; > } > @@ -4822,10 +4828,12 @@ > continue; > } > > + u->overwrite_noncache = 1; > u->cacheable = 0; > return NGX_OK; > } > > + u->overwrite_stale_error = 1; > r->cache->error_sec = n; > } > } > @@ -4863,6 +4871,7 @@ > expires = ngx_parse_http_time(h->value.data, h->value.len); > > if (expires == NGX_ERROR || expires < ngx_time()) { > + u->overwrite_noncache = 1; > u->cacheable = 0; > return NGX_OK; > } > @@ -4897,6 +4906,15 @@ > if (r->cache == NULL) { > return NGX_OK; > } > + if (u->overwrite_noncache) { > + u->cacheable = 1; > + } > + if (u->overwrite_stale_updating) { > + r->cache->updating_sec = 0; > + } > + if (u->overwrite_stale_error) { > + r->cache->error_sec = 0; > + } > > len = h->value.len; > p = h->value.data; > diff -r 5d88e2bf92b3 -r 86f70e48a64a src/http/ngx_http_upstream.h > --- a/src/http/ngx_http_upstream.h Sat Jan 22 00:28:51 2022 +0300 > +++ b/src/http/ngx_http_upstream.h Tue Jan 25 12:16:05 2022 +0900 > @@ -386,6 +386,9 @@ > > unsigned store:1; > unsigned cacheable:1; > + unsigned overwrite_noncache:1; > + unsigned overwrite_stale_updating:1; > + 
unsigned overwrite_stale_error:1;
> unsigned accel:1;
> unsigned ssl:1;
> #if (NGX_HTTP_CACHE)

Thank you for the patch.

As already suggested in ticket #2309, the approach taken looks too fragile. For example, the following set of headers will result in caching being incorrectly enabled (while it should be disabled due to the Set-Cookie header):

Set-Cookie: foo=bar
Cache-Control: no-cache
X-Accel-Expires: 100

A better solution might be to save parsing results somewhere in u->headers_in, and apply these parsing results in a separate step after parsing all headers, probably somewhere in ngx_http_upstream_process_headers(). A similar implementation can be seen, for example, in Content-Length and Transfer-Encoding parsing.

--
Maxim Dounin
http://mdounin.ru/

From dnj0496 at gmail.com Thu Jan 27 00:55:52 2022
From: dnj0496 at gmail.com (Dk Jack)
Date: Wed, 26 Jan 2022 16:55:52 -0800
Subject: request body filter last_buf
Message-ID:

Hi,
in my module I am inspecting the request body and making certain decisions, such as sending a 403 based on the content in the body. I based my implementation on the examples in the documentation and other nginx modules.

http://nginx.org/en/docs/dev/development_guide.html#http_request_body_filters

Sometimes, when my body_filter handler is invoked, I accumulate the body into a single buffer for processing in my module. To do this, I have to first get the length of the body. To get the length, I cycle through the body buffer chain. I also look for the last_buf flag to be set on the last buffer in the chain. The presence of last_buf tells me that I have the complete body. However, sometimes I've noticed that the last_buf flag is not set (I log such requests), in which case I cannot process the body.

Under what conditions would the flag not be set when the body_filter handler is invoked? Does the body filter handler get invoked multiple times or only once?
Is my assumption that the last_buf flag will always be set when the body-filter handler is invoked correct? Any help is appreciated. regards, Dk -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 27 01:22:52 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Jan 2022 04:22:52 +0300 Subject: request body filter last_buf In-Reply-To: References: Message-ID: Hello! On Wed, Jan 26, 2022 at 04:55:52PM -0800, Dk Jack wrote: > Hi, > in my module I am inspecting the request body and making certain > decisions such as sending a 403 based on the content in the body. I based > my implementation based on the examples in the documentation and other > nginx modules. > > http://nginx.org/en/docs/dev/development_guide.html#http_request_body_filters > > Sometimes, when my body_filter handler is invoked, I accumulate the body > into a single buffer for processing in my module. To do this, I have to > first get the length of the body. To get the length, I cycle through the > body buffer chain. I also look for the last_buf to be set for the last > buffer in the chain. The presence of the last_buf tells me that I have the > complete body. However, sometimes I've noticed that the last_buf flag is > not set (I log such requests), in which case I cannot process the body. > > Under what conditions would the flag be not set when the body_filter > handler is invoked? Does the body filter handler get invoked multiple times > or only once? Is my assumption that the last_buf flag will always be set > when the body-filter handler is invoked correct? Any help is appreciated. The last_buf flag is only guaranteed following 7913:185c86b830ef (http://hg.nginx.org/nginx/rev/185c86b830ef, nginx 1.21.2). Before this, it wasn't present, for example, in requests with empty request body (with "Content-Length: 0"). 
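The accumulation pattern discussed in this thread can be sketched as a self-contained toy (buf_t and chain_t here are invented, simplified stand-ins for nginx's ngx_buf_t and ngx_chain_t, not the real types):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for ngx_buf_t / ngx_chain_t; the real nginx
 * structures carry many more fields. */
typedef struct buf_s {
    char      *pos;       /* first unread byte */
    char      *last;      /* one past the last byte */
    unsigned   last_buf;  /* set on the final buffer of the body */
} buf_t;

typedef struct chain_s {
    buf_t           *buf;
    struct chain_s  *next;
} chain_t;

/* Walk the chain, summing buffered bytes; *complete is set only if the
 * last_buf flag was seen, i.e. the whole body has arrived. */
static size_t
body_length(chain_t *in, int *complete)
{
    size_t    len;
    chain_t  *cl;

    len = 0;
    *complete = 0;

    for (cl = in; cl; cl = cl->next) {
        len += (size_t) (cl->buf->last - cl->buf->pos);

        if (cl->buf->last_buf) {
            *complete = 1;
        }
    }

    return len;
}
```

In real module code the walk is over the ngx_chain_t passed to the body filter, and the handler may be called more than once as data arrives, which is why the completeness flag matters before processing the accumulated body.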
--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Thu Jan 27 01:24:01 2022
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 27 Jan 2022 04:24:01 +0300
Subject: [PATCH] HTTP/2: made it possible to flush response headers (ticket #1743)
Message-ID:

# HG changeset patch
# User Maxim Dounin
# Date 1643225036 -10800
#      Wed Jan 26 22:23:56 2022 +0300
# Node ID f76e0cf8e525996563e5f0092fa48a4fee873e93
# Parent  56ead48cfe885e8b89b30017459bf621b21d95f5
HTTP/2: made it possible to flush response headers (ticket #1743).

Response headers can be buffered in the SSL buffer. But the stream's fake connection buffered flag did not reflect this, so any attempts to flush the buffer without sending additional data were stopped by the write filter.

It does not seem to be possible to reflect this in fc->buffered though, as we never know if the main connection's c->buffered corresponds to the particular stream or not. As such, fc->buffered might prevent request finalization due to sending data on some other stream.

The fix is to implement handling of flush buffers when c->need_last_buf is set, similarly to the existing last buffer handling.
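The write filter condition this patch changes can be modelled as a stand-alone predicate (a simplified sketch with invented names; the real ngx_http_write_filter() tracks more state):

```c
#include <assert.h>

/* Decide whether the filter proceeds to the connection's output handler
 * even though there is nothing new to send (size == 0).  With the patch,
 * a flush (not only the last buffer) forces the call when need_last_buf
 * is set, so headers buffered below (e.g. in the SSL buffer) get pushed
 * out instead of being silently dropped from r->out. */
static int
must_call_output_handler(int size, int last, int flush,
    int lowlevel_buffered, int need_last_buf)
{
    if (size == 0
        && !lowlevel_buffered
        && !((last || flush) && need_last_buf))
    {
        return 0;   /* nothing to do; the filter short-circuits */
    }

    return 1;
}
```

Before the patch the short-circuit condition tested only `last && need_last_buf`, so a flush of buffered headers never reached the handler.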
diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c +++ b/src/http/ngx_http_write_filter_module.c @@ -227,7 +227,7 @@ ngx_http_write_filter(ngx_http_request_t if (size == 0 && !(c->buffered & NGX_LOWLEVEL_BUFFERED) - && !(last && c->need_last_buf)) + && !((last || flush) && c->need_last_buf)) { if (last || flush || sync) { for (cl = r->out; cl; /* void */) { diff --git a/src/http/v2/ngx_http_v2_filter_module.c b/src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c +++ b/src/http/v2/ngx_http_v2_filter_module.c @@ -1815,7 +1815,11 @@ ngx_http_v2_waiting_queue(ngx_http_v2_co static ngx_inline ngx_int_t ngx_http_v2_filter_send(ngx_connection_t *fc, ngx_http_v2_stream_t *stream) { - if (stream->queued == 0) { + ngx_connection_t *c; + + c = stream->connection->connection; + + if (stream->queued == 0 && !c->buffered) { fc->buffered &= ~NGX_HTTP_V2_BUFFERED; return NGX_OK; } From gaoyan09 at baidu.com Thu Jan 27 04:33:08 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Thu, 27 Jan 2022 04:33:08 +0000 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com> References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com> Message-ID: > The main quic connection is created in ngx_quic_new_connection(), which > calls ngx_quic_open_sockets() and it sets c->udp for the first time. > When packet arrives, c->udp is updated by ngx_lookup_udp_connection(). > The main connection does not have c->quic set; this is used in stream > connections. To access main connection from quic stream, c->quic->parent > may be used. 
ngx_event_recvmsg->(ls->handler) ngx_http_init_connection->ngx_http_v3_init:

if (c->quic == NULL) {
    h3scf->quic.timeout = clcf->keepalive_timeout;
    ngx_quic_run(c, &h3scf->quic);
    return;
}

And why check c->quic == NULL, as it is never set? c->read->handler = ngx_quic_input_handler; is set in ngx_quic_run.

ngx_quic_close_connection may be called in ngx_quic_input_handler, and it finally calls ngx_ssl_shutdown(c), which cannot return immediately, as c->quic is never set. And ngx_handle_read_event may finally add c->read to the events group.

Gao,Yan(ACG VCP)

From: "Gao,Yan(媒体云)"
Date: Wednesday, January 26, 2022, 6:00 PM
To: "nginx-devel at nginx.org"
Subject: Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

> the case you are describing is not what see in backtrace. And in
> described case connection is main quic connection which has process
> c->quic pointer set.

I only find sc->quic = qs; in ngx_quic_create_stream,and this is stream connection, not the main quic connection.
How the main quic connection c->quic set?

And the local code at this position:
changeset: 8813:c37ea624c307
branch: quic
tag: tip
user: Roman Arutyunyan
date: Fri Jan 21 11:20:18 2022 +0300
summary: QUIC: changed debug message.

Gao,Yan(ACG VCP)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From dnj0496 at gmail.com Thu Jan 27 04:37:25 2022
From: dnj0496 at gmail.com (Dk Jack)
Date: Wed, 26 Jan 2022 20:37:25 -0800
Subject: request body filter last_buf
In-Reply-To:
References:
Message-ID:

Thanks Maxim,
Are there any other situations where last_buf would not be set besides the case of content-length being zero?

On Wed, Jan 26, 2022 at 5:23 PM Maxim Dounin wrote:

> Hello!
>
> On Wed, Jan 26, 2022 at 04:55:52PM -0800, Dk Jack wrote:
>
> > Hi,
> > in my module I am inspecting the request body and making certain
> > decisions such as sending a 403 based on the content in the body.
I based > > my implementation based on the examples in the documentation and other > > nginx modules. > > > > > http://nginx.org/en/docs/dev/development_guide.html#http_request_body_filters > > > > Sometimes, when my body_filter handler is invoked, I accumulate the body > > into a single buffer for processing in my module. To do this, I have to > > first get the length of the body. To get the length, I cycle through the > > body buffer chain. I also look for the last_buf to be set for the last > > buffer in the chain. The presence of the last_buf tells me that I have > the > > complete body. However, sometimes I've noticed that the last_buf flag is > > not set (I log such requests), in which case I cannot process the body. > > > > Under what conditions would the flag be not set when the body_filter > > handler is invoked? Does the body filter handler get invoked multiple > times > > or only once? Is my assumption that the last_buf flag will always be set > > when the body-filter handler is invoked correct? Any help is > appreciated. > > The last_buf flag is only guaranteed following 7913:185c86b830ef > (http://hg.nginx.org/nginx/rev/185c86b830ef, nginx 1.21.2). > Before this, it wasn't present, for example, in requests > with empty request body (with "Content-Length: 0"). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Thu Jan 27 11:43:19 2022 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 27 Jan 2022 11:43:19 +0000 Subject: [nginx] Version bump. Message-ID: details: https://hg.nginx.org/nginx/rev/60b8f529db13 branches: changeset: 8000:60b8f529db13 user: Vladimir Homutov date: Thu Jan 27 13:44:09 2022 +0300 description: Version bump. 
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r 56ead48cfe88 -r 60b8f529db13 src/core/nginx.h --- a/src/core/nginx.h Tue Jan 25 18:03:52 2022 +0300 +++ b/src/core/nginx.h Thu Jan 27 13:44:09 2022 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1021006 -#define NGINX_VERSION "1.21.6" +#define nginx_version 1021007 +#define NGINX_VERSION "1.21.7" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From vl at nginx.com Thu Jan 27 11:43:22 2022 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 27 Jan 2022 11:43:22 +0000 Subject: [nginx] Core: the ngx_event_udp.h header file. Message-ID: details: https://hg.nginx.org/nginx/rev/8206ecdcd837 branches: changeset: 8001:8206ecdcd837 user: Vladimir Homutov date: Tue Jan 25 15:41:48 2022 +0300 description: Core: the ngx_event_udp.h header file. diffstat: auto/sources | 3 ++- src/event/ngx_event.h | 7 +------ src/event/ngx_event_udp.h | 24 ++++++++++++++++++++++++ 3 files changed, 27 insertions(+), 7 deletions(-) diffs (65 lines): diff -r 60b8f529db13 -r 8206ecdcd837 auto/sources --- a/auto/sources Thu Jan 27 13:44:09 2022 +0300 +++ b/auto/sources Tue Jan 25 15:41:48 2022 +0300 @@ -89,7 +89,8 @@ EVENT_DEPS="src/event/ngx_event.h \ src/event/ngx_event_timer.h \ src/event/ngx_event_posted.h \ src/event/ngx_event_connect.h \ - src/event/ngx_event_pipe.h" + src/event/ngx_event_pipe.h \ + src/event/ngx_event_udp.h" EVENT_SRCS="src/event/ngx_event.c \ src/event/ngx_event_timer.c \ diff -r 60b8f529db13 -r 8206ecdcd837 src/event/ngx_event.h --- a/src/event/ngx_event.h Thu Jan 27 13:44:09 2022 +0300 +++ b/src/event/ngx_event.h Tue Jan 25 15:41:48 2022 +0300 @@ -494,12 +494,6 @@ extern ngx_module_t ngx_event_ void ngx_event_accept(ngx_event_t *ev); -#if !(NGX_WIN32) -void ngx_event_recvmsg(ngx_event_t *ev); -void ngx_udp_rbtree_insert_value(ngx_rbtree_node_t *temp, - ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); -#endif -void 
ngx_delete_udp_connection(void *data); ngx_int_t ngx_trylock_accept_mutex(ngx_cycle_t *cycle); ngx_int_t ngx_enable_accept_events(ngx_cycle_t *cycle); u_char *ngx_accept_log_error(ngx_log_t *log, u_char *buf, size_t len); @@ -529,6 +523,7 @@ ngx_int_t ngx_send_lowat(ngx_connection_ #include #include +#include #if (NGX_WIN32) #include diff -r 60b8f529db13 -r 8206ecdcd837 src/event/ngx_event_udp.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/event/ngx_event_udp.h Tue Jan 25 15:41:48 2022 +0300 @@ -0,0 +1,24 @@ + +/* + * Copyright (C) Nginx, Inc. + */ + + +#ifndef _NGX_EVENT_UDP_H_INCLUDED_ +#define _NGX_EVENT_UDP_H_INCLUDED_ + + +#include +#include + + +#if !(NGX_WIN32) +void ngx_event_recvmsg(ngx_event_t *ev); +void ngx_udp_rbtree_insert_value(ngx_rbtree_node_t *temp, + ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); +#endif + +void ngx_delete_udp_connection(void *data); + + +#endif /* _NGX_EVENT_UDP_H_INCLUDED_ */ From vl at nginx.com Thu Jan 27 11:43:25 2022 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 27 Jan 2022 11:43:25 +0000 Subject: [nginx] Core: made the ngx_sendmsg() function non-static. Message-ID: details: https://hg.nginx.org/nginx/rev/cfe1284e5d1d branches: changeset: 8002:cfe1284e5d1d user: Vladimir Homutov date: Tue Jan 25 15:48:56 2022 +0300 description: Core: made the ngx_sendmsg() function non-static. The NGX_HAVE_ADDRINFO_CMSG macro is defined when at least one of methods to deal with corresponding control message is available. 
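The control-message setup that these changes consolidate can be illustrated with a minimal stand-alone Linux sketch that prepares an IP_PKTINFO cmsg in the same general way (a simplified sketch, not the nginx code itself; set_src_addr_cmsg is an invented name, and IP_PKTINFO availability is Linux-specific):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Fill a single IP_PKTINFO control message carrying the desired IPv4
 * source address, and return the control data length that the caller
 * should store in msg_controllen before calling sendmsg(). */
static size_t
set_src_addr_cmsg(struct msghdr *msg, unsigned char *control,
    size_t control_len, struct in_addr src)
{
    struct cmsghdr     *cmsg;
    struct in_pktinfo  *pkt;

    memset(control, 0, control_len);
    msg->msg_control = control;
    msg->msg_controllen = control_len;

    cmsg = CMSG_FIRSTHDR(msg);
    cmsg->cmsg_level = IPPROTO_IP;
    cmsg->cmsg_type = IP_PKTINFO;
    cmsg->cmsg_len = CMSG_LEN(sizeof(struct in_pktinfo));

    pkt = (struct in_pktinfo *) CMSG_DATA(cmsg);
    memset(pkt, 0, sizeof(struct in_pktinfo));
    pkt->ipi_spec_dst = src;    /* source address for outgoing datagram */

    return CMSG_SPACE(sizeof(struct in_pktinfo));
}
```

This mirrors the general shape of the consolidation above: one helper computes the cmsg and returns the space consumed, instead of each caller open-coding the per-family CMSG_* arithmetic.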
diffstat: src/event/ngx_event_udp.h | 32 ++++++ src/os/unix/ngx_udp_sendmsg_chain.c | 169 ++++++++++++++++++++--------------- 2 files changed, 129 insertions(+), 72 deletions(-) diffs (279 lines): diff -r 8206ecdcd837 -r cfe1284e5d1d src/event/ngx_event_udp.h --- a/src/event/ngx_event_udp.h Tue Jan 25 15:41:48 2022 +0300 +++ b/src/event/ngx_event_udp.h Tue Jan 25 15:48:56 2022 +0300 @@ -13,7 +13,39 @@ #if !(NGX_WIN32) + +#if ((NGX_HAVE_MSGHDR_MSG_CONTROL) \ + && (NGX_HAVE_IP_SENDSRCADDR || NGX_HAVE_IP_RECVDSTADDR \ + || NGX_HAVE_IP_PKTINFO \ + || (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO))) +#define NGX_HAVE_ADDRINFO_CMSG 1 + +#endif + + +#if (NGX_HAVE_ADDRINFO_CMSG) + +typedef union { +#if (NGX_HAVE_IP_SENDSRCADDR || NGX_HAVE_IP_RECVDSTADDR) + struct in_addr addr; +#endif + +#if (NGX_HAVE_IP_PKTINFO) + struct in_pktinfo pkt; +#endif + +#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) + struct in6_pktinfo pkt6; +#endif +} ngx_addrinfo_t; + +size_t ngx_set_srcaddr_cmsg(struct cmsghdr *cmsg, + struct sockaddr *local_sockaddr); + +#endif + void ngx_event_recvmsg(ngx_event_t *ev); +ssize_t ngx_sendmsg(ngx_connection_t *c, struct msghdr *msg, int flags); void ngx_udp_rbtree_insert_value(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); #endif diff -r 8206ecdcd837 -r cfe1284e5d1d src/os/unix/ngx_udp_sendmsg_chain.c --- a/src/os/unix/ngx_udp_sendmsg_chain.c Tue Jan 25 15:41:48 2022 +0300 +++ b/src/os/unix/ngx_udp_sendmsg_chain.c Tue Jan 25 15:48:56 2022 +0300 @@ -12,7 +12,7 @@ static ngx_chain_t *ngx_udp_output_chain_to_iovec(ngx_iovec_t *vec, ngx_chain_t *in, ngx_log_t *log); -static ssize_t ngx_sendmsg(ngx_connection_t *c, ngx_iovec_t *vec); +static ssize_t ngx_sendmsg_vec(ngx_connection_t *c, ngx_iovec_t *vec); ngx_chain_t * @@ -88,7 +88,7 @@ ngx_udp_unix_sendmsg_chain(ngx_connectio send += vec.size; - n = ngx_sendmsg(c, &vec); + n = ngx_sendmsg_vec(c, &vec); if (n == NGX_ERROR) { return NGX_CHAIN_ERROR; @@ -204,24 +204,13 @@ 
ngx_udp_output_chain_to_iovec(ngx_iovec_ static ssize_t -ngx_sendmsg(ngx_connection_t *c, ngx_iovec_t *vec) +ngx_sendmsg_vec(ngx_connection_t *c, ngx_iovec_t *vec) { - ssize_t n; - ngx_err_t err; - struct msghdr msg; - -#if (NGX_HAVE_MSGHDR_MSG_CONTROL) + struct msghdr msg; -#if (NGX_HAVE_IP_SENDSRCADDR) - u_char msg_control[CMSG_SPACE(sizeof(struct in_addr))]; -#elif (NGX_HAVE_IP_PKTINFO) - u_char msg_control[CMSG_SPACE(sizeof(struct in_pktinfo))]; -#endif - -#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) - u_char msg_control6[CMSG_SPACE(sizeof(struct in6_pktinfo))]; -#endif - +#if (NGX_HAVE_ADDRINFO_CMSG) + struct cmsghdr *cmsg; + u_char msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))]; #endif ngx_memzero(&msg, sizeof(struct msghdr)); @@ -234,88 +223,115 @@ ngx_sendmsg(ngx_connection_t *c, ngx_iov msg.msg_iov = vec->iovs; msg.msg_iovlen = vec->count; -#if (NGX_HAVE_MSGHDR_MSG_CONTROL) +#if (NGX_HAVE_ADDRINFO_CMSG) + if (c->listening && c->listening->wildcard && c->local_sockaddr) { + + msg.msg_control = msg_control; + msg.msg_controllen = sizeof(msg_control); + ngx_memzero(msg_control, sizeof(msg_control)); + + cmsg = CMSG_FIRSTHDR(&msg); + + msg.msg_controllen = ngx_set_srcaddr_cmsg(cmsg, c->local_sockaddr); + } +#endif + + return ngx_sendmsg(c, &msg, 0); +} + + +#if (NGX_HAVE_ADDRINFO_CMSG) - if (c->listening && c->listening->wildcard && c->local_sockaddr) { +size_t +ngx_set_srcaddr_cmsg(struct cmsghdr *cmsg, struct sockaddr *local_sockaddr) +{ + size_t len; +#if (NGX_HAVE_IP_SENDSRCADDR) + struct in_addr *addr; + struct sockaddr_in *sin; +#elif (NGX_HAVE_IP_PKTINFO) + struct in_pktinfo *pkt; + struct sockaddr_in *sin; +#endif + +#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) + struct in6_pktinfo *pkt6; + struct sockaddr_in6 *sin6; +#endif + + +#if (NGX_HAVE_IP_SENDSRCADDR) || (NGX_HAVE_IP_PKTINFO) + + if (local_sockaddr->sa_family == AF_INET) { + + cmsg->cmsg_level = IPPROTO_IP; #if (NGX_HAVE_IP_SENDSRCADDR) - if (c->local_sockaddr->sa_family == AF_INET) 
{ - struct cmsghdr *cmsg; - struct in_addr *addr; - struct sockaddr_in *sin; - - msg.msg_control = &msg_control; - msg.msg_controllen = sizeof(msg_control); + cmsg->cmsg_type = IP_SENDSRCADDR; + cmsg->cmsg_len = CMSG_LEN(sizeof(struct in_addr)); + len = CMSG_SPACE(sizeof(struct in_addr)); - cmsg = CMSG_FIRSTHDR(&msg); - cmsg->cmsg_level = IPPROTO_IP; - cmsg->cmsg_type = IP_SENDSRCADDR; - cmsg->cmsg_len = CMSG_LEN(sizeof(struct in_addr)); + sin = (struct sockaddr_in *) local_sockaddr; - sin = (struct sockaddr_in *) c->local_sockaddr; - - addr = (struct in_addr *) CMSG_DATA(cmsg); - *addr = sin->sin_addr; - } + addr = (struct in_addr *) CMSG_DATA(cmsg); + *addr = sin->sin_addr; #elif (NGX_HAVE_IP_PKTINFO) - if (c->local_sockaddr->sa_family == AF_INET) { - struct cmsghdr *cmsg; - struct in_pktinfo *pkt; - struct sockaddr_in *sin; + cmsg->cmsg_type = IP_PKTINFO; + cmsg->cmsg_len = CMSG_LEN(sizeof(struct in_pktinfo)); + len = CMSG_SPACE(sizeof(struct in_pktinfo)); - msg.msg_control = &msg_control; - msg.msg_controllen = sizeof(msg_control); + sin = (struct sockaddr_in *) local_sockaddr; - cmsg = CMSG_FIRSTHDR(&msg); - cmsg->cmsg_level = IPPROTO_IP; - cmsg->cmsg_type = IP_PKTINFO; - cmsg->cmsg_len = CMSG_LEN(sizeof(struct in_pktinfo)); + pkt = (struct in_pktinfo *) CMSG_DATA(cmsg); + ngx_memzero(pkt, sizeof(struct in_pktinfo)); + pkt->ipi_spec_dst = sin->sin_addr; - sin = (struct sockaddr_in *) c->local_sockaddr; - - pkt = (struct in_pktinfo *) CMSG_DATA(cmsg); - ngx_memzero(pkt, sizeof(struct in_pktinfo)); - pkt->ipi_spec_dst = sin->sin_addr; - } +#endif + return len; + } #endif #if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) + if (local_sockaddr->sa_family == AF_INET6) { - if (c->local_sockaddr->sa_family == AF_INET6) { - struct cmsghdr *cmsg; - struct in6_pktinfo *pkt6; - struct sockaddr_in6 *sin6; + cmsg->cmsg_level = IPPROTO_IPV6; + cmsg->cmsg_type = IPV6_PKTINFO; + cmsg->cmsg_len = CMSG_LEN(sizeof(struct in6_pktinfo)); + len = CMSG_SPACE(sizeof(struct 
in6_pktinfo)); - msg.msg_control = &msg_control6; - msg.msg_controllen = sizeof(msg_control6); + sin6 = (struct sockaddr_in6 *) local_sockaddr; - cmsg = CMSG_FIRSTHDR(&msg); - cmsg->cmsg_level = IPPROTO_IPV6; - cmsg->cmsg_type = IPV6_PKTINFO; - cmsg->cmsg_len = CMSG_LEN(sizeof(struct in6_pktinfo)); + pkt6 = (struct in6_pktinfo *) CMSG_DATA(cmsg); + ngx_memzero(pkt6, sizeof(struct in6_pktinfo)); + pkt6->ipi6_addr = sin6->sin6_addr; - sin6 = (struct sockaddr_in6 *) c->local_sockaddr; + return len; + } +#endif - pkt6 = (struct in6_pktinfo *) CMSG_DATA(cmsg); - ngx_memzero(pkt6, sizeof(struct in6_pktinfo)); - pkt6->ipi6_addr = sin6->sin6_addr; - } + return 0; +} #endif - } + +ssize_t +ngx_sendmsg(ngx_connection_t *c, struct msghdr *msg, int flags) +{ + ssize_t n; + ngx_err_t err; +#if (NGX_DEBUG) + size_t size; + ngx_uint_t i; #endif eintr: - n = sendmsg(c->fd, &msg, 0); - - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "sendmsg: %z of %uz", n, vec->size); + n = sendmsg(c->fd, msg, flags); if (n == -1) { err = ngx_errno; @@ -338,5 +354,14 @@ eintr: } } +#if (NGX_DEBUG) + for (i = 0, size = 0; i < (size_t) msg->msg_iovlen; i++) { + size += msg->msg_iov[i].iov_len; + } + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "sendmsg: %z of %uz", n, size); +#endif + return n; } From vl at nginx.com Thu Jan 27 11:43:28 2022 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 27 Jan 2022 11:43:28 +0000 Subject: [nginx] Core: added function for local source address cmsg. Message-ID: details: https://hg.nginx.org/nginx/rev/0f6cc8f73744 branches: changeset: 8003:0f6cc8f73744 user: Vladimir Homutov date: Tue Jan 25 15:48:58 2022 +0300 description: Core: added function for local source address cmsg. 
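As an editorial aside: the refactoring above moves the cmsg construction into a single helper. The layout it produces can be sketched in a standalone way roughly as follows, assuming the Linux IP_PKTINFO case; the function and buffer names here are illustrative, not nginx's:

```c
/* Sketch (not nginx code) of building a sendmsg() control message that
 * pins the local source address via IP_PKTINFO, as the patched
 * ngx_set_srcaddr_cmsg() does for wildcard UDP listeners on Linux. */
#define _GNU_SOURCE           /* struct in_pktinfo on glibc */
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static size_t
set_srcaddr_cmsg(struct msghdr *msg, unsigned char *buf, size_t buflen,
    const struct sockaddr_in *local)
{
    struct cmsghdr     *cmsg;
    struct in_pktinfo  *pkt;

    memset(buf, 0, buflen);
    msg->msg_control = buf;
    msg->msg_controllen = buflen;

    cmsg = CMSG_FIRSTHDR(msg);
    cmsg->cmsg_level = IPPROTO_IP;
    cmsg->cmsg_type = IP_PKTINFO;
    cmsg->cmsg_len = CMSG_LEN(sizeof(struct in_pktinfo));

    pkt = (struct in_pktinfo *) CMSG_DATA(cmsg);
    memset(pkt, 0, sizeof(struct in_pktinfo));

    /* the kernel uses ipi_spec_dst as the source address of the datagram */
    pkt->ipi_spec_dst = local->sin_addr;

    return CMSG_SPACE(sizeof(struct in_pktinfo));
}
```

Splitting this out lets both the chain writer and a plain msghdr-based sender (as QUIC needs) share one cmsg layout instead of duplicating the three per-platform branches.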
diffstat: src/event/ngx_event_udp.c | 92 ++++-------------------------------- src/event/ngx_event_udp.h | 2 + src/os/unix/ngx_udp_sendmsg_chain.c | 65 ++++++++++++++++++++++++++ 3 files changed, 77 insertions(+), 82 deletions(-) diffs (221 lines): diff -r cfe1284e5d1d -r 0f6cc8f73744 src/event/ngx_event_udp.c --- a/src/event/ngx_event_udp.c Tue Jan 25 15:48:56 2022 +0300 +++ b/src/event/ngx_event_udp.c Tue Jan 25 15:48:58 2022 +0300 @@ -46,18 +46,8 @@ ngx_event_recvmsg(ngx_event_t *ev) ngx_connection_t *c, *lc; static u_char buffer[65535]; -#if (NGX_HAVE_MSGHDR_MSG_CONTROL) - -#if (NGX_HAVE_IP_RECVDSTADDR) - u_char msg_control[CMSG_SPACE(sizeof(struct in_addr))]; -#elif (NGX_HAVE_IP_PKTINFO) - u_char msg_control[CMSG_SPACE(sizeof(struct in_pktinfo))]; -#endif - -#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) - u_char msg_control6[CMSG_SPACE(sizeof(struct in6_pktinfo))]; -#endif - +#if (NGX_HAVE_ADDRINFO_CMSG) + u_char msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))]; #endif if (ev->timedout) { @@ -92,25 +82,13 @@ ngx_event_recvmsg(ngx_event_t *ev) msg.msg_iov = iov; msg.msg_iovlen = 1; -#if (NGX_HAVE_MSGHDR_MSG_CONTROL) - +#if (NGX_HAVE_ADDRINFO_CMSG) if (ls->wildcard) { + msg.msg_control = &msg_control; + msg.msg_controllen = sizeof(msg_control); -#if (NGX_HAVE_IP_RECVDSTADDR || NGX_HAVE_IP_PKTINFO) - if (ls->sockaddr->sa_family == AF_INET) { - msg.msg_control = &msg_control; - msg.msg_controllen = sizeof(msg_control); - } -#endif - -#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) - if (ls->sockaddr->sa_family == AF_INET6) { - msg.msg_control = &msg_control6; - msg.msg_controllen = sizeof(msg_control6); - } -#endif - } - + ngx_memzero(&msg_control, sizeof(msg_control)); + } #endif n = recvmsg(lc->fd, &msg, 0); @@ -129,7 +107,7 @@ ngx_event_recvmsg(ngx_event_t *ev) return; } -#if (NGX_HAVE_MSGHDR_MSG_CONTROL) +#if (NGX_HAVE_ADDRINFO_CMSG) if (msg.msg_flags & (MSG_TRUNC|MSG_CTRUNC)) { ngx_log_error(NGX_LOG_ALERT, ev->log, 0, "recvmsg() truncated data"); @@ 
-159,7 +137,7 @@ ngx_event_recvmsg(ngx_event_t *ev) local_sockaddr = ls->sockaddr; local_socklen = ls->socklen; -#if (NGX_HAVE_MSGHDR_MSG_CONTROL) +#if (NGX_HAVE_ADDRINFO_CMSG) if (ls->wildcard) { struct cmsghdr *cmsg; @@ -171,59 +149,9 @@ ngx_event_recvmsg(ngx_event_t *ev) cmsg != NULL; cmsg = CMSG_NXTHDR(&msg, cmsg)) { - -#if (NGX_HAVE_IP_RECVDSTADDR) - - if (cmsg->cmsg_level == IPPROTO_IP - && cmsg->cmsg_type == IP_RECVDSTADDR - && local_sockaddr->sa_family == AF_INET) - { - struct in_addr *addr; - struct sockaddr_in *sin; - - addr = (struct in_addr *) CMSG_DATA(cmsg); - sin = (struct sockaddr_in *) local_sockaddr; - sin->sin_addr = *addr; - + if (ngx_get_srcaddr_cmsg(cmsg, local_sockaddr) == NGX_OK) { break; } - -#elif (NGX_HAVE_IP_PKTINFO) - - if (cmsg->cmsg_level == IPPROTO_IP - && cmsg->cmsg_type == IP_PKTINFO - && local_sockaddr->sa_family == AF_INET) - { - struct in_pktinfo *pkt; - struct sockaddr_in *sin; - - pkt = (struct in_pktinfo *) CMSG_DATA(cmsg); - sin = (struct sockaddr_in *) local_sockaddr; - sin->sin_addr = pkt->ipi_addr; - - break; - } - -#endif - -#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) - - if (cmsg->cmsg_level == IPPROTO_IPV6 - && cmsg->cmsg_type == IPV6_PKTINFO - && local_sockaddr->sa_family == AF_INET6) - { - struct in6_pktinfo *pkt6; - struct sockaddr_in6 *sin6; - - pkt6 = (struct in6_pktinfo *) CMSG_DATA(cmsg); - sin6 = (struct sockaddr_in6 *) local_sockaddr; - sin6->sin6_addr = pkt6->ipi6_addr; - - break; - } - -#endif - } } diff -r cfe1284e5d1d -r 0f6cc8f73744 src/event/ngx_event_udp.h --- a/src/event/ngx_event_udp.h Tue Jan 25 15:48:56 2022 +0300 +++ b/src/event/ngx_event_udp.h Tue Jan 25 15:48:58 2022 +0300 @@ -41,6 +41,8 @@ typedef union { size_t ngx_set_srcaddr_cmsg(struct cmsghdr *cmsg, struct sockaddr *local_sockaddr); +ngx_int_t ngx_get_srcaddr_cmsg(struct cmsghdr *cmsg, + struct sockaddr *local_sockaddr); #endif diff -r cfe1284e5d1d -r 0f6cc8f73744 src/os/unix/ngx_udp_sendmsg_chain.c --- 
a/src/os/unix/ngx_udp_sendmsg_chain.c Tue Jan 25 15:48:56 2022 +0300 +++ b/src/os/unix/ngx_udp_sendmsg_chain.c Tue Jan 25 15:48:58 2022 +0300 @@ -316,6 +316,71 @@ ngx_set_srcaddr_cmsg(struct cmsghdr *cms return 0; } + +ngx_int_t +ngx_get_srcaddr_cmsg(struct cmsghdr *cmsg, struct sockaddr *local_sockaddr) +{ + +#if (NGX_HAVE_IP_RECVDSTADDR) + struct in_addr *addr; + struct sockaddr_in *sin; +#elif (NGX_HAVE_IP_PKTINFO) + struct in_pktinfo *pkt; + struct sockaddr_in *sin; +#endif + +#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) + struct in6_pktinfo *pkt6; + struct sockaddr_in6 *sin6; +#endif + + + #if (NGX_HAVE_IP_RECVDSTADDR) + + if (cmsg->cmsg_level == IPPROTO_IP + && cmsg->cmsg_type == IP_RECVDSTADDR + && local_sockaddr->sa_family == AF_INET) + { + addr = (struct in_addr *) CMSG_DATA(cmsg); + sin = (struct sockaddr_in *) local_sockaddr; + sin->sin_addr = *addr; + + return NGX_OK; + } + +#elif (NGX_HAVE_IP_PKTINFO) + + if (cmsg->cmsg_level == IPPROTO_IP + && cmsg->cmsg_type == IP_PKTINFO + && local_sockaddr->sa_family == AF_INET) + { + pkt = (struct in_pktinfo *) CMSG_DATA(cmsg); + sin = (struct sockaddr_in *) local_sockaddr; + sin->sin_addr = pkt->ipi_addr; + + return NGX_OK; + } + +#endif + +#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO) + + if (cmsg->cmsg_level == IPPROTO_IPV6 + && cmsg->cmsg_type == IPV6_PKTINFO + && local_sockaddr->sa_family == AF_INET6) + { + pkt6 = (struct in6_pktinfo *) CMSG_DATA(cmsg); + sin6 = (struct sockaddr_in6 *) local_sockaddr; + sin6->sin6_addr = pkt6->ipi6_addr; + + return NGX_OK; + } + +#endif + + return NGX_DECLINED; +} + #endif From vl at nginx.com Thu Jan 27 11:43:30 2022 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 27 Jan 2022 11:43:30 +0000 Subject: [nginx] Core: added autotest for UDP segmentation offloading. 
Message-ID:

details:   https://hg.nginx.org/nginx/rev/c0a432c0301b
branches:
changeset: 8004:c0a432c0301b
user:      Vladimir Homutov
date:      Wed Jan 26 20:40:00 2022 +0300
description:
Core: added autotest for UDP segmentation offloading.

diffstat:

 auto/os/linux                  | 16 ++++++++++++++++
 src/os/unix/ngx_linux_config.h |  4 ++++
 2 files changed, 20 insertions(+), 0 deletions(-)

diffs (38 lines):

diff -r 0f6cc8f73744 -r c0a432c0301b auto/os/linux
--- a/auto/os/linux	Tue Jan 25 15:48:58 2022 +0300
+++ b/auto/os/linux	Wed Jan 26 20:40:00 2022 +0300
@@ -232,4 +232,20 @@ ngx_feature_test="struct crypt_data cd;
 ngx_include="sys/vfs.h"; . auto/include
 
+# UDP segmentation offloading
+
+ngx_feature="UDP_SEGMENT"
+ngx_feature_name="NGX_HAVE_UDP_SEGMENT"
+ngx_feature_run=no
+ngx_feature_incs="#include 
+                  #include 
+                  #include "
+ngx_feature_path=
+ngx_feature_libs=
+ngx_feature_test="socklen_t optlen = sizeof(int);
+                  int val;
+                  getsockopt(0, SOL_UDP, UDP_SEGMENT, &val, &optlen)"
+. auto/feature
+
+
 
 CC_AUX_FLAGS="$cc_aux_flags -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64"

diff -r 0f6cc8f73744 -r c0a432c0301b src/os/unix/ngx_linux_config.h
--- a/src/os/unix/ngx_linux_config.h	Tue Jan 25 15:48:58 2022 +0300
+++ b/src/os/unix/ngx_linux_config.h	Wed Jan 26 20:40:00 2022 +0300
@@ -103,6 +103,10 @@ typedef struct iocb  ngx_aiocb_t;
 #include 
 #endif
 
+#if (NGX_HAVE_UDP_SEGMENT)
+#include 
+#endif
+
 
 #define NGX_LISTEN_BACKLOG        511

From xeioex at nginx.com  Thu Jan 27 13:03:22 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Thu, 27 Jan 2022 13:03:22 +0000
Subject: [njs] Version bump.
Message-ID:

details:   https://hg.nginx.org/njs/rev/d5d9605a6c55
branches:
changeset: 1818:d5d9605a6c55
user:      Dmitry Volyntsev
date:      Tue Jan 25 19:06:58 2022 +0000
description:
Version bump.
diffstat:

 src/njs.h |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r d73e9c106a97 -r d5d9605a6c55 src/njs.h
--- a/src/njs.h	Tue Jan 25 13:38:25 2022 +0000
+++ b/src/njs.h	Tue Jan 25 19:06:58 2022 +0000
@@ -11,7 +11,7 @@
 #include 
 
-#define NJS_VERSION "0.7.2"
+#define NJS_VERSION "0.7.3"
 
 #include    /* STDOUT_FILENO, STDERR_FILENO */

From xeioex at nginx.com  Thu Jan 27 13:03:23 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Thu, 27 Jan 2022 13:03:23 +0000
Subject: [njs] Shell: added options for custom exit failure code.
Message-ID:

details:   https://hg.nginx.org/njs/rev/328bb7c20642
branches:
changeset: 1819:328bb7c20642
user:      Dmitry Volyntsev
date:      Wed Jan 26 17:24:57 2022 +0000
description:
Shell: added options for custom exit failure code.

diffstat:

 src/njs_shell.c |  19 ++++++++++++++++++-
 1 files changed, 18 insertions(+), 1 deletions(-)

diffs (59 lines):

diff -r d5d9605a6c55 -r 328bb7c20642 src/njs_shell.c
--- a/src/njs_shell.c	Tue Jan 25 19:06:58 2022 +0000
+++ b/src/njs_shell.c	Wed Jan 26 17:24:57 2022 +0000
@@ -36,6 +36,7 @@ typedef struct {
     uint8_t   version;
     uint8_t   ast;
     uint8_t   unhandled_rejection;
+    int       exit_code;
 
     char      *file;
     char      *command;
@@ -319,7 +320,7 @@ done:
 
     njs_options_free(&opts);
 
-    return (ret == NJS_OK) ? EXIT_SUCCESS : EXIT_FAILURE;
+    return (ret == NJS_OK) ? EXIT_SUCCESS : opts.exit_code;
 }
 
@@ -344,6 +345,7 @@ njs_options_parse(njs_opts_t *opts, int 
         "  -a              print AST.\n"
         "  -c              specify the command to execute.\n"
         "  -d              print disassembled code.\n"
+        "  -e              set failure exit code.\n"
         "  -f              disabled denormals mode.\n"
         "  -p              set path prefix for modules.\n"
         "  -q              disable interactive introduction prompt.\n"
@@ -357,8 +359,14 @@ njs_options_parse(njs_opts_t *opts, int 
     ret = NJS_DONE;
 
     opts->denormals = 1;
+    opts->exit_code = EXIT_FAILURE;
     opts->unhandled_rejection = NJS_VM_OPT_UNHANDLED_REJECTION_THROW;
 
+    p = getenv("NJS_EXIT_CODE");
+    if (p != NULL) {
+        opts->exit_code = atoi(p);
+    }
+
     for (i = 1; i < argc; i++) {
         p = argv[i];
 
@@ -396,6 +404,15 @@ njs_options_parse(njs_opts_t *opts, int 
             opts->disassemble = 1;
             break;
 
+        case 'e':
+            if (++i < argc) {
+                opts->exit_code = atoi(argv[i]);
+                break;
+            }
+
+            njs_stderror("option \"-e\" requires argument\n");
+            return NJS_ERROR;
+
         case 'f':
 #if !(NJS_HAVE_DENORMALS_CONTROL)

From xeioex at nginx.com  Thu Jan 27 13:03:25 2022
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Thu, 27 Jan 2022 13:03:25 +0000
Subject: [njs] Types: added async/await support for TS files.
Message-ID:

details:   https://hg.nginx.org/njs/rev/403f7fe07fe8
branches:
changeset: 1820:403f7fe07fe8
user:      Jakub Jirutka
date:      Wed Jan 26 02:44:18 2022 +0100
description:
Types: added async/await support for TS files.

Since 0.7.0 async/await support was added. This closes #461 issue on Github.
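Editorial note on the exit-code change to src/njs_shell.c shown earlier: the patch layers three sources for the failure code, and the precedence can be sketched as below. The helper name and the standalone shape are illustrative only; the real shell reads NJS_EXIT_CODE via getenv() and the -e flag inside its option loop:

```c
/* Sketch of the precedence the njs shell patch implements:
 * built-in EXIT_FAILURE < NJS_EXIT_CODE environment variable < "-e N". */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static int
resolve_exit_code(const char *env_val, int argc, char **argv)
{
    int  i, code;

    code = EXIT_FAILURE;                    /* default, as before the patch */

    if (env_val != NULL) {                  /* NJS_EXIT_CODE=... */
        code = atoi(env_val);
    }

    for (i = 1; i < argc; i++) {            /* "-e N" overrides both */
        if (strcmp(argv[i], "-e") == 0 && i + 1 < argc) {
            code = atoi(argv[i + 1]);
        }
    }

    return code;
}
```

A dedicated failure code lets a test harness distinguish "script threw as expected" from crashes or environment problems, which is what the test262 changes later in this digest rely on.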
diffstat: test/ts/test.ts | 5 ++++- ts/tsconfig.json | 2 +- 2 files changed, 5 insertions(+), 2 deletions(-) diffs (41 lines): diff -r 328bb7c20642 -r 403f7fe07fe8 test/ts/test.ts --- a/test/ts/test.ts Wed Jan 26 17:24:57 2022 +0000 +++ b/test/ts/test.ts Wed Jan 26 02:44:18 2022 +0100 @@ -2,7 +2,7 @@ import fs from 'fs'; import qs from 'querystring'; import cr from 'crypto'; -function http_module(r: NginxHTTPRequest) { +async function http_module(r: NginxHTTPRequest) { var bs: NjsByteString; var s: string; var vod: void; @@ -68,6 +68,7 @@ function http_module(r: NginxHTTPRequest // Warning: vod = r.subrequest('/p/sub9', {detached:true}, reply => r.return(reply.status)); r.subrequest('/p/sub6', 'a=1&b=2').then(reply => r.return(reply.status, JSON.stringify(JSON.parse(reply.responseBody ?? '')))); + let body = await r.subrequest('/p/sub7'); // r.requestText r.requestText == 'a'; @@ -94,6 +95,8 @@ function http_module(r: NginxHTTPRequest .then(body => r.return(200, body)) .catch(e => r.return(501, e.message)) + let response = await ngx.fetch('http://nginx.org/'); + // js_body_filter r.sendBuffer(Buffer.from("xxx"), {last:true}); r.sendBuffer("xxx", {flush: true}); diff -r 328bb7c20642 -r 403f7fe07fe8 ts/tsconfig.json --- a/ts/tsconfig.json Wed Jan 26 17:24:57 2022 +0000 +++ b/ts/tsconfig.json Wed Jan 26 02:44:18 2022 +0100 @@ -1,7 +1,7 @@ { "compilerOptions": { "target": "es5", - "module": "es2015", + "module": "ES2017", "lib": [ "ES2015", "ES2016.Array.Include", From xeioex at nginx.com Thu Jan 27 13:03:27 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Jan 2022 13:03:27 +0000 Subject: [njs] Tests: added support for proper negative test262 tests. Message-ID: details: https://hg.nginx.org/njs/rev/b4998a4f82dc branches: changeset: 1821:b4998a4f82dc user: Dmitry Volyntsev date: Wed Jan 26 17:24:58 2022 +0000 description: Tests: added support for proper negative test262 tests. 
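Editorial note: the shell logic this changeset adds to test/test262 reduces to a small decision table over the child's exit status. A sketch of that table in C (names are illustrative; the real runner is the shell script in the diff below, with NJS_TEST_EXIT_CODE set to 23 for njs binaries):

```c
/* Outcome classification for a test262 run: a normal test passes on
 * status 0; a negative test passes only on the dedicated failure code,
 * so crashes or wrong-reason failures are still reported as failures. */
#include <assert.h>

typedef enum { TEST_PASSED, TEST_FAILED } test_outcome_t;

static test_outcome_t
classify(int status, int negative, int expected_failure_code)
{
    if (status == 0) {
        /* success is a failure for a negative test */
        return negative ? TEST_FAILED : TEST_PASSED;
    }

    if (negative && status == expected_failure_code) {
        return TEST_PASSED;
    }

    return TEST_FAILED;
}
```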
diffstat: test/options | 5 +++++ test/prepare | 3 +++ test/test262 | 21 +++++++++++++-------- 3 files changed, 21 insertions(+), 8 deletions(-) diffs (68 lines): diff -r 403f7fe07fe8 -r b4998a4f82dc test/options --- a/test/options Wed Jan 26 02:44:18 2022 +0100 +++ b/test/options Wed Jan 26 17:24:58 2022 +0000 @@ -59,3 +59,8 @@ do done NJS_TEST_PATHS=${@:-test} + +NJS_TEST_EXIT_CODE=1 +if echo $NJS_TEST_BINARY | grep -q njs; then + NJS_TEST_EXIT_CODE=23 +fi diff -r 403f7fe07fe8 -r b4998a4f82dc test/prepare --- a/test/prepare Wed Jan 26 02:44:18 2022 +0100 +++ b/test/prepare Wed Jan 26 17:24:58 2022 +0000 @@ -7,6 +7,9 @@ njs_includes=`grep 'includes: \[[^]]*]' | sed -e 's/includes: \[//' | sed -e 's/,/ /g' | sed -e 's/\]//'` njs_includes="assert.js sta.js $njs_includes" +njs_paths=`grep 'paths: \[[^]]*]' $njs_test \ + | sed -e 's/paths: \[//' | sed -e 's/ *, */:/g' | sed -e 's/\]//'` + njs_flags=`grep 'flags: \[[^]]*]' $njs_test \ | sed -e 's/flags: \[//' | sed -e 's/,/ /g' | sed -e 's/\]//'` diff -r 403f7fe07fe8 -r b4998a4f82dc test/test262 --- a/test/test262 Wed Jan 26 02:44:18 2022 +0100 +++ b/test/test262 Wed Jan 26 17:24:58 2022 +0000 @@ -19,18 +19,18 @@ for njs_test in $NJS_TESTS; do running $njs_test $njs_log END - if /bin/sh -c "(NJS_TEST_DIR=$NJS_TEST_DIR $NJS_TEST_BINARY $NJS_TEST_DIR/$njs_test)" > $njs_log 2>&1; then - njs_success=yes - else - njs_success=no - fi + status=0 + + NJS_PATH=$njs_paths \ + NJS_EXIT_CODE=$NJS_TEST_EXIT_CODE \ + $NJS_TEST_BINARY $NJS_TEST_DIR/$njs_test > $njs_log 2>&1 || status=$? 
cat $njs_log >> $NJS_TEST_LOG njs_out=`cat $njs_log` - if [ $njs_success = yes ]; then + if [ "$status" -eq 0 ]; then if [ -n "$njs_negative" ]; then - failed $njs_test + failed $njs_test $njs_log elif [ $njs_async = yes ]; then if [ "$njs_out" != 'Test262:AsyncTestComplete' ]; then @@ -51,7 +51,12 @@ END else if [ -n "$njs_negative" ]; then - passed $njs_test + if [ "$status" = "$NJS_TEST_EXIT_CODE" ]; then + passed $njs_test + else + echo "negative test exited with unexpected exit code:$status" + failed $njs_test $njs_log + fi else failed $njs_test $njs_log From xeioex at nginx.com Thu Jan 27 13:03:29 2022 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Thu, 27 Jan 2022 13:03:29 +0000 Subject: [njs] Tests: refactored modules tests using test262 test suite. Message-ID: details: https://hg.nginx.org/njs/rev/4d38ea471228 branches: changeset: 1822:4d38ea471228 user: Dmitry Volyntsev date: Thu Jan 27 13:01:55 2022 +0000 description: Tests: refactored modules tests using test262 test suite. 
diffstat: auto/expect | 6 +- auto/make | 2 +- test/js/import_declaration_exception.t.js | 9 + test/js/import_exception.t.js | 11 + test/js/import_export_comma_expression.t.js | 9 + test/js/import_export_empty.t.js | 9 + test/js/import_export_expression.t.js | 9 + test/js/import_export_multi_default.t.js | 9 + test/js/import_export_non_assignment.t.js | 9 + test/js/import_export_non_default.t.js | 9 + test/js/import_export_object.t.js | 9 + test/js/import_export_ref_exception.t.js | 9 + test/js/import_export_return.t.js | 9 + test/js/import_loading_exception.t.js | 9 + test/js/import_normal.t.js | 31 + test/js/import_not_enough.t.js | 10 + test/js/import_not_found.t.js | 9 + test/js/import_recursive.t.js | 9 + test/js/module/declaration_exception.js | 10 + test/js/module/exception.js | 4 + test/js/module/export.js | 4 + test/js/module/export_comma_expression.js | 5 + test/js/module/export_expression.js | 10 + test/js/module/export_name.js | 6 + test/js/module/export_non_assignment.js | 1 + test/js/module/export_non_default.js | 3 + test/js/module/lib1.js | 26 + test/js/module/lib2.js | 7 + test/js/module/lib3.js | 11 + test/js/module/libs/hash.js | 10 + test/js/module/libs/name.js | 1 + test/js/module/loading_exception.js | 3 + test/js/module/name.js | 1 + test/js/module/ref_exception.js | 1 + test/js/module/return.js | 1 + test/js/module/sub/sub1.js | 12 + test/js/module/sub/sub2.js | 7 + test/module/declaration_exception.js | 10 - test/module/exception.js | 4 - test/module/export.js | 4 - test/module/export_expression.js | 10 - test/module/export_expression2.js | 5 - test/module/export_name.js | 6 - test/module/export_non_assignment.js | 1 - test/module/export_non_default.js | 3 - test/module/lib1.js | 26 - test/module/lib2.js | 7 - test/module/lib3.js | 11 - test/module/libs/hash.js | 10 - test/module/libs/name.js | 1 - test/module/loading_exception.js | 3 - test/module/name.js | 1 - test/module/normal.js | 51 -- test/module/recursive.js | 1 - 
test/module/ref_exception.js | 1 - test/module/return.js | 1 - test/module/sub/sub1.js | 12 - test/module/sub/sub2.js | 7 - test/njs_expect_test.exp | 650 ---------------------------- test/shell_test.exp | 598 +++++++++++++++++++++++++ 60 files changed, 894 insertions(+), 829 deletions(-) diffs (truncated from 1983 to 1000 lines): diff -r b4998a4f82dc -r 4d38ea471228 auto/expect --- a/auto/expect Wed Jan 26 17:24:58 2022 +0000 +++ b/auto/expect Thu Jan 27 13:01:55 2022 +0000 @@ -20,9 +20,9 @@ fi if [ $njs_found = yes -a $NJS_HAVE_READLINE = YES ]; then cat << END >> $NJS_MAKEFILE -expect_test: njs test/njs_expect_test.exp +shell_test: njs test/shell_test.exp INPUTRC=test/inputrc PATH=$NJS_BUILD_DIR:\$(PATH) \ - expect -f test/njs_expect_test.exp + expect -f test/shell_test.exp END else @@ -30,7 +30,7 @@ else cat << END >> $NJS_MAKEFILE -expect_test: +shell_test: @echo "Skipping expect tests" END diff -r b4998a4f82dc -r 4d38ea471228 auto/make --- a/auto/make Wed Jan 26 17:24:58 2022 +0000 +++ b/auto/make Thu Jan 27 13:01:55 2022 +0000 @@ -249,7 +249,7 @@ unit_test: $NJS_BUILD_DIR/njs_auto_confi $NJS_BUILD_DIR/njs_unit_test -test: expect_test unit_test test262 +test: shell_test unit_test test262 benchmark: $NJS_BUILD_DIR/njs_auto_config.h \\ $NJS_BUILD_DIR/njs_benchmark diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_declaration_exception.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_declaration_exception.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'declaration_exception.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_exception.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_exception.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,11 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import lib from 
'lib3.js'; + +lib.exception(); diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_comma_expression.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_comma_expression.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module] +---*/ + +import m from 'export_comma_expression.js'; + +assert.sameValue(m.prod(3,5), 15); diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_empty.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_empty.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'empty.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_expression.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_expression.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module] +---*/ + +import m from 'export_expression.js'; + +assert.sameValue(m.sum(3,4), 7); diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_multi_default.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_multi_default.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'export.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_non_assignment.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_non_assignment.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'export_non_assignment.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_non_default.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ 
b/test/js/import_export_non_default.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'export_non_default.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_object.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_object.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module] +---*/ + +import m from 'export_name.js'; + +assert.sameValue(m.prod(3,4), 12); diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_ref_exception.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_ref_exception.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'ref_exception.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_export_return.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_export_return.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'return.js' diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_loading_exception.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_loading_exception.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module, test/js/module/libs] +negative: + phase: runtime +---*/ + +import m from 'loading_exception.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_normal.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_normal.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,31 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module/, test/js/module/libs/] +---*/ + +import name from 'name.js'; +import lib1 from 'lib1.js'; 
+import lib2 from 'lib2.js'; +import lib1_2 from 'lib1.js'; + +import crypto from 'crypto'; +var h = crypto.createHash('md5'); +var hash = h.update('AB').digest('hex'); + +assert.sameValue(name, "name"); + +assert.sameValue(lib1.name, "libs.name"); + +assert.sameValue(lib1.hash(), hash); +assert.sameValue(lib2.hash(), hash); + +assert.sameValue(lib1.get(), 0); + +assert.sameValue(lib1_2.get(), 0); + +lib1.inc(); + +assert.sameValue(lib1.get(), 1); + +assert.sameValue(lib1_2.get(), 1); diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_not_enough.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_not_enough.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,10 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module] +negative: + phase: runtime +---*/ + +import name from 'name.js'; +import lib1 from 'lib1.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_not_found.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_not_found.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +njs_cmd_args: [] +negative: + phase: runtime +---*/ + +import name from 'name.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/import_recursive.t.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/import_recursive.t.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,9 @@ +/*--- +includes: [] +flags: [] +paths: [test/js/module/] +negative: + phase: runtime +---*/ + +import lib from 'import_recursive.t.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/declaration_exception.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/declaration_exception.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,10 @@ + +function f() { + return 1; +} + +function f() { + return 2; +} + +export default f; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/exception.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/exception.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,4 @@ +import lib from 
'lib3.js'; + +lib.exception(); + diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/export.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/export.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,4 @@ +var a = 1; + +export default {a} +export default {a} diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/export_comma_expression.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/export_comma_expression.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,5 @@ +var _export = {}; + +export default (_export.sum = function(a, b) { return a + b; }, + _export.prod = function(a, b) { return a * b; }, + _export); diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/export_expression.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/export_expression.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,10 @@ +function gen_export() { + var _export = {}; + + _export.sum = function(a, b) { return a + b; } + _export.prod = function(a, b) { return a * b; } + + return _export; +} + +export default gen_export(); diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/export_name.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/export_name.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,6 @@ +var _export = {}; + +_export.sum = function(a, b) { return a + b; } +_export.prod = function(a, b) { return a * b; } + +export default _export; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/export_non_assignment.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/export_non_assignment.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,1 @@ +export default 10, 11; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/export_non_default.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/export_non_default.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,3 @@ +var a = 1; + +export a {a} diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/lib1.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/lib1.js 
Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,26 @@ +var foo = (function(){ + return (function f() {}) +}); + +foo()({1:[]}) + +function hash() { + var h = crypto.createHash('md5'); + var v = h.update('AB').digest('hex'); + return v; +} + +import hashlib from 'hash.js'; +import crypto from 'crypto'; + +var state = {count:0} + +function inc() { + state.count++; +} + +function get() { + return state.count; +} + +export default {hash, inc, get, name: hashlib.name} diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/lib2.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/lib2.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,7 @@ +import lib3 from 'lib3.js'; + +function hash() { + return lib3.hash(); +} + +export default {hash}; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/lib3.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/lib3.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,11 @@ +function hash() { + return sub.hash(); +} + +function exception() { + return sub.error(); +} + +import sub from './sub/sub1.js'; + +export default {hash, exception}; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/libs/hash.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/libs/hash.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,10 @@ +function hash() { + var h = crypto.createHash('md5'); + var v = h.update('AB').digest('hex'); + return v; +} + +import name from 'name.js'; +import crypto from 'crypto'; + +export default {hash, name}; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/libs/name.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/libs/name.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,1 @@ +export default 'libs.name'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/loading_exception.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/loading_exception.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,3 @@ +throw Error('loading exception'); + +export default {}; diff -r b4998a4f82dc -r 
4d38ea471228 test/js/module/name.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/name.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,1 @@ +export default 'name'; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/ref_exception.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/ref_exception.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,1 @@ +export default {type:typeof undeclared, undeclared}; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/return.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/return.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,1 @@ +return 1; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/sub/sub1.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/sub/sub1.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,12 @@ +function hash() { + return sub2.hash(crypto); +} + +function error() { + return {}.a.a; +} + +import sub2 from 'sub2.js'; +import crypto from 'crypto'; + +export default {hash, error}; diff -r b4998a4f82dc -r 4d38ea471228 test/js/module/sub/sub2.js --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/test/js/module/sub/sub2.js Thu Jan 27 13:01:55 2022 +0000 @@ -0,0 +1,7 @@ +function hash(crypto) { + return hashlib.hash(); +} + +import hashlib from 'hash.js'; + +export default {hash}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/declaration_exception.js --- a/test/module/declaration_exception.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,10 +0,0 @@ - -function f() { - return 1; -} - -function f() { - return 2; -} - -export default f; diff -r b4998a4f82dc -r 4d38ea471228 test/module/exception.js --- a/test/module/exception.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,4 +0,0 @@ -import lib from 'lib3.js'; - -lib.exception(); - diff -r b4998a4f82dc -r 4d38ea471228 test/module/export.js --- a/test/module/export.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 
+0000 @@ -1,4 +0,0 @@ -var a = 1; - -export default {a} -export default {a} diff -r b4998a4f82dc -r 4d38ea471228 test/module/export_expression.js --- a/test/module/export_expression.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,10 +0,0 @@ -function gen_export() { - var _export = {}; - - _export.sum = function(a, b) { return a + b; } - _export.prod = function(a, b) { return a * b; } - - return _export; -} - -export default gen_export(); diff -r b4998a4f82dc -r 4d38ea471228 test/module/export_expression2.js --- a/test/module/export_expression2.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,5 +0,0 @@ -var _export = {}; - -export default (_export.sum = function(a, b) { return a + b; }, - _export.prod = function(a, b) { return a * b; }, - _export); diff -r b4998a4f82dc -r 4d38ea471228 test/module/export_name.js --- a/test/module/export_name.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,6 +0,0 @@ -var _export = {}; - -_export.sum = function(a, b) { return a + b; } -_export.prod = function(a, b) { return a * b; } - -export default _export; diff -r b4998a4f82dc -r 4d38ea471228 test/module/export_non_assignment.js --- a/test/module/export_non_assignment.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,1 +0,0 @@ -export default 10, 11; diff -r b4998a4f82dc -r 4d38ea471228 test/module/export_non_default.js --- a/test/module/export_non_default.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,3 +0,0 @@ -var a = 1; - -export a {a} diff -r b4998a4f82dc -r 4d38ea471228 test/module/lib1.js --- a/test/module/lib1.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,26 +0,0 @@ -var foo = (function(){ - return (function f() {}) -}); - -foo()({1:[]}) - -function hash() { - var h = crypto.createHash('md5'); - var v = h.update('AB').digest('hex'); - return v; -} - -import 
hashlib from 'hash.js'; -import crypto from 'crypto'; - -var state = {count:0} - -function inc() { - state.count++; -} - -function get() { - return state.count; -} - -export default {hash, inc, get, name: hashlib.name} diff -r b4998a4f82dc -r 4d38ea471228 test/module/lib2.js --- a/test/module/lib2.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,7 +0,0 @@ -import lib3 from 'lib3.js'; - -function hash() { - return lib3.hash(); -} - -export default {hash}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/lib3.js --- a/test/module/lib3.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,11 +0,0 @@ -function hash() { - return sub.hash(); -} - -function exception() { - return sub.error(); -} - -import sub from './sub/sub1.js'; - -export default {hash, exception}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/libs/hash.js --- a/test/module/libs/hash.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,10 +0,0 @@ -function hash() { - var h = crypto.createHash('md5'); - var v = h.update('AB').digest('hex'); - return v; -} - -import name from 'name.js'; -import crypto from 'crypto'; - -export default {hash, name}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/libs/name.js --- a/test/module/libs/name.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,1 +0,0 @@ -export default 'libs.name'; diff -r b4998a4f82dc -r 4d38ea471228 test/module/loading_exception.js --- a/test/module/loading_exception.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,3 +0,0 @@ -throw Error('loading exception'); - -export default {}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/name.js --- a/test/module/name.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,1 +0,0 @@ -export default 'name'; diff -r b4998a4f82dc -r 4d38ea471228 test/module/normal.js --- a/test/module/normal.js Wed Jan 26 
17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,51 +0,0 @@ -import name from 'name.js'; -import lib1 from 'lib1.js'; -import lib2 from 'lib2.js'; -import lib1_2 from 'lib1.js'; - -import crypto from 'crypto'; -var h = crypto.createHash('md5'); -var hash = h.update('AB').digest('hex'); - -var fails = 0; - -if (name != 'name') { - fails++; -} - -if (lib1.name != 'libs.name') { - fails++; -} - -if (lib1.hash() != hash) { - fails++; -} - -if (lib2.hash() != hash) { - fails++; -} - -if (lib1.get() != 0) { - fails++; -} - -if (lib1_2.get() != 0) { - fails++; -} - -lib1.inc(); - -if (lib1.get() != 1) { - fails++; -} - -if (lib1_2.get() != 1) { - fails++; -} - -if (JSON.stringify({}) != "{}") { - fails++; -} - -setImmediate(console.log, - fails ? "failed: " + fails : "passed!"); diff -r b4998a4f82dc -r 4d38ea471228 test/module/recursive.js --- a/test/module/recursive.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,1 +0,0 @@ -import lib from 'recursive.js'; diff -r b4998a4f82dc -r 4d38ea471228 test/module/ref_exception.js --- a/test/module/ref_exception.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,1 +0,0 @@ -export default {type:typeof undeclared, undeclared}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/return.js --- a/test/module/return.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,1 +0,0 @@ -return 1; diff -r b4998a4f82dc -r 4d38ea471228 test/module/sub/sub1.js --- a/test/module/sub/sub1.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,12 +0,0 @@ -function hash() { - return sub2.hash(crypto); -} - -function error() { - return {}.a.a; -} - -import sub2 from 'sub2.js'; -import crypto from 'crypto'; - -export default {hash, error}; diff -r b4998a4f82dc -r 4d38ea471228 test/module/sub/sub2.js --- a/test/module/sub/sub2.js Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ 
-1,7 +0,0 @@ -function hash(crypto) { - return hashlib.hash(); -} - -import hashlib from 'hash.js'; - -export default {hash}; diff -r b4998a4f82dc -r 4d38ea471228 test/njs_expect_test.exp --- a/test/njs_expect_test.exp Wed Jan 26 17:24:58 2022 +0000 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,650 +0,0 @@ -# -# Copyright (C) Dmitry Volyntsev -# Copyright (C) NGINX, Inc. -# - -proc njs_test {body {opts ""}} { - - if {$opts eq ""} { - spawn -nottycopy njs - - } else { - eval spawn -nottycopy njs $opts - } - - # TODO: - # SIGINT handling race condition - # deb9-amd64-generic-njs-try - # ub1404-armv7-generic-njs-try - # ub1804-arm64-generic-njs-try - # UTF8 terminal support issue - # sol11-amd64-sunpro-njs-try - # ub1604-arm64-generic-njs-try - - # set timeout 30 - # expect_before timeout { exit 1 } - - expect -re "interactive njs \\d+\.\\d+\.\\d+\r\n\r" - expect "v. -> the properties and prototype methods of v.\r -\r ->> " - - set len [llength $body] - for {set i 0} {$i < $len} {incr i} { - set pair [lindex $body $i] - send [lindex $pair 0] - expect [lindex $pair 1] - } - - # Ctrl-C - send \x03 - expect eof -} - -proc njs_run {opts expected_re} { - catch {exec njs {*}$opts} out - if {[regexp $expected_re $out match] == 0} { - return -code error "njs_run: unexpected output '$out' vs '$expected_re'" - } -} - -njs_test { - {"njs.version\r\n" - "njs.version\r\n\*\.\*\.\*"} -} - -# simple multi line interaction -njs_test { - {"var a = 1\r\n" - "var a = 1\r\nundefined\r\n>> "} - {"a *= 2\r\n" - "a *= 2\r\n2\r\n>> "} -} - -# Global completions, no -njs_test { - {"\t\tn" - "\a\r\nDisplay all*possibilities? (y or n)*>> "} -} - -# Global completions, yes -njs_test { - {"\t\ty" - "\a\r\nDisplay all*possibilities? (y or n)*Array"} -} - -# Global completions, single partial match - -# \a* is WORKAROUND for libedit-20170329.3.1-r3 -# which inserts '\rESC[6G' after '\a'. 
-njs_test { - {"O\t" - "O\a*bject"} -} - -njs_test { - {"Ma\t" - "Ma\a*th"} -} - -# FIXME: completions for external objects -# are not supported - -# njs_test { -# {"conso\t" -# "conso\a*le"} -# } - -# Global completions, multiple partial match -njs_test { - {"cons\t\t" - "console*const"} -} - -njs_test { - {"O\t" - "O\a*bject"} - {"\t\t" - "Object.create*Object.isSealed"} -} - -njs_test { - {"Object.\t\t" - "Object.create*Object.isSealed"} -} - -njs_test { - {"Object.g\t" - "Object.g\a*et"} - {"\t\t" - "Object.getOwnPropertyDescriptor*Object.getPrototypeOf"} -} - -njs_test { - {"JS\t" - "JS\a*ON"} - {"\t\t" - "JSON.parse*JSON.stringify"} -} - -# Global completions, no matches -njs_test { - {"1.\t\t" - "1."} -} - -njs_test { - {"1..\t\t" - "1.."} -} - -njs_test { - {"'abc'.\t\t" - "'abc'."} -} - -# Global completions, global vars -njs_test { - {"var a = 1; var aa = 2\r\n" - "var a = 1; var aa = 2\r\nundefined\r\n>> "} - {"a\t\t" - "a*aa*arguments*await"} -} - -# z*z is WORKAROUND for libedit-20170329.3.1-r3 -# which inserts bogus '\a' between 'z' -njs_test { - {"var zz = 1\r\n" - "var zz = 1\r\nundefined\r\n>> "} - {"1 + z\t\r\n" - "1 + z*z*\r\n2"} -} - -njs_test { - {"unknown_var\t\t" - "unknown_var"} -} - -njs_test { - {"unknown_var.\t\t" - "unknown_var."} -} - -# An object's level completions -njs_test { - {"var o = {zz:1, zb:2}\r\n" - "var o = {zz:1, zb:2}\r\nundefined\r\n>> "} - {"o.z\t\t" - "o.zb*o.zz"} -} - -njs_test { - {"var d = new Date()\r\n" - "var d = new Date()\r\nundefined\r\n>> "} - {"d.to\t\t" - "d.toDateString*d.toLocaleDateString*d.toString"} -} - -njs_test { - {"var o = {a:new Date()}\r\n" - "var o = {a:new Date()}\r\nundefined\r\n>> "} - {"o.a.to\t\t" - "o.a.toDateString*o.a.toLocaleDateString*o.a.toString"} -} - -njs_test { - {"var o = {a:1,b:2,333:'t'}\r\n" - "var o = {a:1,b:2,333:'t'}\r\nundefined\r\n>> "} - {"o.3\t\t" - "o.3"} -} - -njs_test { - {"var a = Array(5000000); a.aab = 1; a.aac = 2\r\n" - "var a = Array(5000000); a.aab = 1; a.aac 
= 2\r\n2\r\n>> "} - {"a.\t\t" - "a.aab*"} -} - -njs_test { - {"var a = new Uint8Array([5,6,7,8,8]); a.aab = 1; a.aac = 2\r\n" - "var a = new Uint8Array(\\\[5,6,7,8,8]); a.aab = 1; a.aac = 2\r\n2\r\n>> "} - {"a.\t\t" - "a.aab*"} -} - -# function declarations in interactive mode -njs_test { - {"function a() { return 1; }\r\n" - "undefined\r\n>> "} - {"a();\r\n" - "1\r\n>> "} - {"function a() { return 2; }\r\n" - "undefined\r\n>> "} - {"a();\r\n" - "2\r\n>> "} -} - -# console object -njs_test { - {"console[Symbol.toStringTag]\r\n" - "console\\\[Symbol.toStringTag]\r\n'Console'\r\n>> "} - {"Object.prototype.toString.call(console)\r\n" - "Object.prototype.toString.call(console)\r\n'\\\[object Console]'\r\n>> "} - {"console.toString()\r\n" - "console.toString()\r\n'\\\[object Console]'\r\n>> "} - {"console\r\n" - "console\r\nConsole *>> "} - {"delete console.log\r\n" - "delete console.log\r\ntrue\r\n>>"} - {"console\r\n" - "console\r\nConsole *>> "} -} - -# console log functions -njs_test { - {"console[Symbol.toStringTag]\r\n" - "console\\\[Symbol.toStringTag]\r\n'Console'\r\n>> "} - {"console\r\n" - "console\r\nConsole *>> "} - {"console.log()\r\n" - "console.log()\r\nundefined\r\n>> "} - {"console.log('')\r\n" - "console.log('')\r\n\r\nundefined\r\n>> "} - {"console.log(1)\r\n" - "console.log(1)\r\n1\r\nundefined\r\n>> "} - {"console.log(1, 'a')\r\n" - "console.log(1, 'a')\r\n1 a\r\nundefined\r\n>> "} - {"print(1, 'a')\r\n" - "print(1, 'a')\r\n1 a\r\nundefined\r\n>> "} - {"console.log('\\tабв\\nгд')\r\n" - "console.log('\\\\tабв\\\\nгд')\r\n\tабв\r\nгд\r\nundefined\r\n>> "} - {"console.dump()\r\n" - "console.dump()\r\nundefined\r\n>> "} - {"console.dump(1)\r\n" - "console.dump(1)\r\n1\r\nundefined\r\n>> "} - {"console.dump(1, 'a')\r\n" - "console.dump(1, 'a')\r\n1 a\r\nundefined\r\n>> "} -} - From mdounin at mdounin.ru Thu Jan 27 13:27:59 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Jan 2022 16:27:59 +0300 Subject: request body filter last_buf 
In-Reply-To: References: Message-ID: Hello! On Wed, Jan 26, 2022 at 08:37:25PM -0800, Dk Jack wrote: > Thanks Maxim, > Are there any other situations where last_buf would not be set besides the > case of content-length being zero? Probably not. -- Maxim Dounin http://mdounin.ru/ From vl at nginx.com Thu Jan 27 13:46:17 2022 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 27 Jan 2022 16:46:17 +0300 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com> Message-ID: On Thu, Jan 27, 2022 at 04:33:08AM +0000, Gao,Yan(媒体云) wrote: > > The main quic connection is created in ngx_quic_new_connection(), which > > calls ngx_quic_open_sockets() and it sets c->udp for the first time. > > > When packet arrives, c->udp is updated by ngx_lookup_udp_connection(). > > > The main connection does not have c->quic set; this is used in stream > > connections. To access main connection from quic stream, c->quic->parent > > may be used. > > ngx_event_recvmsg->(ls->handler) ngx_http_init_connection->ngx_http_v3_init: > if (c->quic == NULL) { > h3scf->quic.timeout = clcf->keepalive_timeout; > ngx_quic_run(c, &h3scf->quic); > return; > } > > And, why check c->quic == NULL, as it is never set first time you get there with main nginx connection, when a first QUIC packet arrives. Thus test c->quic. and if it is NULL it means we need to create main quic connection and proceed with the handshake. When the handshake is complete, a stream will be created, and the ngx_quic_init_stream_handler() will be called which will invoke listening handler, and we will return into ngx_http_v3_init() with stream connection that has c->quic set and follow the other path. 
From dnj0496 at gmail.com Thu Jan 27 19:02:45 2022
From: dnj0496 at gmail.com (Dk Jack)
Date: Thu, 27 Jan 2022 11:02:45 -0800
Subject: request body filter last_buf
In-Reply-To: References: Message-ID:

Thank you. Last few questions to complete my understanding:

Were the module body filter callbacks being invoked even when content_length_n <= 0 prior to the 7913:185c86b830ef change?

Is the nginx body handling a store-and-forward architecture, where it waits for the entire body, or does it forward packets as it receives them (especially when dealing with large bodies)? What is the behavior of last_buf if it's not store-and-forward?

On Thu, Jan 27, 2022 at 5:28 AM Maxim Dounin wrote:
> Hello!
>
> On Wed, Jan 26, 2022 at 08:37:25PM -0800, Dk Jack wrote:
>
> > Thanks Maxim,
> > Are there any other situations where last_buf would not be set besides
> the
> > case of content-length being zero?
>
> Probably not.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list -- nginx-devel at nginx.org
> To unsubscribe send an email to nginx-devel-leave at nginx.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gaoyan09 at baidu.com Fri Jan 28 03:29:06 2022
From: gaoyan09 at baidu.com (Gao,Yan(媒体云))
Date: Fri, 28 Jan 2022 03:29:06 +0000
Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null
In-Reply-To: References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com>
Message-ID: <2BA3BAA6-C01A-4B97-8A10-DD77BC56ADE0@baidu.com>

> first time you get there with main nginx connection, when a first QUIC
> packet arrives. Thus test c->quic. and if it is NULL it means we need
> When the handshake is complete, a stream will be created, and the
> ngx_quic_init_stream_handler() will be called which will invoke
> listening handler, and we will return into ngx_http_v3_init() with
> stream connection that has c->quic set and follow the other path.

Yes, I understand. But given what you said about the stream connection having c->quic set: when is c->quic set on the main nginx connection? ngx_ssl_shutdown and ngx_http_v3_init check c->quic == NULL, but on the main connection it is never set. Is that not a problem?

Gao,Yan(ACG VCP)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From devashi.tandon at appsentinels.ai Fri Jan 28 06:13:45 2022
From: devashi.tandon at appsentinels.ai (Devashi Tandon)
Date: Fri, 28 Jan 2022 06:13:45 +0000
Subject: Using single persistent socket to send subrequests
Message-ID:

Hi,

Was wondering if this question is more suited for the development forum, since I didn't receive any response on the user forum. Repeating the question below:

I tried clearing the Connection header, but NGINX is still sending the 5th request through a new source port. Let me give the more detailed configuration we have. Just to inform you, we have our own auth module instead of using the NGINX auth module. We call ngx_http_post_request to post subrequests, and the code is almost the same as that of the auth module. For the subrequest sent by the auth module with the following configuration, we expect NGINX to send requests through a new port for the first four connections and then reuse one of the ports for the fifth connection, especially when the requests are sequential.
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65s;
    include /etc/nginx/conf.d/*.conf;
    proxy_socket_keepalive on;

    server {
        listen 9000;
        server_name front-service;
        ext_auth_fail_allow on;
        error_log /var/log/nginx/error.log debug;

        location / {
            ext_auth_request /auth;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://localhost:8090;

            location /auth {
                internal;
                proxy_set_header X-Req-Uri $request_uri;
                proxy_set_header X-Method $request_method;
                proxy_set_header X-Req-Host $host;
                proxy_set_header X-Client-Addr $remote_addr:$remote_port;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_connect_timeout 5000ms;
                proxy_read_timeout 5000ms;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_pass http://ext-authz-upstream-server;
            }
        }
    }

    upstream ext-authz-upstream-server {
        server 172.20.10.6:9006;
        keepalive 4;
    }
}

Could you please help on what we are missing?

Thanks,
Devashi

Date: Mon, 24 Jan 2022 17:56:33 +0300
From: "Sergey A. Osokin"
Subject: Re: Using single persistent socket to send subrequests
To: nginx at nginx.org
Message-ID:
Content-Type: text/plain; charset=utf-8

Hi Devashi,

On Mon, Jan 24, 2022 at 05:52:56AM +0000, Devashi Tandon wrote:
>
> We have the following configuration:
>
> location / {
>     proxy_http_version 1.1;
>     proxy_pass http://ext-authz-upstream-server;
> }
>
> upstream ext-authz-upstream-server {
>     server 172.20.10.6:9006;
>     keepalive 4;
> }
>
> Do I need to add any other configuration to reuse the first four socket connections besides keepalive 4?
You'd need to review and slightly update the `location /' configuration block by adding the following directive: proxy_set_header Connection ""; Please visit the following link to get more details: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive -- Sergey Osokin -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Fri Jan 28 08:58:39 2022 From: vl at nginx.com (Vladimir Homutov) Date: Fri, 28 Jan 2022 11:58:39 +0300 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: <2BA3BAA6-C01A-4B97-8A10-DD77BC56ADE0@baidu.com> References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com> <2BA3BAA6-C01A-4B97-8A10-DD77BC56ADE0@baidu.com> Message-ID: On Fri, Jan 28, 2022 at 03:29:06AM +0000, Gao,Yan(媒体云) wrote: > > first time you get there with main nginx connection, when a first QUIC > > packet arrives. Thus test c->quic. and if it is NULL it means we need > > to create main quic connection and proceed with the handshake. > > > When the handshake is complete, a stream will be created, and the > > ngx_quic_init_stream_handler() will be called which will invoke > > listening handler, and we will return into ngx_http_v3_init() with > > stream connection that has c->quic set and follow the other path. > > Yes, I understand. But what you said, as stream connection that has c->quic set, when main nginx connection c->quic set? > ngx_ssl_shutdown and ngx_http_v3_init check c->quic == NULL, but it is never set. > No problem? c->quic is never set on main connection (it is not really needed there). ngx_http_v3_init() is first called with main connection, and later it is called with _another_ connection that is a stream, and it has c->quic set. 
ngx_ssl_shutdown() is not supposed to do something on stream connections, ssl object is shared with main connection. all necessary cleanup will be done by main connection handlers. From gaoyan09 at baidu.com Fri Jan 28 10:15:15 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Fri, 28 Jan 2022 10:15:15 +0000 Subject: all udp connection event should not add to events group Message-ID: <4A34ED94-BA3C-4ABF-B59D-A1C19ABE55C4@baidu.com> # HG changeset patch # User Gao,Yan(ACG VCP) # Date 1643364731 -28800 # Fri Jan 28 18:12:11 2022 +0800 # Branch quic # Node ID ea58c4329a4b03594737cbe8af1003366f9d1160 # Parent 30cad5a0931e5fd418e2e304b4a6ed5252d39aa2 all udp connection event should not add to events group diff -r 30cad5a0931e -r ea58c4329a4b src/event/ngx_event.c --- a/src/event/ngx_event.c Thu Jan 27 13:14:01 2022 +0300 +++ b/src/event/ngx_event.c Fri Jan 28 18:12:11 2022 +0800 @@ -267,18 +267,14 @@ ngx_int_t ngx_handle_read_event(ngx_event_t *rev, ngx_uint_t flags) { -#if (NGX_QUIC) - ngx_connection_t *c; c = rev->data; - if (c->quic) { - return ngx_quic_handle_read_event(rev, flags); + if (c->udp) { + return ngx_udp_handle_read_event(rev, flags); } -#endif - if (ngx_event_flags & NGX_USE_CLEAR_EVENT) { /* kqueue, epoll */ @@ -351,11 +347,9 @@ c = wev->data; -#if (NGX_QUIC) - if (c->quic) { - return ngx_quic_handle_write_event(wev, lowat); + if (c->udp) { + return ngx_udp_handle_write_event(wev, lowat); } -#endif if (lowat) { if (ngx_send_lowat(c, lowat) == NGX_ERROR) { diff -r 30cad5a0931e -r ea58c4329a4b src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Thu Jan 27 13:14:01 2022 +0300 +++ b/src/event/ngx_event_openssl.c Fri Jan 28 18:12:11 2022 +0800 @@ -3149,12 +3149,13 @@ ngx_err_t err; ngx_uint_t tries; -#if (NGX_QUIC) - if (c->quic) { - /* QUIC streams inherit SSL object */ + if (c->udp) { + /* + QUIC streams inherit SSL object + pure UDP sessions cannot handle SSL object + */ return NGX_OK; } -#endif rc = NGX_OK; diff 
-r 30cad5a0931e -r ea58c4329a4b src/event/ngx_event_udp.c --- a/src/event/ngx_event_udp.c Thu Jan 27 13:14:01 2022 +0300 +++ b/src/event/ngx_event_udp.c Fri Jan 28 18:12:11 2022 +0800 @@ -639,3 +639,31 @@ } #endif + + +ngx_int_t +ngx_udp_handle_read_event(ngx_event_t *rev, ngx_uint_t flags) +{ + if (!rev->active && !rev->ready) { + rev->active = 1; + + } else if (rev->active && (rev->ready || (flags & NGX_CLOSE_EVENT))) { + rev->active = 0; + } + + return NGX_OK; +} + + +ngx_int_t +ngx_udp_handle_write_event(ngx_event_t *wev, size_t lowat) +{ + if (!wev->active && !wev->ready) { + wev->active = 1; + + } else if (wev->active && wev->ready) { + wev->active = 0; + } + + return NGX_OK; +} diff -r 30cad5a0931e -r ea58c4329a4b src/event/ngx_event_udp.h --- a/src/event/ngx_event_udp.h Thu Jan 27 13:14:01 2022 +0300 +++ b/src/event/ngx_event_udp.h Fri Jan 28 18:12:11 2022 +0800 @@ -72,5 +72,7 @@ void ngx_delete_udp_connection(void *data); +ngx_int_t ngx_udp_handle_read_event(ngx_event_t *rev, ngx_uint_t flags); +ngx_int_t ngx_udp_handle_write_event(ngx_event_t *wev, size_t lowat); #endif /* _NGX_EVENT_UDP_H_INCLUDED_ */ diff -r 30cad5a0931e -r ea58c4329a4b src/event/quic/ngx_event_quic.h --- a/src/event/quic/ngx_event_quic.h Thu Jan 27 13:14:01 2022 +0300 +++ b/src/event/quic/ngx_event_quic.h Fri Jan 28 18:12:11 2022 +0800 @@ -77,8 +77,6 @@ const char *reason); ngx_int_t ngx_quic_reset_stream(ngx_connection_t *c, ngx_uint_t err); ngx_int_t ngx_quic_shutdown_stream(ngx_connection_t *c, int how); -ngx_int_t ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags); -ngx_int_t ngx_quic_handle_write_event(ngx_event_t *wev, size_t lowat); ngx_int_t ngx_quic_get_packet_dcid(ngx_log_t *log, u_char *data, size_t len, ngx_str_t *dcid); ngx_int_t ngx_quic_derive_key(ngx_log_t *log, const char *label, diff -r 30cad5a0931e -r ea58c4329a4b src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c Thu Jan 27 13:14:01 2022 +0300 +++ 
b/src/event/quic/ngx_event_quic_streams.c Fri Jan 28 18:12:11 2022 +0800 @@ -1569,31 +1569,3 @@ return NGX_OK; } - - -ngx_int_t -ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags) -{ - if (!rev->active && !rev->ready) { - rev->active = 1; - - } else if (rev->active && (rev->ready || (flags & NGX_CLOSE_EVENT))) { - rev->active = 0; - } - - return NGX_OK; -} - - -ngx_int_t -ngx_quic_handle_write_event(ngx_event_t *wev, size_t lowat) -{ - if (!wev->active && !wev->ready) { - wev->active = 1; - - } else if (wev->active && wev->ready) { - wev->active = 0; - } - - return NGX_OK; -} Gao,Yan(ACG VCP) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaoyan09 at baidu.com Fri Jan 28 14:09:31 2022 From: gaoyan09 at baidu.com (=?utf-8?B?R2FvLFlhbijlqpLkvZPkupEp?=) Date: Fri, 28 Jan 2022 14:09:31 +0000 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: <2BA3BAA6-C01A-4B97-8A10-DD77BC56ADE0@baidu.com> References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com> <2BA3BAA6-C01A-4B97-8A10-DD77BC56ADE0@baidu.com> Message-ID: > c->quic is never set on main connection (it is not really needed there). > ngx_http_v3_init() is first called with main connection, and later it is > called with _another_ connection that is a stream, and it has c->quic set. > ngx_ssl_shutdown() is not supposed to do something on stream > connections, ssl object is shared with main connection. all necessary > cleanup will be done by main connection handlers. ngx_http_v3_init() is only called in ngx_http_init_connection, as ls->handler. And then ngx_quic_listen add the main quic connection to udp rbtree. It call main quic connection read->handler If find connection in ngx_lookup_udp_connection, else call ls->handler. 
But when is ngx_http_v3_init() called with _another_ connection that is a stream?

Gao,Yan(ACG VCP)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jiri.setnicka at cdn77.com Fri Jan 28 16:31:52 2022
From: jiri.setnicka at cdn77.com (Jiří Setnička)
Date: Fri, 28 Jan 2022 17:31:52 +0100
Subject: [PATCH 00 of 15] Serve all requests from single tempfile
Message-ID:

Hello!

Over the last few months, we (a small team of developers including me and Jan Prachař, both from CDN77) developed a missing feature for proxy caching in Nginx. We are happy to share this feature with the community in the following patch series.

We serve a large number of files to an immense number of clients, and often multiple clients want the same file at the very same time - especially when it comes to streaming (when a file is crafted on the upstream in real time and getting it could take seconds). Previously there were two options in Nginx when using proxy caching:

* pass all incoming requests to the origin
* use the proxy_cache_lock feature: pass only the first request (served in real time) and let the other requests wait until the first request completes

We didn't like either of these options (the first one effectively disables the CDN and the second one is unusable for streaming). We considered using Varnish, which solves this problem better, but we are very happy with the Nginx infrastructure we have. So we came up with a third option.

We developed the proxy_cache_tempfile mechanism, which acts similarly to proxy_cache_lock, but instead of locking other requests until the first request completes, we open the tempfile used by the primary request and periodically serve parts of it to the waiting requests.
Because there may be multiple tempfiles for the same file (for example when the file expires before it is fully downloaded), we use shared memory per cache with an `ngx_http_file_cache_tf_node_t` for each created tempfile to synchronize all workers. When a new request is passed to the origin, we record its tempfile number, and when another request is received, we try to open the tempfile with this number and serve from it. Once a secondary request starts using a tempfile, it sticks with this same tempfile until its completion.

To accomplish this we rely on the POSIX filesystem feature whereby you can open a file and retain its file descriptor even after it is moved to a new location (on the same filesystem). I'm afraid this would be hard to accomplish on Windows, so this feature will be non-Windows only.

We tested this feature thoroughly for the last few months and we already use it in part of our infrastructure without noticing any negative impact. We noticed only a very small increase in memory usage and a minimal increase in CPU and disk IO usage (which corresponds with the increased throughput of the server).

We also did some synthetic benchmarks where we compared vanilla nginx and our patched version, with and without cache lock and with cache tempfiles. Results of the benchmarks, charts, and scripts we used for them are available on my Github: https://github.com/setnicka/nginx-tempfiles-benchmark

It should also work for fastcgi, uwsgi, and scgi caches (as it uses the same mechanism internally), but we didn't test these.

New config:

* proxy_cache_tempfile on; -- activate the whole tempfile logic
* proxy_cache_tempfile_timeout 5s; -- how long to wait for a tempfile before returning 504
* proxy_cache_tempfile_loop 50ms; -- loop interval for checking tempfiles

(and the same for fastcgi_cache, uwsgi_cache and scgi_cache)

New option for proxy_cache_path: tf_zone=name:size (defaults to the key zone name with a _tf suffix and 10M size).
It creates a shared memory zone used to store tempfiles nodes. We would be very grateful for any reviews and other testing. Jiří Setnička CDN77 From jiri.setnicka at cdn77.com Fri Jan 28 16:31:53 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:53 +0100 Subject: [PATCH 01 of 15] ngx core - obtain number appended to the temp file name In-Reply-To: References: Message-ID: # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID ca39d3040e2a9c37eb0940c5880fead78d5d137c # Parent c0a432c0301b89816ec379a7e5a754a4777008b2 ngx core - obtain number appended to the temp file name diff --git a/src/core/ngx_file.c b/src/core/ngx_file.c --- a/src/core/ngx_file.c +++ b/src/core/ngx_file.c @@ -111,8 +111,9 @@ ngx_write_chain_to_temp_file(ngx_temp_fi ngx_int_t rc; if (tf->file.fd == NGX_INVALID_FILE) { - rc = ngx_create_temp_file(&tf->file, tf->path, tf->pool, - tf->persistent, tf->clean, tf->access); + rc = ngx_create_temp_file_n(&tf->file, tf->path, tf->pool, + tf->persistent, tf->clean, tf->access, + &tf->suffix_number); if (rc != NGX_OK) { return rc; @@ -141,6 +142,15 @@ ngx_int_t ngx_create_temp_file(ngx_file_t *file, ngx_path_t *path, ngx_pool_t *pool, ngx_uint_t persistent, ngx_uint_t clean, ngx_uint_t access) { + return ngx_create_temp_file_n(file, path, pool, persistent, clean, access, + NULL); +} + + +ngx_int_t +ngx_create_temp_file_n(ngx_file_t *file, ngx_path_t *path, ngx_pool_t *pool, + ngx_uint_t persistent, ngx_uint_t clean, ngx_uint_t access, uint32_t *nn) +{ size_t levels; u_char *p; uint32_t n; @@ -213,6 +223,10 @@ ngx_create_temp_file(ngx_file_t *file, n clnf->name = file->name.data; clnf->log = pool->log; + if (nn != NULL) { + *nn = n; + } + return NGX_OK; } diff --git a/src/core/ngx_file.h b/src/core/ngx_file.h --- a/src/core/ngx_file.h +++ b/src/core/ngx_file.h @@ -76,6 +76,7 @@ typedef struct { char *warn; ngx_uint_t access; + uint32_t suffix_number; 
unsigned log_level:8; unsigned persistent:1; @@ -139,6 +140,9 @@ ssize_t ngx_write_chain_to_temp_file(ngx ngx_int_t ngx_create_temp_file(ngx_file_t *file, ngx_path_t *path, ngx_pool_t *pool, ngx_uint_t persistent, ngx_uint_t clean, ngx_uint_t access); +ngx_int_t ngx_create_temp_file_n(ngx_file_t *file, ngx_path_t *path, + ngx_pool_t *pool, ngx_uint_t persistent, ngx_uint_t clean, + ngx_uint_t access, uint32_t *n); void ngx_create_hashed_filename(ngx_path_t *path, u_char *file, size_t len); ngx_int_t ngx_create_path(ngx_file_t *file, ngx_path_t *path); ngx_err_t ngx_create_full_path(u_char *dir, ngx_uint_t access); From jiri.setnicka at cdn77.com Fri Jan 28 16:31:54 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:54 +0100 Subject: [PATCH 02 of 15] ngx core - ensure that the tempfile number is never 0 In-Reply-To: References: Message-ID: <64ff9068a0bd89712a0a.1643387514@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 64ff9068a0bd89712a0ae6fc589a881869663642 # Parent ca39d3040e2a9c37eb0940c5880fead78d5d137c ngx core - ensure that the tempfile number is never 0 To be able to use 0 as a special value meaning "no tempfile". diff --git a/src/core/ngx_file.c b/src/core/ngx_file.c --- a/src/core/ngx_file.c +++ b/src/core/ngx_file.c @@ -365,7 +365,9 @@ ngx_next_temp_number(ngx_uint_t collisio add = collision ? 
ngx_random_number : 1; - n = ngx_atomic_fetch_add(ngx_temp_number, add); + do { + n = ngx_atomic_fetch_add(ngx_temp_number, add); + } while (n + add == 0); return n + add; } From jiri.setnicka at cdn77.com Fri Jan 28 16:31:55 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:55 +0100 Subject: [PATCH 03 of 15] Cache: Shared memory for tempfile nodes In-Reply-To: References: Message-ID: <535e503156cf141bf947.1643387515@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 535e503156cf141bf9471895468423e82f68c8bb # Parent 64ff9068a0bd89712a0ae6fc589a881869663642 Cache: Shared memory for tempfile nodes New option for `proxy_cache_path`: `tf_zone=name:size` (defaults to the key zone name with a `_tf` suffix and 10M size). It creates a shared memory zone used to store tempfile nodes. It will be used to track the updated state of multiple tempfiles of the same cache file. diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -156,10 +156,20 @@ typedef struct { } ngx_http_file_cache_sh_t; +typedef struct { + ngx_rbtree_t rbtree; + ngx_rbtree_node_t sentinel; + ngx_uint_t count; +} ngx_http_file_cache_tf_sh_t; + + struct ngx_http_file_cache_s { ngx_http_file_cache_sh_t *sh; ngx_slab_pool_t *shpool; + ngx_http_file_cache_tf_sh_t *tf_sh; + ngx_slab_pool_t *tf_shpool; + ngx_path_t *path; off_t min_free; @@ -181,6 +191,7 @@ struct ngx_http_file_cache_s { ngx_msec_t manager_threshold; ngx_shm_zone_t *shm_zone; + ngx_shm_zone_t *tf_shm_zone; ngx_uint_t use_temp_path; /* unsigned use_temp_path:1 */ diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -172,6 +172,61 @@ ngx_http_file_cache_init(ngx_shm_zone_t } +static ngx_int_t +ngx_http_file_cache_init_tf(ngx_shm_zone_t *shm_zone, void *data) 
+{ + ngx_http_file_cache_t *ocache = data; + + size_t len; + ngx_http_file_cache_t *cache; + + cache = shm_zone->data; + + if (ocache) { + /* cache path and level already checked by ngx_http_file_cache_init */ + + cache->tf_sh = ocache->tf_sh; + cache->tf_shpool = ocache->tf_shpool; + + return NGX_OK; + } + + cache->tf_shpool = (ngx_slab_pool_t *) shm_zone->shm.addr; + + if (shm_zone->shm.exists) { + cache->tf_sh = cache->tf_shpool->data; + + return NGX_OK; + } + + cache->tf_sh = ngx_slab_alloc(cache->tf_shpool, sizeof(ngx_http_file_cache_tf_sh_t)); + if (cache->tf_sh == NULL) { + return NGX_ERROR; + } + + cache->tf_shpool->data = cache->tf_sh; + + ngx_rbtree_init(&cache->tf_sh->rbtree, &cache->tf_sh->sentinel, + ngx_http_file_cache_rbtree_insert_value); + + cache->tf_sh->count = 0; + + len = sizeof(" in cache tf zone \"\"") + shm_zone->shm.name.len; + + cache->tf_shpool->log_ctx = ngx_slab_alloc(cache->tf_shpool, len); + if (cache->tf_shpool->log_ctx == NULL) { + return NGX_ERROR; + } + + ngx_sprintf(cache->tf_shpool->log_ctx, " in cache tf zone \"%V\"%Z", + &shm_zone->shm.name); + + cache->tf_shpool->log_nomem = 0; + + return NGX_OK; +} + + ngx_int_t ngx_http_file_cache_new(ngx_http_request_t *r) { @@ -2322,8 +2377,8 @@ ngx_http_file_cache_set_slot(ngx_conf_t off_t max_size, min_free; u_char *last, *p; time_t inactive; - ssize_t size; - ngx_str_t s, name, *value; + ssize_t size, tf_size; + ngx_str_t s, name, tf_name, *value; ngx_int_t loader_files, manager_files; ngx_msec_t loader_sleep, manager_sleep, loader_threshold, manager_threshold; @@ -2354,7 +2409,9 @@ ngx_http_file_cache_set_slot(ngx_conf_t manager_threshold = 200; name.len = 0; + tf_name.len = 0; size = 0; + tf_size = 10000000; max_size = NGX_MAX_OFF_T_VALUE; min_free = 0; @@ -2462,6 +2519,40 @@ ngx_http_file_cache_set_slot(ngx_conf_t continue; } + if (ngx_strncmp(value[i].data, "tf_zone=", 8) == 0) { + + tf_name.data = value[i].data + 8; + + p = (u_char *) ngx_strchr(tf_name.data, ':'); + + if (p == 
NULL) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid tf_zone size \"%V\"", &value[i]); + return NGX_CONF_ERROR; + } + + tf_name.len = p - tf_name.data; + + s.data = p + 1; + s.len = value[i].data + value[i].len - s.data; + + tf_size = ngx_parse_size(&s); + + if (tf_size == NGX_ERROR) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "invalid tf_zone size \"%V\"", &value[i]); + return NGX_CONF_ERROR; + } + + if (tf_size < (ssize_t) (2 * ngx_pagesize)) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "tf_zone \"%V\" is too small", &value[i]); + return NGX_CONF_ERROR; + } + + continue; + } + if (ngx_strncmp(value[i].data, "inactive=", 9) == 0) { s.len = value[i].len - 9; @@ -2611,6 +2702,17 @@ ngx_http_file_cache_set_slot(ngx_conf_t return NGX_CONF_ERROR; } + if (tf_name.len == 0) { + tf_name.len = name.len + 3; + tf_name.data = ngx_alloc(tf_name.len, cf->log); + if (tf_name.data == NULL) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "Cannot allocate tf_zone name"); + return NGX_CONF_ERROR; + } + ngx_sprintf(tf_name.data, "%V_tf", &name); + } + cache->path->manager = ngx_http_file_cache_manager; cache->path->loader = ngx_http_file_cache_loader; cache->path->data = cache; @@ -2642,6 +2744,21 @@ ngx_http_file_cache_set_slot(ngx_conf_t cache->shm_zone->init = ngx_http_file_cache_init; cache->shm_zone->data = cache; + cache->tf_shm_zone = ngx_shared_memory_add(cf, &tf_name, tf_size, cmd->post); + if (cache->tf_shm_zone == NULL) { + return NGX_CONF_ERROR; + } + + if (cache->tf_shm_zone->data) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "duplicate zone \"%V\"", &tf_name); + return NGX_CONF_ERROR; + } + + + cache->tf_shm_zone->init = ngx_http_file_cache_init_tf; + cache->tf_shm_zone->data = cache; + cache->use_temp_path = use_temp_path; cache->inactive = inactive; From jiri.setnicka at cdn77.com Fri Jan 28 16:31:56 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:56 +0100 Subject: [PATCH 04 of 15] Cache: 
tf_node for tracking opened tempfile In-Reply-To: References: Message-ID: <2488cf77a1cc6c7f4869.1643387516@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 2488cf77a1cc6c7f48696816f085c71e49355b72 # Parent 535e503156cf141bf9471895468423e82f68c8bb Cache: tf_node for tracking opened tempfile Lookup and cleanup. diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -61,6 +61,14 @@ typedef struct { ngx_msec_t lock_time; } ngx_http_file_cache_node_t; +typedef struct { + ngx_rbtree_node_t node; + off_t length; + unsigned count:20; + unsigned updated:1; + unsigned done:1; +} ngx_http_file_cache_tf_node_t; + struct ngx_http_cache_s { ngx_file_t file; @@ -91,6 +99,8 @@ struct ngx_http_cache_s { ngx_uint_t valid_msec; ngx_uint_t vary_tag; + ngx_http_file_cache_tf_node_t *tf_node; + ngx_buf_t *buf; ngx_http_file_cache_t *file_cache; diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -34,6 +34,10 @@ static ngx_int_t ngx_http_file_cache_nam ngx_path_t *path); static ngx_http_file_cache_node_t * ngx_http_file_cache_lookup(ngx_http_file_cache_t *cache, u_char *key); +static ngx_http_file_cache_tf_node_t * ngx_http_file_cache_tf_lookup( + ngx_http_file_cache_t *cache, uint32_t tf_number); +static void ngx_http_file_cache_tf_delete(ngx_http_cache_t *c, + ngx_http_file_cache_t *cache); static void ngx_http_file_cache_rbtree_insert_value(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel); static void ngx_http_file_cache_vary(ngx_http_request_t *r, u_char *vary, @@ -1066,6 +1070,53 @@ ngx_http_file_cache_lookup(ngx_http_file } +static ngx_http_file_cache_tf_node_t * +ngx_http_file_cache_tf_lookup(ngx_http_file_cache_t *cache, uint32_t tf_number) +{ + ngx_rbtree_key_t node_key; + ngx_rbtree_node_t *node, 
*sentinel; + + node_key = tf_number; + + node = cache->tf_sh->rbtree.root; + sentinel = cache->tf_sh->rbtree.sentinel; + + while (node != sentinel) { + + if (node_key < node->key) { + node = node->left; + continue; + } + + if (node_key > node->key) { + node = node->right; + continue; + } + + return (ngx_http_file_cache_tf_node_t *) node; + } + + /* not found */ + + return NULL; +} + + +static void +ngx_http_file_cache_tf_delete(ngx_http_cache_t *c, ngx_http_file_cache_t *cache) +{ + c->tf_node->count--; + + if (c->tf_node->count == 0) { + ngx_rbtree_delete(&cache->tf_sh->rbtree, &c->tf_node->node); + ngx_slab_free_locked(cache->tf_shpool, c->tf_node); + cache->tf_sh->count--; + } + + c->tf_node = NULL; +} + + static void ngx_http_file_cache_rbtree_insert_value(ngx_rbtree_node_t *temp, ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel) @@ -1283,6 +1334,10 @@ ngx_http_file_cache_reopen(ngx_http_requ c->node->count--; c->node = NULL; + if (c->tf_node != NULL) { + ngx_http_file_cache_tf_delete(c, cache); + } + ngx_shmtx_unlock(&cache->shpool->mutex); c->secondary = 1; @@ -1391,6 +1446,11 @@ ngx_http_file_cache_update_variant(ngx_h c->node->updating = 0; c->node = NULL; + if (c->tf_node != NULL) { + c->tf_node->done = 1; + ngx_http_file_cache_tf_delete(c, cache); + } + ngx_shmtx_unlock(&cache->shpool->mutex); c->file.name.len = 0; @@ -1479,6 +1539,13 @@ ngx_http_file_cache_update(ngx_http_requ c->node->exists = 1; } + if (c->tf_node != NULL) { + c->tf_node->updated = 1; + c->tf_node->done = 1; + + ngx_http_file_cache_tf_delete(c, cache); + } + c->node->updating = 0; ngx_shmtx_unlock(&cache->shpool->mutex); @@ -1694,6 +1761,13 @@ ngx_http_file_cache_free(ngx_http_cache_ fcn->updating = 0; } + if (c->tf_node != NULL) { + if (c->updating) { + c->tf_node->done = 1; + } + ngx_http_file_cache_tf_delete(c, cache); + } + if (c->error) { fcn->error = c->error; From jiri.setnicka at cdn77.com Fri Jan 28 16:31:57 2022 From: jiri.setnicka at cdn77.com 
(=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:57 +0100 Subject: [PATCH 05 of 15] http upstream & file_cache: store temp file number and length in tf_node In-Reply-To: References: Message-ID: <76c1a836b1de47cb16b6.1643387517@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 76c1a836b1de47cb16b636b585526e25574e0f58 # Parent 2488cf77a1cc6c7f48696816f085c71e49355b72 http upstream & file_cache: store temp file number and length in tf_node Number could be used to construct name of the tempfile, length is used by secondary requests to read the tempfile. diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -59,6 +59,7 @@ typedef struct { size_t body_start; off_t fs_size; ngx_msec_t lock_time; + uint32_t tf_number; /* 0 = no temp file exists */ } ngx_http_file_cache_node_t; typedef struct { @@ -211,6 +212,8 @@ struct ngx_http_file_cache_s { ngx_int_t ngx_http_file_cache_new(ngx_http_request_t *r); ngx_int_t ngx_http_file_cache_create(ngx_http_request_t *r); void ngx_http_file_cache_create_key(ngx_http_request_t *r); +void ngx_http_file_cache_update_tf(ngx_http_request_t *r, + ngx_http_cache_t *c, uint32_t tf_number, off_t length); ngx_int_t ngx_http_file_cache_open(ngx_http_request_t *r); ngx_int_t ngx_http_file_cache_set_header(ngx_http_request_t *r, u_char *buf); void ngx_http_file_cache_update(ngx_http_request_t *r, ngx_temp_file_t *tf); diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -319,6 +319,45 @@ ngx_http_file_cache_create_key(ngx_http_ ngx_memcpy(c->main, c->key, NGX_HTTP_CACHE_KEY_LEN); } +void +ngx_http_file_cache_update_tf(ngx_http_request_t *r, ngx_http_cache_t *c, + uint32_t tf_number, off_t length) +{ + ngx_http_file_cache_t *cache; + + if (!c->node) { + return; + } + + cache = 
c->file_cache; + + ngx_shmtx_lock(&cache->shpool->mutex); + + if (c->tf_node == NULL) { + c->node->tf_number = tf_number; + c->tf_node = ngx_slab_calloc_locked(cache->tf_shpool, + sizeof(ngx_http_file_cache_tf_node_t)); + if (c->tf_node == NULL) { + ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0, + "could not allocate tf node%s", cache->shpool->log_ctx); + return; + } + + cache->tf_sh->count++; + + c->tf_node->node.key = tf_number; + + ngx_rbtree_insert(&cache->tf_sh->rbtree, &c->tf_node->node); + + c->tf_node->updated = 0; + c->tf_node->count = 1; + } + + c->tf_node->length = length; + + ngx_shmtx_unlock(&cache->shpool->mutex); +} + ngx_int_t ngx_http_file_cache_open(ngx_http_request_t *r) @@ -971,6 +1010,7 @@ renew: fcn->uniq = 0; fcn->body_start = 0; fcn->fs_size = 0; + fcn->tf_number = 0; done: @@ -1547,6 +1587,7 @@ ngx_http_file_cache_update(ngx_http_requ } c->node->updating = 0; + c->node->tf_number = 0; ngx_shmtx_unlock(&cache->shpool->mutex); } @@ -1759,6 +1800,7 @@ ngx_http_file_cache_free(ngx_http_cache_ if (c->updating && fcn->lock_time == c->lock_time) { fcn->updating = 0; + fcn->tf_number = 0; } if (c->tf_node != NULL) { diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -4069,6 +4069,14 @@ ngx_http_upstream_process_upstream(ngx_h ngx_http_upstream_finalize_request(r, u, NGX_ERROR); return; } + +#if (NGX_HTTP_CACHE) + if (u->cacheable && r->cache) { + ngx_http_file_cache_update_tf(r, r->cache, + p->temp_file->suffix_number, + p->temp_file->offset); + } +#endif } ngx_http_upstream_process_request(r, u); From jiri.setnicka at cdn77.com Fri Jan 28 16:31:58 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:58 +0100 Subject: [PATCH 06 of 15] Configuration for tempfiles serving In-Reply-To: References: Message-ID: <5e3013a56643a9f8d26e.1643387518@pathfinder> # HG changeset patch # User Jiří Setnička # Date 
1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 5e3013a56643a9f8d26ea0e5882a1aa986c51903 # Parent 76c1a836b1de47cb16b636b585526e25574e0f58 Configuration for tempfiles serving New directives: * proxy_cache_tempfile on/off -- activate the whole tempfile serving logic * proxy_cache_tempfile_timeout 5s; -- how long to wait for the tempfile before returning 504 * proxy_cache_tempfile_loop 50ms; -- loop interval for checking tempfiles * fastcgi_cache_tempfile on/off -- activate the whole tempfile serving logic * fastcgi_cache_tempfile_timeout 5s; -- how long to wait for the tempfile before returning 504 * fastcgi_cache_tempfile_loop 50ms; -- loop interval for checking tempfiles * scgi_cache_tempfile on/off -- activate the whole tempfile serving logic * scgi_cache_tempfile_timeout 5s; -- how long to wait for the tempfile before returning 504 * scgi_cache_tempfile_loop 50ms; -- loop interval for checking tempfiles * uwsgi_cache_tempfile on/off -- activate the whole tempfile serving logic * uwsgi_cache_tempfile_timeout 5s; -- how long to wait for the tempfile before returning 504 * uwsgi_cache_tempfile_loop 50ms; -- loop interval for checking tempfiles diff --git a/src/http/modules/ngx_http_fastcgi_module.c b/src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c +++ b/src/http/modules/ngx_http_fastcgi_module.c @@ -473,6 +473,27 @@ static ngx_command_t ngx_http_fastcgi_c offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_lock_age), NULL }, + { ngx_string("fastcgi_cache_tempfile"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_tempfile), + NULL }, + + { ngx_string("fastcgi_cache_tempfile_timeout"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_tempfile_timeout), + NULL }, + + { ngx_string("fastcgi_cache_tempfile_loop"), + 
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_tempfile_loop), + NULL }, + { ngx_string("fastcgi_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -2866,6 +2887,9 @@ ngx_http_fastcgi_create_loc_conf(ngx_con conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile = NGX_CONF_UNSET; + conf->upstream.cache_tempfile_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile_loop = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; conf->upstream.cache_background_update = NGX_CONF_UNSET; #endif @@ -3160,6 +3184,22 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_tempfile, + prev->upstream.cache_tempfile, 0); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_timeout, + prev->upstream.cache_tempfile_timeout, 5000); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_loop, + prev->upstream.cache_tempfile_loop, 50); + + if (conf->upstream.cache_lock && conf->upstream.cache_tempfile) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"fastcgi_cache_lock\" and \"fastcgi_cache_tempfile\"" + " cannot be used together"); + return NGX_CONF_ERROR; + } + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -590,6 +590,27 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_age), NULL }, + { 
ngx_string("proxy_cache_tempfile"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_tempfile), + NULL }, + + { ngx_string("proxy_cache_tempfile_timeout"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_tempfile_timeout), + NULL }, + + { ngx_string("proxy_cache_tempfile_loop"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_tempfile_loop), + NULL }, + { ngx_string("proxy_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -3384,6 +3405,9 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile = NGX_CONF_UNSET; + conf->upstream.cache_tempfile_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile_loop = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; conf->upstream.cache_convert_head = NGX_CONF_UNSET; conf->upstream.cache_background_update = NGX_CONF_UNSET; @@ -3699,6 +3723,22 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_tempfile, + prev->upstream.cache_tempfile, 0); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_timeout, + prev->upstream.cache_tempfile_timeout, 5000); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_loop, + prev->upstream.cache_tempfile_loop, 50); + + if (conf->upstream.cache_lock && conf->upstream.cache_tempfile) { + 
ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"proxy_cache_lock\" and \"proxy_cache_tempfile\"" + " cannot be used together"); + return NGX_CONF_ERROR; + } + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff --git a/src/http/modules/ngx_http_scgi_module.c b/src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c +++ b/src/http/modules/ngx_http_scgi_module.c @@ -321,6 +321,27 @@ static ngx_command_t ngx_http_scgi_comma offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_lock_age), NULL }, + { ngx_string("scgi_cache_tempfile"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_tempfile), + NULL }, + + { ngx_string("scgi_cache_tempfile_timeout"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_tempfile_timeout), + NULL }, + + { ngx_string("scgi_cache_tempfile_loop"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_tempfile_loop), + NULL }, + { ngx_string("scgi_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -1273,6 +1294,9 @@ ngx_http_scgi_create_loc_conf(ngx_conf_t conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile = NGX_CONF_UNSET; + conf->upstream.cache_tempfile_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile_loop = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; conf->upstream.cache_background_update = NGX_CONF_UNSET; #endif @@ -1562,6 +1586,22 @@ 
ngx_http_scgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_tempfile, + prev->upstream.cache_tempfile, 0); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_timeout, + prev->upstream.cache_tempfile_timeout, 5000); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_loop, + prev->upstream.cache_tempfile_loop, 50); + + if (conf->upstream.cache_lock && conf->upstream.cache_tempfile) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"scgi_cache_lock\" and \"scgi_cache_tempfile\"" + " cannot be used together"); + return NGX_CONF_ERROR; + } + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -385,6 +385,27 @@ static ngx_command_t ngx_http_uwsgi_comm offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_lock_age), NULL }, + { ngx_string("uwsgi_cache_tempfile"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, + ngx_conf_set_flag_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_tempfile), + NULL }, + + { ngx_string("uwsgi_cache_tempfile_timeout"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_tempfile_timeout), + NULL }, + + { ngx_string("uwsgi_cache_tempfile_loop"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_tempfile_loop), + NULL }, + { ngx_string("uwsgi_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -1497,6 +1518,9 
@@ ngx_http_uwsgi_create_loc_conf(ngx_conf_ conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile = NGX_CONF_UNSET; + conf->upstream.cache_tempfile_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_tempfile_loop = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; conf->upstream.cache_background_update = NGX_CONF_UNSET; #endif @@ -1798,6 +1822,22 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_tempfile, + prev->upstream.cache_tempfile, 0); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_timeout, + prev->upstream.cache_tempfile_timeout, 5000); + + ngx_conf_merge_msec_value(conf->upstream.cache_tempfile_loop, + prev->upstream.cache_tempfile_loop, 50); + + if (conf->upstream.cache_lock && conf->upstream.cache_tempfile) { + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, + "\"uwsgi_cache_lock\" and \"uwsgi_cache_tempfile\"" + " cannot be used together"); + return NGX_CONF_ERROR; + } + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -114,11 +114,14 @@ struct ngx_http_cache_s { ngx_msec_t lock_timeout; ngx_msec_t lock_age; ngx_msec_t lock_time; + ngx_msec_t tempfile_timeout; + ngx_msec_t tempfile_loop; ngx_msec_t wait_time; ngx_event_t wait_event; unsigned lock:1; + unsigned serve_tempfile:1; unsigned waiting:1; unsigned updated:1; diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -892,6 +892,10 @@ ngx_http_upstream_cache(ngx_http_request c->lock_timeout = u->conf->cache_lock_timeout; c->lock_age = 
u->conf->cache_lock_age; + c->serve_tempfile = u->conf->cache_tempfile; + c->tempfile_timeout = u->conf->cache_tempfile_timeout; + c->tempfile_loop = u->conf->cache_tempfile_loop; + u->cache_status = NGX_HTTP_CACHE_MISS; } diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -205,6 +205,10 @@ typedef struct { ngx_msec_t cache_lock_timeout; ngx_msec_t cache_lock_age; + ngx_flag_t cache_tempfile; + ngx_msec_t cache_tempfile_timeout; + ngx_msec_t cache_tempfile_loop; + ngx_flag_t cache_revalidate; ngx_flag_t cache_convert_head; ngx_flag_t cache_background_update; From jiri.setnicka at cdn77.com Fri Jan 28 16:31:59 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:31:59 +0100 Subject: [PATCH 07 of 15] Tempfiles: Wait handlers In-Reply-To: References: Message-ID: <10e917f6ddb56c338f95.1643387519@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 10e917f6ddb56c338f9597b76a68cdbf21d8f8e8 # Parent 5e3013a56643a9f8d26ea0e5882a1aa986c51903 Tempfiles: Wait handlers First commit in a sequence of commits that adds the ability to serve multiple requests on the fly from a single growing tempfile. 
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -16,6 +16,9 @@ static ngx_int_t ngx_http_file_cache_loc static void ngx_http_file_cache_lock_wait_handler(ngx_event_t *ev); static void ngx_http_file_cache_lock_wait(ngx_http_request_t *r, ngx_http_cache_t *c); +static void ngx_http_file_cache_tempfile_wait_handler(ngx_event_t *ev); +static ngx_int_t ngx_http_file_cache_wait_for_temp_file(ngx_http_request_t *r, + ngx_http_cache_t *c); static ngx_int_t ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c); static ssize_t ngx_http_file_cache_aio_read(ngx_http_request_t *r, @@ -622,6 +625,86 @@ wakeup: } +static void +ngx_http_file_cache_tempfile_wait_handler(ngx_event_t *ev) +{ + ngx_connection_t *c; + ngx_http_request_t *r; + + r = ev->data; + c = r->connection; + + ngx_http_set_log_request(c->log, r); + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "http file cache wait: \"%V?%V\"", &r->uri, &r->args); + + r->main->blocked--; + r->cache_tempfile_busy = 0; + + r->write_event_handler(r); + + ngx_http_run_posted_requests(c); +} + + +static ngx_int_t +ngx_http_file_cache_wait_for_temp_file(ngx_http_request_t *r, ngx_http_cache_t *c) +{ + ngx_int_t updated, done; + ngx_uint_t wait; + ngx_msec_t now, timer; + ngx_http_file_cache_t *cache; + + c->waiting = 0; + + now = ngx_current_msec; + + timer = c->wait_time - now; + + if ((ngx_msec_int_t) timer <= 0) { + ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + "cache tempfile lock timeout"); + + c->tempfile_timeout = 0; + + return NGX_HTTP_GATEWAY_TIME_OUT; + } + + cache = c->file_cache; + wait = 0; + + ngx_shmtx_lock(&cache->shpool->mutex); + + updated = c->tf_node ? c->tf_node->updated : 0; + done = c->tf_node ? 
c->tf_node->done : 0; + + if (!done && c->node->updating) { + wait = 1; + } + + ngx_shmtx_unlock(&cache->shpool->mutex); + + if (updated) { + return NGX_DONE; + } + + if (wait) { + c->waiting = 1; + + ngx_add_timer(&c->wait_event, + (timer > c->tempfile_loop) ? c->tempfile_loop : timer); + + r->main->blocked++; + r->cache_tempfile_busy = 1; + + return NGX_AGAIN; + } + + return NGX_DECLINED; +} + + static ngx_int_t ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c) { diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h +++ b/src/http/ngx_http_request.h @@ -556,6 +556,11 @@ struct ngx_http_request_s { unsigned subrequest_ranges:1; unsigned single_range:1; unsigned disable_not_modified:1; + +#if (NGX_HTTP_CACHE) + unsigned cache_tempfile_busy:1; +#endif + unsigned stat_reading:1; unsigned stat_writing:1; unsigned stat_processing:1; diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -561,6 +561,10 @@ ngx_http_upstream_init_request(ngx_http_ return; } + if (r->cache_tempfile_busy) { + return; + } + u = r->upstream; #if (NGX_HTTP_CACHE) From jiri.setnicka at cdn77.com Fri Jan 28 16:32:01 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:01 +0100 Subject: [PATCH 09 of 15] Tempfiles: Sending data from tempfile In-Reply-To: References: Message-ID: <24453fd1ce204f361748.1643387521@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 24453fd1ce204f361748c32e3c271d9e6fc7c9eb # Parent 101a15e01c313f1327937c84fbf143f875d868de Tempfiles: Sending data from tempfile This commit adds mechanism for sending only a part of the tempfile: 1. When waiting for tempfile (c->waiting) ngx_http_cache_send obtains current size of file opened by ngx_http_file_cache_open and saves it into c->length 2. 
ngx_http_cache_send_internal sends appropriate part of the tempfile and records sent bytes. If output filter returned NGX_AGAIN, just wait for write event to wake us, otherwise setup timer using ngx_http_cache_wait_for_temp_file. 3. When tempfile is completed and moved it will server rest of the file (because opened file descriptor still points on the moved file, thank you POSIX) and it will return NGX_OK. diff --git a/src/http/ngx_http_cache.h b/src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h +++ b/src/http/ngx_http_cache.h @@ -92,6 +92,7 @@ struct ngx_http_cache_s { size_t buffer_size; size_t header_start; size_t body_start; + size_t body_sent_bytes; off_t length; off_t fs_size; diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -55,6 +55,8 @@ static ngx_int_t ngx_http_file_cache_reo ngx_http_cache_t *c); static ngx_int_t ngx_http_file_cache_update_variant(ngx_http_request_t *r, ngx_http_cache_t *c); +static ngx_int_t ngx_http_cache_send_internal(ngx_http_request_t *r, + ngx_int_t last); static void ngx_http_file_cache_cleanup(void *data); static time_t ngx_http_file_cache_forced_expire(ngx_http_file_cache_t *cache); static time_t ngx_http_file_cache_expire(ngx_http_file_cache_t *cache); @@ -1923,11 +1925,59 @@ done: } } - ngx_int_t ngx_http_cache_send(ngx_http_request_t *r) { ngx_int_t rc; + ngx_event_t *wev; + ngx_http_cache_t *c; + + c = r->cache; + + if (!c->waiting) { + return ngx_http_cache_send_internal(r, 1); + } + + wev = r->connection->write; + + ngx_shmtx_lock(&c->file_cache->shpool->mutex); + + c->length = c->tf_node->length; + + ngx_shmtx_unlock(&c->file_cache->shpool->mutex); + + rc = ngx_http_cache_send_internal(r, 0); + + if (rc != NGX_OK && rc != NGX_AGAIN) { + return rc; + } + + if (rc == NGX_AGAIN && !wev->ready) { + return NGX_BUSY; /* epoll will wake us */ + } + + rc = ngx_http_file_cache_wait_for_temp_file(r, c); + + if (rc == 
NGX_AGAIN) { + return NGX_BUSY; + } + + if (rc == NGX_DECLINED) { + return NGX_ERROR; /* cannot restart here */ + } + + if (rc == NGX_DONE) { + return ngx_http_cache_send_internal(r, 1); + } + + return rc; +} + + +static ngx_int_t +ngx_http_cache_send_internal(ngx_http_request_t *r, ngx_int_t last) +{ + ngx_int_t rc; ngx_buf_t *b; ngx_chain_t out; ngx_http_cache_t *c; @@ -1937,10 +1987,28 @@ ngx_http_cache_send(ngx_http_request_t * ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http file cache send: %s", c->file.name.data); + if (r->header_sent && r->header_only) { + return NGX_OK; + } + if (r != r->main && c->length - c->body_start == 0) { return ngx_http_send_header(r); } + if (!last && (size_t)c->length == c->body_start + c->body_sent_bytes) { + /* nothing to write, invoke only output filter */ + + if (!r->header_sent) { + rc = ngx_http_send_header(r); + + if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { + return rc; + } + } + + return ngx_http_output_filter(r, NULL); + } + /* we need to allocate all before the header would be sent */ b = ngx_calloc_buf(r->pool); @@ -1953,18 +2021,25 @@ ngx_http_cache_send(ngx_http_request_t * return NGX_HTTP_INTERNAL_SERVER_ERROR; } - rc = ngx_http_send_header(r); - - if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { - return rc; + if (!r->header_sent) { + rc = ngx_http_send_header(r); + + if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { + return rc; + } } - b->file_pos = c->body_start; + /* If we send more bytes reset wait_time */ + r->cache->wait_time = ngx_current_msec + r->cache->tempfile_timeout; + + b->file_pos = c->body_start + c->body_sent_bytes; b->file_last = c->length; - - b->in_file = (c->length - c->body_start) ? 1: 0; - b->last_buf = (r == r->main) ? 1: 0; - b->last_in_chain = 1; + c->body_sent_bytes = c->length - c->body_start; + + b->in_file = (c->length - b->file_pos) ? 1: 0; + b->last_buf = (r == r->main && last) ? 
1: 0; + b->last_in_chain = last; + b->flush = 1; b->file->fd = c->file.fd; b->file->name = c->file.name; From jiri.setnicka at cdn77.com Fri Jan 28 16:32:00 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:00 +0100 Subject: [PATCH 08 of 15] Tempfiles: Mechanism of opening tempfiles used for serving parallel requests In-Reply-To: References: Message-ID: <101a15e01c313f132793.1643387520@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 101a15e01c313f1327937c84fbf143f875d868de # Parent 10e917f6ddb56c338f9597b76a68cdbf21d8f8e8 Tempfiles: Mechanism of opening tempfiles used for serving parallel requests This commit adds a mechanism for choosing and opening the tempfile. For the first request it acts as before: fcn->updating is set and the request is downloaded to a tempfile and served at the same time. Other parallel requests have the c->waiting flag set and use the tempfile number to locate and open the tempfile used by the primary request.
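The tempfile lookup described above hinges on a fixed-width naming convention: the shared cache node publishes only a small tempfile number (tf_number), and a secondary request rebuilds the tempfile path from the cache file path plus that number, zero-padded to ten digits (the patch formats it later with ngx_sprintf(p, ".%010uD%Z", tf_number)). A minimal sketch of the same scheme in plain C, with snprintf standing in for ngx_sprintf:

```c
#include <stdio.h>

/* Rebuild the tempfile path from the cache file path and the tempfile
 * number stored in the shared cache node.  The number is zero-padded to
 * ten digits, matching the ".%010uD%Z" format used in the patch; the
 * function name is illustrative, not nginx API. */
static int
cache_temp_file_name(char *buf, size_t len, const char *cache_path,
    unsigned tf_number)
{
    return snprintf(buf, len, "%s.%010u", cache_path, tf_number);
}
```

A secondary request that finds tf_number == 7 for cache file /cache/ab/cd/key would then try to open /cache/ab/cd/key.0000000007; if that open fails with ENOENT, the tempfile has already been moved into place and the request falls back to the final cache file, which is exactly the "tempfile was moved meantime" branch in the diff.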
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -11,6 +11,10 @@ #include +static ngx_int_t ngx_http_file_cache_open_file(ngx_http_request_t *r, + ngx_str_t *filename); +static ngx_int_t ngx_http_file_cache_open_temp_file(ngx_http_request_t *r, + ngx_http_cache_t *c); static ngx_int_t ngx_http_file_cache_lock(ngx_http_request_t *r, ngx_http_cache_t *c); static void ngx_http_file_cache_lock_wait_handler(ngx_event_t *ev); @@ -369,13 +373,14 @@ ngx_http_file_cache_open(ngx_http_reques ngx_uint_t test; ngx_http_cache_t *c; ngx_pool_cleanup_t *cln; - ngx_open_file_info_t of; ngx_http_file_cache_t *cache; - ngx_http_core_loc_conf_t *clcf; c = r->cache; if (c->waiting) { + if (c->serve_tempfile) { + return ngx_http_file_cache_open_temp_file(r, c); + } return NGX_AGAIN; } @@ -446,6 +451,32 @@ ngx_http_file_cache_open(ngx_http_reques goto done; } + rc = ngx_http_file_cache_open_file(r, &c->file.name); + if (rc != NGX_DECLINED) { + return rc; + } + +done: + + if (rv == NGX_DECLINED) { + return ngx_http_file_cache_lock(r, c); + } + + return rv; +} + + +static ngx_int_t +ngx_http_file_cache_open_file(ngx_http_request_t *r, ngx_str_t *filename) +{ + ngx_http_cache_t *c; + ngx_open_file_info_t of; + ngx_http_file_cache_t *cache; + ngx_http_core_loc_conf_t *clcf; + + c = r->cache; + cache = c->file_cache; + clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); ngx_memzero(&of, sizeof(ngx_open_file_info_t)); @@ -457,7 +488,7 @@ ngx_http_file_cache_open(ngx_http_reques of.directio = NGX_OPEN_FILE_DIRECTIO_OFF; of.read_ahead = clcf->read_ahead; - if (ngx_open_cached_file(clcf->open_file_cache, &c->file.name, &of, r->pool) + if (ngx_open_cached_file(clcf->open_file_cache, filename, &of, r->pool) != NGX_OK) { switch (of.err) { @@ -467,7 +498,7 @@ ngx_http_file_cache_open(ngx_http_reques case NGX_ENOENT: case NGX_ENOTDIR: - goto done; + return NGX_DECLINED; default: 
ngx_log_error(NGX_LOG_CRIT, r->connection->log, of.err, @@ -491,24 +522,90 @@ ngx_http_file_cache_open(ngx_http_reques } return ngx_http_file_cache_read(r, c); +} + + +static ngx_int_t +ngx_http_file_cache_open_temp_file(ngx_http_request_t *r, ngx_http_cache_t *c) +{ + u_char *p; + ngx_int_t rc; + ngx_file_t temp_file; + ngx_uint_t tf_number; + ngx_http_file_cache_t *cache; + + cache = c->file_cache; + + if (c->tf_node) { + return NGX_OK; + } + + ngx_shmtx_lock(&cache->shpool->mutex); + + tf_number = c->node->tf_number; + + if (tf_number > 0) { + c->tf_node = ngx_http_file_cache_tf_lookup(cache, tf_number); + if (c->tf_node == NULL) { + ngx_shmtx_unlock(&cache->shpool->mutex); + ngx_log_error(NGX_LOG_WARN, r->connection->log, 0, + "Missing tf_node for %d, probably full tf_zone", tf_number); + return NGX_DECLINED; + } + + c->tf_node->count++; + } + + ngx_shmtx_unlock(&cache->shpool->mutex); + + if (c->tf_node == NULL) { + rc = ngx_http_file_cache_wait_for_temp_file(r, c); + + if (rc == NGX_DONE || rc == NGX_DECLINED) { + goto done; + } + + return rc; + } + + temp_file.name.len = c->file.name.len + 10 + 1; + temp_file.name.data = ngx_pnalloc(r->pool, temp_file.name.len + 1); + if (temp_file.name.data == NULL) { + return NGX_ERROR; + } + + p = ngx_cpymem(temp_file.name.data, c->file.name.data, c->file.name.len); + ngx_sprintf(p, ".%010uD%Z", tf_number); + + rc = ngx_http_file_cache_open_file(r, &temp_file.name); + + if (rc == NGX_DECLINED) { + /* tempfile was moved meantime */ + ngx_shmtx_lock(&cache->shpool->mutex); + + if (c->tf_node != NULL) { + ngx_http_file_cache_tf_delete(c, cache); + } + + ngx_shmtx_unlock(&cache->shpool->mutex); + goto done; + } + + return rc; done: - - if (rv == NGX_DECLINED) { - return ngx_http_file_cache_lock(r, c); - } - - return rv; + c->waiting = 0; + return ngx_http_file_cache_open_file(r, &c->file.name); } static ngx_int_t ngx_http_file_cache_lock(ngx_http_request_t *r, ngx_http_cache_t *c) { - ngx_msec_t now, timer; - 
ngx_http_file_cache_t *cache; - - if (!c->lock) { + ngx_msec_t now, timer; + ngx_http_file_cache_t *cache; + + if (!c->lock && !c->serve_tempfile) { return NGX_DECLINED; } @@ -522,7 +619,7 @@ ngx_http_file_cache_lock(ngx_http_reques if (!c->node->updating || (ngx_msec_int_t) timer <= 0) { c->node->updating = 1; - c->node->lock_time = now + c->lock_age; + c->node->lock_time = now + (c->lock ? c->lock_age : c->tempfile_timeout); c->updating = 1; c->lock_time = c->node->lock_time; } @@ -537,27 +634,40 @@ ngx_http_file_cache_lock(ngx_http_reques return NGX_DECLINED; } - if (c->lock_timeout == 0) { + if (c->lock && c->lock_timeout == 0) { + return NGX_HTTP_CACHE_SCARCE; + } + + if (c->serve_tempfile && c->tempfile_timeout == 0) { return NGX_HTTP_CACHE_SCARCE; } c->waiting = 1; if (c->wait_time == 0) { - c->wait_time = now + c->lock_timeout; - - c->wait_event.handler = ngx_http_file_cache_lock_wait_handler; + if (c->lock) { + c->wait_time = now + c->lock_timeout; + c->wait_event.handler = ngx_http_file_cache_lock_wait_handler; + } else { + c->wait_time = now + c->tempfile_timeout; + c->wait_event.handler = ngx_http_file_cache_tempfile_wait_handler; + } c->wait_event.data = r; c->wait_event.log = r->connection->log; } - timer = c->wait_time - now; - - ngx_add_timer(&c->wait_event, (timer > 500) ? 500 : timer); - - r->main->blocked++; - - return NGX_AGAIN; + if (c->lock) { + timer = c->wait_time - now; + + ngx_add_timer(&c->wait_event, timer > 500 ? 500 : timer); + + r->main->blocked++; + + return NGX_AGAIN; + } + + /* Tempfile may be already there, try it immediately */ + return ngx_http_file_cache_open_temp_file(r, c); } @@ -829,10 +939,15 @@ ngx_http_file_cache_read(ngx_http_reques } else { c->node->updating = 1; c->updating = 1; + c->node->lock_time = ngx_current_msec + (c->lock ? 
c->lock_age : c->tempfile_timeout); c->lock_time = c->node->lock_time; rc = NGX_HTTP_CACHE_STALE; } + if (c->tf_node != NULL) { + ngx_http_file_cache_tf_delete(c, cache); + } + ngx_shmtx_unlock(&cache->shpool->mutex); ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, @@ -1464,6 +1579,7 @@ ngx_http_file_cache_reopen(ngx_http_requ ngx_shmtx_unlock(&cache->shpool->mutex); c->secondary = 1; + c->waiting = 0; c->file.name.len = 0; c->body_start = c->buffer_size; From jiri.setnicka at cdn77.com Fri Jan 28 16:32:02 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:02 +0100 Subject: [PATCH 10 of 15] Tempfiles: Setup event handlers in ngx_http_upstream.c In-Reply-To: References: Message-ID: # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID bd12e8ba1af2005260e68a410e3c8927a88dac1a # Parent 24453fd1ce204f361748c32e3c271d9e6fc7c9eb Tempfiles: Setup event handlers in ngx_http_upstream.c Introduced in the previous commits the file cache could serve multiple clients from the currently downloading tempfiles. This is achieved by repeated execution of ngx_http_file_cache_send and related functions. Each run is performed by executing ngx_http_upstream_init_request, invoked by either write event or by tempfiles timer. 
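The retry loop described in this commit message has two wake-up sources: the client connection's write event and the tempfile wait timer, both of which re-enter ngx_http_upstream_init_request while it keeps returning NGX_BUSY. On the timer side, the wait event is re-armed with the remaining wait time capped by a per-iteration polling interval (c->tempfile_loop in these patches, a hard-coded 500 ms for the classic cache lock). A stand-alone sketch of that capped schedule, with illustrative names rather than nginx API:

```c
/* Remaining-time-capped polling interval, mirroring
 *   ngx_add_timer(&c->wait_event,
 *                 (timer > c->tempfile_loop) ? c->tempfile_loop : timer);
 * msec_t is a stand-in for ngx_msec_t. */
typedef unsigned long msec_t;

static msec_t
next_wait_interval(msec_t wait_deadline, msec_t now, msec_t loop_interval)
{
    msec_t remaining = wait_deadline - now;

    /* never sleep past the deadline, never poll faster than the loop */
    return (remaining > loop_interval) ? loop_interval : remaining;
}
```

Each expiry of this timer then clears r->cache_tempfile_busy, calls r->write_event_handler(r) and thereby retries the whole open-and-send sequence, until either the tempfile is updated or the overall tempfile timeout elapses.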
diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -576,10 +576,12 @@ ngx_http_upstream_init_request(ngx_http_ if (rc == NGX_BUSY) { r->write_event_handler = ngx_http_upstream_init_request; + r->read_event_handler = ngx_http_test_reading; return; } r->write_event_handler = ngx_http_request_empty_handler; + r->read_event_handler = ngx_http_block_reading; if (rc == NGX_ERROR) { ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR); @@ -589,6 +591,15 @@ ngx_http_upstream_init_request(ngx_http_ if (rc == NGX_OK) { rc = ngx_http_upstream_cache_send(r, u); + if (rc == NGX_BUSY) { + r->write_event_handler = ngx_http_upstream_init_request; + r->read_event_handler = ngx_http_test_reading; + return; + } + + r->write_event_handler = ngx_http_request_empty_handler; + r->read_event_handler = ngx_http_block_reading; + if (rc == NGX_DONE) { return; } @@ -1088,6 +1099,10 @@ ngx_http_upstream_cache_send(ngx_http_re return NGX_ERROR; } + if (r->header_sent) { + return ngx_http_cache_send(r); + } + rc = u->process_header(r); if (rc == NGX_OK) { @@ -2984,7 +2999,11 @@ ngx_http_upstream_send_response(ngx_http ngx_connection_t *c; ngx_http_core_loc_conf_t *clcf; - rc = ngx_http_send_header(r); + if (r->header_sent) { + rc = NGX_OK; + } else { + rc = ngx_http_send_header(r); + } if (rc == NGX_ERROR || rc > NGX_OK || r->post_action) { ngx_http_upstream_finalize_request(r, u, rc); From jiri.setnicka at cdn77.com Fri Jan 28 16:32:03 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:03 +0100 Subject: [PATCH 11 of 15] Tempfiles: reset c->body_start when updating a tempfile In-Reply-To: References: Message-ID: <64a2e216aeeeb847eb0b.1643387523@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 64a2e216aeeeb847eb0b9a83ed6e6082ade4ac9e # Parent 
bd12e8ba1af2005260e68a410e3c8927a88dac1a Tempfiles: reset c->body_start when updating a tempfile Previously, when there was an old cached file and multiple concurrent requests to it, the first request created a new tempfile and the other requests tried to open this tempfile - but with c->body_start from the original file. This could result in critical errors "cache file .. has too long header". diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -402,7 +402,9 @@ ngx_http_file_cache_open(ngx_http_reques cln->data = c; } - c->buffer_size = c->body_start; + if (c->buffer_size < c->body_start) { + c->buffer_size = c->body_start; + } rc = ngx_http_file_cache_exists(cache, c); @@ -560,6 +562,8 @@ ngx_http_file_cache_open_temp_file(ngx_h ngx_shmtx_unlock(&cache->shpool->mutex); + c->body_start = c->buffer_size; + if (c->tf_node == NULL) { rc = ngx_http_file_cache_wait_for_temp_file(r, c); From jiri.setnicka at cdn77.com Fri Jan 28 16:32:04 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:04 +0100 Subject: [PATCH 12 of 15] Tempfiles: Expired tempfiles In-Reply-To: References: Message-ID: <0e00ffe7fab3dcf3d316.1643387524@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 0e00ffe7fab3dcf3d3167851237327e5fb9e10b6 # Parent 64a2e216aeeeb847eb0b9a83ed6e6082ade4ac9e Tempfiles: Expired tempfiles When a tempfile expires before it is fully downloaded, we fire one request to download a newer version of the tempfile and keep the other requests locked.
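Expiry in these patches is detected the way nginx timers usually are: millisecond tick counters are subtracted as unsigned values and the difference is cast to the signed type, as in the earlier hunk's "timer = c->wait_time - now; if ((ngx_msec_int_t) timer <= 0)". This stays correct even when the tick counter wraps around. A small illustration, using fixed-width stand-ins for ngx_msec_t / ngx_msec_int_t (the real types are platform-word sized):

```c
#include <stdint.h>

typedef uint32_t msec_t;     /* stand-in for ngx_msec_t     */
typedef int32_t  msec_int_t; /* stand-in for ngx_msec_int_t */

/* Returns nonzero when `deadline` is now or in the past, even if the
 * millisecond counter wrapped between arming the timer and checking. */
static int
deadline_expired(msec_t deadline, msec_t now)
{
    return (msec_int_t) (deadline - now) <= 0;
}
```

The unsigned subtraction makes the comparison depend only on the distance between the two ticks, not their absolute values, which is why a naive "now >= deadline" test would misfire around wraparound while this form does not.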
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -939,10 +939,15 @@ ngx_http_file_cache_read(ngx_http_reques ngx_shmtx_lock(&cache->shpool->mutex); - if (c->node->updating) { + /* If serving tempfile and it is already expired try to download a new + * one (but only if not downloading it already) */ + if (c->node->updating + && (!c->tf_node || c->node->tf_number != c->tf_node->node.key) + ) { rc = NGX_HTTP_CACHE_UPDATING; } else { + c->node->tf_number = 0; c->node->updating = 1; c->updating = 1; c->node->lock_time = ngx_current_msec + (c->lock ? c->lock_age : c->tempfile_timeout); @@ -1791,8 +1796,10 @@ ngx_http_file_cache_update(ngx_http_requ ngx_http_file_cache_tf_delete(c, cache); } - c->node->updating = 0; - c->node->tf_number = 0; + if (c->node->lock_time == c->lock_time) { + c->node->updating = 0; + c->node->tf_number = 0; + } ngx_shmtx_unlock(&cache->shpool->mutex); } From jiri.setnicka at cdn77.com Fri Jan 28 16:32:05 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:05 +0100 Subject: [PATCH 13 of 15] Tempfiles: Skip cached file if there is already newer tempfile In-Reply-To: References: Message-ID: <5e64af4c94860cd5cf4b.1643387525@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 5e64af4c94860cd5cf4b9af5a265d3a087e7b735 # Parent 0e00ffe7fab3dcf3d3167851237327e5fb9e10b6 Tempfiles: Skip cached file if there is already newer tempfile diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -455,6 +455,20 @@ ngx_http_file_cache_open(ngx_http_reques goto done; } + ngx_shmtx_lock(&cache->shpool->mutex); + + if (c->serve_tempfile && c->node->updating) { + /* Do not try old cached file, jump directly to cache_lock and use tempfile */ + test = 
0; + } + + ngx_shmtx_unlock(&cache->shpool->mutex); + + if (!test) { + rv = NGX_DECLINED; + goto done; + } + rc = ngx_http_file_cache_open_file(r, &c->file.name); if (rc != NGX_DECLINED) { return rc; From jiri.setnicka at cdn77.com Fri Jan 28 16:32:06 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:06 +0100 Subject: [PATCH 14 of 15] Tempfiles: Set send_timeout inside ngx_http_cache_send In-Reply-To: References: Message-ID: <3cd1f04d933137153b06.1643387526@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 3cd1f04d933137153b0628819ccb251d1f57614b # Parent 5e64af4c94860cd5cf4b9af5a265d3a087e7b735 Tempfiles: Set send_timeout inside ngx_http_cache_send diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -1953,9 +1953,10 @@ done: ngx_int_t ngx_http_cache_send(ngx_http_request_t *r) { - ngx_int_t rc; - ngx_event_t *wev; - ngx_http_cache_t *c; + ngx_int_t rc; + ngx_event_t *wev; + ngx_http_cache_t *c; + ngx_http_core_loc_conf_t *clcf; c = r->cache; @@ -1965,6 +1966,33 @@ ngx_http_cache_send(ngx_http_request_t * wev = r->connection->write; + clcf = ngx_http_get_module_loc_conf(r->main, ngx_http_core_module); + + if (wev->timedout) { + ngx_log_error(NGX_LOG_INFO, r->connection->log, NGX_ETIMEDOUT, + "client timed out"); + r->connection->timedout = 1; + + return NGX_HTTP_REQUEST_TIME_OUT; + } + + if (wev->delayed || r->aio) { + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, wev->log, 0, + "http writer delayed"); + + if (!wev->delayed) { + ngx_add_timer(wev, clcf->send_timeout); + } + + rc = ngx_handle_write_event(wev, clcf->send_lowat); + if (rc != NGX_OK) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "handle write event error: %d", rc); + return rc; + } + + return NGX_BUSY; + } + ngx_shmtx_lock(&c->file_cache->shpool->mutex); c->length = 
c->tf_node->length; @@ -1978,6 +2006,14 @@ ngx_http_cache_send(ngx_http_request_t * } if (rc == NGX_AGAIN && !wev->ready) { + if (!wev->delayed) { + ngx_add_timer(wev, clcf->send_timeout); + } + + if (ngx_handle_write_event(wev, clcf->send_lowat) != NGX_OK) { + return NGX_ERROR; + } + return NGX_BUSY; /* epoll will wake us */ } From jiri.setnicka at cdn77.com Fri Jan 28 16:32:07 2022 From: jiri.setnicka at cdn77.com (=?utf-8?b?SmnFmcOtIFNldG5pxI1rYQ==?=) Date: Fri, 28 Jan 2022 17:32:07 +0100 Subject: [PATCH 15 of 15] Use cache status UPDATING when serving from tempfile In-Reply-To: References: Message-ID: <6745b619f44ea7debddf.1643387527@pathfinder> # HG changeset patch # User Jiří Setnička # Date 1643385660 -3600 # Fri Jan 28 17:01:00 2022 +0100 # Node ID 6745b619f44ea7debddfe04d9af37d1deffe89dd # Parent 3cd1f04d933137153b0628819ccb251d1f57614b Use cache status UPDATING when serving from tempfile diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -954,7 +954,11 @@ ngx_http_upstream_cache(ngx_http_request break; case NGX_OK: - u->cache_status = NGX_HTTP_CACHE_HIT; + if (c->tf_node != NULL) { + u->cache_status = NGX_HTTP_CACHE_UPDATING; + } else { + u->cache_status = NGX_HTTP_CACHE_HIT; + } } switch (rc) { From mdounin at mdounin.ru Fri Jan 28 22:25:00 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 29 Jan 2022 01:25:00 +0300 Subject: Using single persistent socket to send subrequests In-Reply-To: References: Message-ID: Hello! On Fri, Jan 28, 2022 at 06:13:45AM +0000, Devashi Tandon wrote: > Was wondering if this question is more suited for the > development forum, since I didn't receive any response on the > user forum. Repeating the question below: You've got a response, see here: https://mailman.nginx.org/archives/list/nginx at nginx.org/message/7COCHVJ4ROBCLBPHPVGP7IQHJQXTRKAT/ Please refrain from further posting to the nginx-devel@ mailing list. Thank you. 
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Fri Jan 28 23:19:05 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 29 Jan 2022 02:19:05 +0300 Subject: request body filter last_buf In-Reply-To: References: Message-ID: Hello! On Thu, Jan 27, 2022 at 11:02:45AM -0800, Dk Jack wrote: > Thank you. Last few questions to complete my understanding? > > Were the module body filter callbacks being invoked even when > content_length_n <= 0 prior to 7913:185c86b830ef change? With content_length_n < 0 (the only valid case is -1) filters were never called. With content_length_n == 0, filters were called. Prior to 7913:185c86b830ef, the proper way for filters to detect that the body was fully read and processed was rb->rest == 0, which is immediately true when content_length_n is 0, see ngx_http_request_body_save_filter() for relevant code. In 7913:185c86b830ef and following 7914:9cf043a5d9ca the code was changed to rely on the last_buf instead, to make it possible to buffer response parts in intermediate body filters. > Is the nginx body handling a store-and-forward architecture where it waits > for the entire body or does it forward pkts as and when it receives > (especially when dealing with large bodies). Body handling might be different depending on the settings and/or particular request handling details. By default, nginx reads the whole response body into rb->bufs (possible written to a temporary file) and then forwards the whole body. Additionally, there is non-buffered request body reading, which is activated with the r->request_body_no_buffering flag (usually with settings like "proxy_request_buffering off;"). In this mode nginx forwards request body data immediately as it is received from the client, without disk buffering. > What is the behavior of last_buf if it's not store-and-forward. The last_buf flag is set on the last buffer, when the whole request body is read from the client and no additional data will follow. 
It doesn't depend on the request body reading mode being used. -- Maxim Dounin http://mdounin.ru/ From mandeep-singh.chhabra at thalesgroup.com Sat Jan 29 13:50:24 2022 From: mandeep-singh.chhabra at thalesgroup.com (CHHABRA Mandeep Singh) Date: Sat, 29 Jan 2022 13:50:24 +0000 Subject: [PATCH] Add provision to fetch certificate chain from Nginx In-Reply-To: References: Message-ID: Hi Maxim, > Sure, intermediate certificates are not required to be known by the server and can be provided by the client in the extra certificates during SSL/TLS handshake. I am not sure what you mean by passing the intermediate CAs in the extra certificates. They are CA certificates and are passed as CA certificate chain. For example: curl has option --cacert which can take the entire chain of CA certificates (from immediate issuer to the trust anchor) # curl --cacert > Further, it is not really possible to properly retrieve such client-provided intermediate certificates after the initial > handshake: these certificates are not saved to the session data and therefore not available after session reuse, see > 7653:8409f9df6219 (http://hg.nginx.org/nginx/rev/8409f9df6219). AFAIU, intermediate CA certificates are not supposed to be known by Nginx anyway. I am not sure if we are referring to two different things here. But with the changes which I proposed, I am able to retrieve the entire chain from Nginx(also in cases when intermediate CAs are not known to the server). During certificate verification, Nginx creates a verified chain from the incoming certificate chain and the CA certificate which is trusted on the web interface. Nginx only needs to know about the trust anchor (the self signed CA certificate) > And you want to allow access only to certificates signed by > Intermediate1 CA in some cases, and only certificates signed by > Intermediate2 CA in other cases. Is that correct? You are correct partially. 
Yes, we need to allow/deny a configuration based on the configuration available for the CA. There could be different combinations with the chain here 1- Cert-> Intermediate1 -> Root 2- Cert-> Intermediate2 -> Root 3- Cert-> Root 4- Cert-> Intermediate1 -> Intermediate2 -> Root For example:- Consider the chain "Cert-> Intermediate1 -> Intermediate2 -> Root" We need to something based on Intermediate2's configuration and it is possible that the Intermediate1 is not known to the server. Let me try to explain in detail : 1- There is a client certificate which is issued by an intermediate CA certificate and the intermediate CA is issued by a trust anchor. i.e. Cert1 -> Intermediate1 ->Root1 ( self signed CA). The Root1 is a trust anchor and is trusted on the web interface. The Intermediate1 is not known on the server. When a client established a connection with the server, passing the chain as Cert1 -> Intermediate1 -> Root1(optionally) nginx accepts this connection and creates a verified chain, because the Root1 is trusted on the interface and it is a self signed certificate. Nginx passes the client certificate (Cert1) to the middleware. If the Intermediate1 was known to the server always, we could do what you are suggesting(using ssl_client_i_dn and ssl_client_escaped_cert) If the Intermediate1 is not known to the server, there is no way to know/get the Root1 CA from the Cert1. Probably, I can try to explain in a better format, if you need. Please let me know. Regards, Mandeep -----Original Message----- From: nginx-devel On Behalf Of Maxim Dounin Sent: Wednesday, January 12, 2022 2:11 AM To: nginx-devel at nginx.org Subject: Re: [PATCH] Add provision to fetch certificate chain from Nginx Hello! On Thu, Dec 30, 2021 at 09:35:26AM +0000, CHHABRA Mandeep Singh wrote: > As far as my understanding goes, the intermediate CA certificates are > not required to be known to the server. 
> It is only the trust anchor(the root CA certificate) which is required > to be known and trusted on the sever. > And in our case also, the root CA certificate is trusted for the web. Sure, intermediate certificates are not required to be known by the server and can be provided by the client in the extra certificates during SSL/TLS handshake. Such configurations are believed to be extremely rare though: in most cases intermediate certificates are well known and can be easily configured on the server side, and this saves extra configuration on clients. Further, it is not really possible to properly retrieve such client-provided intermediate certificates after the initial handshake: these certificates are not saved to the session data and therefore not available after session reuse, see 7653:8409f9df6219 (http://hg.nginx.org/nginx/rev/8409f9df6219). Hence the original question about the problem you trying to solve. > I have tried to give a brief of the problem in the following section. > > We have a product which supports multi-tenancy and uses Nginx as a > reverse proxy. > There are different isolated domains which share the same trust > anchor. But there could be difference in the client certificate chain > in different domains. There is a need to do some extra validations > based on the CAs in the chain. > To be more precise, we have option to specify if a CA could be used to > do client or user authentication. There is a possibility that in one > domain, a CA is enabled for client authentication and in another , the > same CA is disabled. > > So, we need a way to get the certificate chain from Nginx, to do these > extra validations, apart from what Nginx does i.e. > checking if the chain could be verified. > But there is no way to get the chain, today. 
Not sure I've understood your description correctly, but from what I understood it looks like you are not trying to retrieve client-provided intermediate certificates, but instead trying to do additional checking on the chain which contains client-provided end certificate and the chain constructed by nginx from the intermediate certificates known on the server during certificate verification. That is, you have something like: - Root CA, Intermediate1 CA, Intermediate2 CA - all known on the server; - Client certs signed by Intermediate1 CA; - Client certs signed by Intermediate2 CA. And you want to allow access only to certificates signed by Intermediate1 CA in some cases, and only certificates signed by Intermediate2 CA in other cases. Is that correct? Such problem seems to be solvable by just looking at $ssl_client_escaped_cert and re-creating the certificate chain from the list of CA certificates known on the server. In simple cases (assuming all intermediate CA DNs are unique) just checking the $ssl_client_i_dn variable would be enough. Does it look reasonable, or I misunderstood something? -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Sun Jan 30 23:55:09 2022 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 31 Jan 2022 02:55:09 +0300 Subject: [PATCH] Add provision to fetch certificate chain from Nginx In-Reply-To: References: Message-ID: Hello! On Sat, Jan 29, 2022 at 01:50:24PM +0000, CHHABRA Mandeep Singh via nginx-devel wrote: > > Sure, intermediate certificates are not required to be known > > by the server and can be provided by the client in the extra > > certificates during SSL/TLS handshake. > > I am not sure what you mean by passing the intermediate CAs in > the extra certificates. They are CA certificates and are passed > as CA certificate chain. 
For example: curl has option --cacert > which can take the entire chain of CA certificates (from > immediate issuer to the trust anchor) > # curl --cacert The "extra" is as in SSL_CTX_add_extra_chain_cert, see https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_add_extra_chain_cert.html. During SSL/TLS handshake client sends the list of certificates to the server. The certificate list is expected to contain the client certificate itself, followed by optional/extra additional certificates (usually not including the root, since including it is just a waste of resources). It is defined by TLSv1.2 sepecification as follows (https://datatracker.ietf.org/doc/html/rfc5246#section-7.4.2): certificate_list This is a sequence (chain) of certificates. The sender's certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it. Because certificate validation requires that root keys be distributed independently, the self-signed certificate that specifies the root certificate authority MAY be omitted from the chain, under the assumption that the remote end must already possess it in order to validate it in any case. Note that the certificate list is used for both server and client certificates. As previously suggested, it is believed that client certificates are almost always used without any additional intermediate certificates being sent by the client. > > Further, it is not really possible to properly retrieve such > > client-provided intermediate certificates after the initial > > handshake: these certificates are not saved to the session > > data and therefore not available after session reuse, see > > 7653:8409f9df6219 > > (http://hg.nginx.org/nginx/rev/8409f9df6219). > > AFAIU, intermediate CA certificates are not supposed to be known > by Nginx anyway. I am not sure if we are referring to > two different things here. 
But with the changes which I > proposed, I am able to retrieve the entire chain from Nginx (also > in cases > when intermediate CAs are not known to the server). During > certificate verification, Nginx creates a verified chain from > the incoming certificate chain and the CA certificate which is > trusted on the web interface. Nginx only needs to know about the > trust anchor (the self-signed CA certificate). Try to retrieve the entire chain in a connection which is using an abbreviated handshake, not an initial one. The information about intermediate certificates which are not known on the server is not preserved by OpenSSL in the session data. As such, it is not possible to re-create the certificate chain in a connection using an abbreviated handshake. This is explicitly documented in the man page of the SSL_get0_verified_chain() function you are using, see https://www.openssl.org/docs/man1.1.1/man3/SSL_get0_verified_chain.html: : If the session is resumed peers do not send certificates so a NULL : pointer is returned by these functions. The client certificate itself is preserved in the session data, so it is available after session resumption. This makes it possible to re-create the certificate chain if no client-provided certificates are used, see the 7653:8409f9df6219 change mentioned above. Hope this helps. > > And you want to allow access only to certificates signed by > > Intermediate1 CA in some cases, and only certificates signed by > > Intermediate2 CA in other cases. Is that correct? > > You are partially correct. Yes, we need to allow/deny a configuration based on the configuration available for the CA.
> There could be different combinations with the chain here > 1- Cert-> Intermediate1 -> Root > 2- Cert-> Intermediate2 -> Root > 3- Cert-> Root > 4- Cert-> Intermediate1 -> Intermediate2 -> Root > > For example:- > Consider the chain "Cert-> Intermediate1 -> Intermediate2 -> Root" > We need to do something based on Intermediate2's configuration and it is possible that the > Intermediate1 is not known to the server. As previously suggested, while technically it is possible, it usually indicates that it might be a good idea to reconsider the configuration and make sure that all intermediate certificates are known on the server. -- Maxim Dounin http://mdounin.ru/ From vadimjunk at gmail.com Mon Jan 31 00:37:10 2022 From: vadimjunk at gmail.com (Vadim Fedorenko) Date: Mon, 31 Jan 2022 00:37:10 +0000 Subject: [PATCH 13 of 15] Tempfiles: Skip cached file if there is already newer tempfile In-Reply-To: <5e64af4c94860cd5cf4b.1643387525@pathfinder> References: <5e64af4c94860cd5cf4b.1643387525@pathfinder> Message-ID: Hi! Thanks for sharing the patches. They are interesting to me and I'm going to test them soon. For this particular patch I would suggest reducing the scope of the mutex locking and skipping it entirely when "serve_tempfile" is not configured. See my version below: diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c index db379450..97982aed 100644 --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -460,6 +460,22 @@ ngx_http_file_cache_open(ngx_http_request_t *r) goto done; } + if (c->serve_tempfile) { + ngx_shmtx_lock(&cache->shpool->mutex); + + if (c->node->updating) { + /* Do not try old cached file, jump directly to cache_lock and use tempfile */ + test = 0; + } + + ngx_shmtx_unlock(&cache->shpool->mutex); + + if (!test) { + rv = NGX_DECLINED; + goto done; + } + } + rc = ngx_http_file_cache_open_file(r, &c->file.name); if (rc != NGX_DECLINED) { return rc; Best wishes, Vadim On Fri, Jan 28, 2022
at 17:00, Jiří Setnička via nginx-devel < nginx-devel at nginx.org>: > # HG changeset patch > # User Jiří Setnička > # Date 1643385660 -3600 > # Fri Jan 28 17:01:00 2022 +0100 > # Node ID 5e64af4c94860cd5cf4b9af5a265d3a087e7b735 > # Parent 0e00ffe7fab3dcf3d3167851237327e5fb9e10b6 > Tempfiles: Skip cached file if there is already newer tempfile > > diff --git a/src/http/ngx_http_file_cache.c > b/src/http/ngx_http_file_cache.c > --- a/src/http/ngx_http_file_cache.c > +++ b/src/http/ngx_http_file_cache.c > @@ -455,6 +455,20 @@ ngx_http_file_cache_open(ngx_http_reques > goto done; > } > > + ngx_shmtx_lock(&cache->shpool->mutex); > + > + if (c->serve_tempfile && c->node->updating) { > + /* Do not try old cached file, jump directly to cache_lock and > use tempfile */ > + test = 0; > + } > + > + ngx_shmtx_unlock(&cache->shpool->mutex); > + > + if (!test) { > + rv = NGX_DECLINED; > + goto done; > + } > + > rc = ngx_http_file_cache_open_file(r, &c->file.name); > if (rc != NGX_DECLINED) { > return rc; > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yugo-horie at jocdn.co.jp Mon Jan 31 02:30:28 2022 From: yugo-horie at jocdn.co.jp (Yugo Horie) Date: Mon, 31 Jan 2022 11:30:28 +0900 Subject: Prioritize `X-Accel-Expires` than `Cache-Control` and `Expires` (#964) In-Reply-To: References: Message-ID: Thanks for reviewing. We've reconsidered the approach, and a new patch is included below. > the following set of headers will result in caching being incorrectly enabled (while it should be disabled due to Set-Cookie header): Sorry, we had overlooked these cases. We realized that the rules are more complex, involving not only the Set-Cookie header but also the Vary asterisk. > A better solution might be to save parsing results somewhere in u->headers_in, We agree.
Thus, we've introduced several variables in our new patch which store the cache validity seconds in xxxx_n, and we also introduced xxxx_c flags which store whether the response is cacheable in each header parsing phase, instead of u->cacheable. > and apply these parsing results in a separate step after parsing all headers, probably somewhere in ngx_http_upstream_process_headers() Moreover, we introduced ngx_http_upstream_cache_validate_regardless_order, which applies the cache behavior based on those parsing results in ngx_http_upstream_process_headers. That said, we admit that these procedures could not be made much simpler. changeset: 8000:7614ced0f04d branch: issue-964 tag: tip user: Yugo Horie date: Sun Jan 30 22:11:42 2022 +0900 files: src/http/ngx_http_upstream.c src/http/ngx_http_upstream.h description: Prioritize cache behavior headers (#964) user: Yugo Horie branch 'issue-964' changed src/http/ngx_http_upstream.c changed src/http/ngx_http_upstream.h diff -r 56ead48cfe88 -r 7614ced0f04d src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Tue Jan 25 18:03:52 2022 +0300 +++ b/src/http/ngx_http_upstream.c Sun Jan 30 22:11:42 2022 +0900 @@ -2348,6 +2348,43 @@ ngx_http_upstream_send_request(r, u, 0); } +static void +ngx_http_upstream_cache_validate_regardless_order(ngx_http_request_t *r, ngx_http_upstream_t *u) +{ + ngx_http_upstream_headers_in_t *uh = &u->headers_in; + if (uh->x_accel_expires != NULL && uh->x_accel_expires_n >= 0) { + if (uh->cookies.elts != NULL) { + u->cacheable = uh->cookies_c; + } else if (uh->vary != NULL) { + u->cacheable = uh->vary_c; + } else { + u->cacheable = uh->x_accel_expires_c; + } + r->cache->valid_sec = ngx_time() + uh->x_accel_expires_n; + r->cache->updating_sec = 0; + r->cache->error_sec = 0; + } else if (uh->cache_control.elts != NULL) { + if (uh->cookies.elts != NULL) { + u->cacheable = uh->cookies_c; + } else if (uh->vary != NULL) { + u->cacheable = uh->vary_c; + } else { + u->cacheable = uh->cache_control_c; + } + if
(uh->cache_control_n > 0) { + r->cache->valid_sec = ngx_time() + uh->cache_control_n; + } + } else if (uh->expires != NULL && uh->expires_n >= 0) { + if (uh->cookies.elts != NULL) { + u->cacheable = uh->cookies_c; + } else if (uh->vary != NULL) { + u->cacheable = uh->vary_c; + } else { + u->cacheable = uh->expires_c; + } + r->cache->valid_sec = ngx_time() + uh->expires_n; + } +} static void ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u) @@ -2469,6 +2506,9 @@ continue; } +#if (NGX_HTTP_CACHE) + ngx_http_upstream_cache_validate_regardless_order(r, u); +#endif break; } @@ -4688,8 +4728,10 @@ *ph = h; #if (NGX_HTTP_CACHE) + u->headers_in.cookies_c = u->cacheable; + if (!(u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_SET_COOKIE)) { - u->cacheable = 0; + u->headers_in.cookies_c = 0; } #endif @@ -4727,6 +4769,8 @@ u_char *p, *start, *last; ngx_int_t n; + u->headers_in.cache_control_c = u->cacheable; + if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_CACHE_CONTROL) { return NGX_OK; } @@ -4746,7 +4790,7 @@ || ngx_strlcasestrn(start, last, (u_char *) "no-store", 8 - 1) != NULL || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != NULL) { - u->cacheable = 0; + u->headers_in.cache_control_c = 0; return NGX_OK; } @@ -4771,16 +4815,16 @@ continue; } - u->cacheable = 0; + u->headers_in.cache_control_c = 0; return NGX_OK; } if (n == 0) { - u->cacheable = 0; + u->headers_in.cache_control_c = 0; return NGX_OK; } - r->cache->valid_sec = ngx_time() + n; + u->headers_in.cache_control_n = n; } p = ngx_strlcasestrn(start, last, (u_char *) "stale-while-revalidate=", @@ -4799,7 +4843,7 @@ continue; } - u->cacheable = 0; + u->headers_in.cache_control_c = 0; return NGX_OK; } @@ -4822,7 +4866,7 @@ continue; } - u->cacheable = 0; + u->headers_in.cache_control_c = 0; return NGX_OK; } @@ -4848,6 +4892,8 @@ { time_t expires; + u->headers_in.expires_c = u->cacheable; + if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_EXPIRES) { return NGX_OK; } 
@@ -4863,11 +4909,10 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); if (expires == NGX_ERROR || expires < ngx_time()) { - u->cacheable = 0; + u->headers_in.expires_c = 0; return NGX_OK; } - - r->cache->valid_sec = expires; + u->headers_in.expires_n = expires - ngx_time(); } #endif @@ -4890,6 +4935,8 @@ size_t len; ngx_int_t n; + u->headers_in.x_accel_expires_c = u->cacheable; + if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_XA_EXPIRES) { return NGX_OK; } @@ -4906,14 +4953,14 @@ switch (n) { case 0: - u->cacheable = 0; + u->headers_in.x_accel_expires_c = 0; /* fall through */ case NGX_ERROR: return NGX_OK; default: - r->cache->valid_sec = ngx_time() + n; + u->headers_in.x_accel_expires_n = n; return NGX_OK; } } @@ -4924,7 +4971,7 @@ n = ngx_atoi(p, len); if (n != NGX_ERROR) { - r->cache->valid_sec = n; + u->headers_in.x_accel_expires_n = n - ngx_time(); } } #endif @@ -5055,6 +5102,8 @@ #if (NGX_HTTP_CACHE) + u->headers_in.vary_c = u->cacheable; + if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_VARY) { return NGX_OK; } @@ -5066,7 +5115,7 @@ if (h->value.len > NGX_HTTP_CACHE_VARY_LEN || (h->value.len == 1 && h->value.data[0] == '*')) { - u->cacheable = 0; + u->headers_in.vary_c = 0; } r->cache->vary = h->value; diff -r 56ead48cfe88 -r 7614ced0f04d src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Tue Jan 25 18:03:52 2022 +0300 +++ b/src/http/ngx_http_upstream.h Sun Jan 30 22:11:42 2022 +0900 @@ -294,6 +294,14 @@ off_t content_length_n; time_t last_modified_time; + ngx_int_t cache_control_n; + ngx_int_t expires_n; + ngx_int_t x_accel_expires_n; + unsigned cache_control_c:1; + unsigned expires_c:1; + unsigned vary_c:1; + unsigned cookies_c:1; + unsigned x_accel_expires_c:1; unsigned connection_close:1; unsigned chunked:1; 2022年1月27日(木) 9:10 Maxim Dounin : > Hello! 
> > On Tue, Jan 25, 2022 at 12:27:58PM +0900, Yugo Horie wrote: > > > changeset: 7997:86f70e48a64a > > branch: issue-964 > > tag: tip > > user: Yugo Horie > > date: Tue Jan 25 12:16:05 2022 +0900 > > files: src/http/ngx_http_upstream.c src/http/ngx_http_upstream.h > > description: > > Prioritize `X-Accel-Expires` than `Cache-Control` and `Expires` (#964) > > > > We introduce 3 flags that indicate to be overwriting cache control > behavior. > > > > * The `overwrite_noncache` switches on the case of not to be cached when > > processing `Cache-Control` and `Expires` headers from upstream. > > > > * The `overwrite_stale_xxx` flags also switch on when it's enabled to use > > stale cache behavior on processing those headers. > > > > * `process_accel_expires` watches these flags, which invalidates their > non- > > cache > > and stale behavior which had been set in other headers to prioritize > > `X-Accel-Expires`. > > > > user: Yugo Horie > > changed src/http/ngx_http_upstream.c > > changed src/http/ngx_http_upstream.h > > > > > > diff -r 5d88e2bf92b3 -r 86f70e48a64a src/http/ngx_http_upstream.c > > --- a/src/http/ngx_http_upstream.c Sat Jan 22 00:28:51 2022 +0300 > > +++ b/src/http/ngx_http_upstream.c Tue Jan 25 12:16:05 2022 +0900 > > @@ -4747,6 +4747,7 @@ > > || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != > > NULL) > > { > > u->cacheable = 0; > > + u->overwrite_noncache = 1; > > return NGX_OK; > > } > > > > @@ -4772,11 +4773,13 @@ > > } > > > > u->cacheable = 0; > > + u->overwrite_noncache = 1; > > return NGX_OK; > > } > > > > if (n == 0) { > > u->cacheable = 0; > > + u->overwrite_noncache = 1; > > return NGX_OK; > > } > > > > @@ -4800,9 +4803,12 @@ > > } > > > > u->cacheable = 0; > > + u->overwrite_noncache = 1; > > return NGX_OK; > > } > > > > + u->overwrite_stale_updating = 1; > > + u->overwrite_stale_error = 1; > > r->cache->updating_sec = n; > > r->cache->error_sec = n; > > } > > @@ -4822,10 +4828,12 @@ > > continue; > > } > > > > + 
u->overwrite_noncache = 1; > > u->cacheable = 0; > > return NGX_OK; > > } > > > > + u->overwrite_stale_error = 1; > > r->cache->error_sec = n; > > } > > } > > @@ -4863,6 +4871,7 @@ > > expires = ngx_parse_http_time(h->value.data, h->value.len); > > > > if (expires == NGX_ERROR || expires < ngx_time()) { > > + u->overwrite_noncache = 1; > > u->cacheable = 0; > > return NGX_OK; > > } > > @@ -4897,6 +4906,15 @@ > > if (r->cache == NULL) { > > return NGX_OK; > > } > > + if (u->overwrite_noncache) { > > + u->cacheable = 1; > > + } > > + if (u->overwrite_stale_updating) { > > + r->cache->updating_sec = 0; > > + } > > + if (u->overwrite_stale_error) { > > + r->cache->error_sec = 0; > > + } > > > > len = h->value.len; > > p = h->value.data; > > diff -r 5d88e2bf92b3 -r 86f70e48a64a src/http/ngx_http_upstream.h > > --- a/src/http/ngx_http_upstream.h Sat Jan 22 00:28:51 2022 +0300 > > +++ b/src/http/ngx_http_upstream.h Tue Jan 25 12:16:05 2022 +0900 > > @@ -386,6 +386,9 @@ > > > > unsigned store:1; > > unsigned cacheable:1; > > + unsigned overwrite_noncache:1; > > + unsigned overwrite_stale_updating:1; > > + unsigned overwrite_stale_error:1; > > unsigned accel:1; > > unsigned ssl:1; > > #if (NGX_HTTP_CACHE) > > Thank you for the patch. > > As already suggested in ticket #2309, the approach taken looks too > fragile. For example, the following set of headers will result in > caching being incorrectly enabled (while it should be disabled due > to Set-Cookie header): > > Set-Cookie: foo=bar > Cache-Control: no-cache > X-Accel-Expires: 100 > > A better solution might be to save parsing results somewhere in > u->headers_in, and apply these parsing results in a separate > step after parsing all headers, probably somewhere in > ngx_http_upstream_process_headers(). Similar implementation can > be seen, for example, in Content-Length and Transfer-Encoding > parsing. 
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vl at nginx.com Mon Jan 31 07:18:51 2022 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 31 Jan 2022 10:18:51 +0300 Subject: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null In-Reply-To: References: <797C7B9D-D6B9-4EBF-ADAB-37AA62903B56@contoso.com> <1CA525F5-EB7A-4F39-9F53-938A47CFDAEF@baidu.com> <7C889653-E7E9-442C-AA80-77877CDD5C09@baidu.com> <024CFE99-B384-429D-984D-ADAFB9FB5F47@baidu.com> <2BA3BAA6-C01A-4B97-8A10-DD77BC56ADE0@baidu.com> Message-ID: On Fri, Jan 28, 2022 at 02:09:31PM +0000, Gao,Yan(媒体云) wrote: > > c->quic is never set on main connection (it is not really needed there). > > ngx_http_v3_init() is first called with main connection, and later it is > > called with _another_ connection that is a stream, and it has c->quic set. > > > ngx_ssl_shutdown() is not supposed to do something on stream > > connections, ssl object is shared with main connection. all necessary > > cleanup will be done by main connection handlers. > > ngx_http_v3_init() is only called in ngx_http_init_connection, as ls->handler. > And then ngx_quic_listen add the main quic connection to udp rbtree. > It call main quic connection read->handler If find connection in > ngx_lookup_udp_connection, else call ls->handler. > But when ngx_http_v3_init() is called by _another_ connection that is a stream? the ngx_http_v3_init() may be called with either main or stream quic connection. for main connection c->quic is NULL, and ngx_quic_run() is invoked, after that it returns. if c->quic is set, then ngx_http_v3_init() proceeds further, and initializes HTTP/3 stream and proceeds to processing requests. 
From arut at nginx.com Mon Jan 31 07:34:05 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 10:34:05 +0300 Subject: [PATCH 0 of 3] QUIC stream states and events In-Reply-To: References: Message-ID: - fixed flow control - fixed stream closure - added HTTP/3 uni stream closure patch From arut at nginx.com Mon Jan 31 07:34:06 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 10:34:06 +0300 Subject: [PATCH 1 of 3] QUIC: introduced explicit stream states In-Reply-To: References: Message-ID: <8dcb9908989401d750b1.1643614446@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1643611562 -10800 # Mon Jan 31 09:46:02 2022 +0300 # Branch quic # Node ID 8dcb9908989401d750b14fe5dccf444a5485c23d # Parent 81a3429db8b00ec9fc476d3687d1cd18088f3365 QUIC: introduced explicit stream states. This allows to eliminate the usage of stream connection event flags for tracking stream state. diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h --- a/src/event/quic/ngx_event_quic.h +++ b/src/event/quic/ngx_event_quic.h @@ -28,6 +28,26 @@ #define NGX_QUIC_STREAM_UNIDIRECTIONAL 0x02 +typedef enum { + NGX_QUIC_STREAM_SEND_READY = 0, + NGX_QUIC_STREAM_SEND_SEND, + NGX_QUIC_STREAM_SEND_DATA_SENT, + NGX_QUIC_STREAM_SEND_DATA_RECVD, + NGX_QUIC_STREAM_SEND_RESET_SENT, + NGX_QUIC_STREAM_SEND_RESET_RECVD +} ngx_quic_stream_send_state_e; + + +typedef enum { + NGX_QUIC_STREAM_RECV_RECV = 0, + NGX_QUIC_STREAM_RECV_SIZE_KNOWN, + NGX_QUIC_STREAM_RECV_DATA_RECVD, + NGX_QUIC_STREAM_RECV_DATA_READ, + NGX_QUIC_STREAM_RECV_RESET_RECVD, + NGX_QUIC_STREAM_RECV_RESET_READ +} ngx_quic_stream_recv_state_e; + + typedef struct { ngx_ssl_t *ssl; @@ -66,6 +86,8 @@ struct ngx_quic_stream_s { ngx_chain_t *in; ngx_chain_t *out; ngx_uint_t cancelable; /* unsigned cancelable:1; */ + ngx_quic_stream_send_state_e send_state; + ngx_quic_stream_recv_state_e recv_state; }; diff --git a/src/event/quic/ngx_event_quic_ack.c 
b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -617,10 +617,13 @@ ngx_quic_resend_frames(ngx_connection_t case NGX_QUIC_FT_STREAM: qs = ngx_quic_find_stream(&qc->streams.tree, f->u.stream.stream_id); - if (qs && qs->connection->write->error) { - /* RESET_STREAM was sent */ - ngx_quic_free_frame(c, f); - break; + if (qs) { + if (qs->send_state == NGX_QUIC_STREAM_SEND_RESET_SENT + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_RECVD) + { + ngx_quic_free_frame(c, f); + break; + } } /* fall through */ diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -192,12 +192,13 @@ ngx_quic_close_streams(ngx_connection_t { qs = (ngx_quic_stream_t *) node; + qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD; + qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT; + rev = qs->connection->read; - rev->error = 1; rev->ready = 1; wev = qs->connection->write; - wev->error = 1; wev->ready = 1; ngx_post_event(rev, &ngx_posted_events); @@ -221,19 +222,22 @@ ngx_quic_close_streams(ngx_connection_t ngx_int_t ngx_quic_reset_stream(ngx_connection_t *c, ngx_uint_t err) { - ngx_event_t *wev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - wev = c->write; + qs = c->quic; - if (wev->error) { + if (qs->send_state == NGX_QUIC_STREAM_SEND_DATA_RECVD + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_SENT + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_RECVD) + { return NGX_OK; } - qs = c->quic; + qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT; + pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -250,9 +254,6 @@ ngx_quic_reset_stream(ngx_connection_t * ngx_quic_queue_frame(qc, frame); - wev->error = 1; - wev->ready = 1; - return NGX_OK; } @@ -260,27 +261,15 @@ ngx_quic_reset_stream(ngx_connection_t * ngx_int_t 
ngx_quic_shutdown_stream(ngx_connection_t *c, int how) { - ngx_quic_stream_t *qs; - - qs = c->quic; - if (how == NGX_RDWR_SHUTDOWN || how == NGX_WRITE_SHUTDOWN) { - if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) - || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0) - { - if (ngx_quic_shutdown_stream_send(c) != NGX_OK) { - return NGX_ERROR; - } + if (ngx_quic_shutdown_stream_send(c) != NGX_OK) { + return NGX_ERROR; } } if (how == NGX_RDWR_SHUTDOWN || how == NGX_READ_SHUTDOWN) { - if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0 - || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0) - { - if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) { - return NGX_ERROR; - } + if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) { + return NGX_ERROR; } } @@ -291,19 +280,21 @@ ngx_quic_shutdown_stream(ngx_connection_ static ngx_int_t ngx_quic_shutdown_stream_send(ngx_connection_t *c) { - ngx_event_t *wev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - wev = c->write; + qs = c->quic; - if (wev->error) { + if (qs->send_state != NGX_QUIC_STREAM_SEND_READY + && qs->send_state != NGX_QUIC_STREAM_SEND_SEND) + { return NGX_OK; } - qs = c->quic; + qs->send_state = NGX_QUIC_STREAM_SEND_DATA_SENT; + pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -327,8 +318,6 @@ ngx_quic_shutdown_stream_send(ngx_connec ngx_quic_queue_frame(qc, frame); - wev->error = 1; - return NGX_OK; } @@ -336,19 +325,19 @@ ngx_quic_shutdown_stream_send(ngx_connec static ngx_int_t ngx_quic_shutdown_stream_recv(ngx_connection_t *c) { - ngx_event_t *rev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - rev = c->read; + qs = c->quic; - if (rev->pending_eof || rev->error) { + if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV + && qs->recv_state != NGX_QUIC_STREAM_RECV_SIZE_KNOWN) + { return NGX_OK; } - qs = c->quic; pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -371,8 +360,6 @@ 
ngx_quic_shutdown_stream_recv(ngx_connec ngx_quic_queue_frame(qc, frame); - rev->error = 1; - return NGX_OK; } @@ -690,9 +677,13 @@ ngx_quic_create_stream(ngx_connection_t if (id & NGX_QUIC_STREAM_UNIDIRECTIONAL) { if (id & NGX_QUIC_STREAM_SERVER_INITIATED) { qs->send_max_data = qc->ctp.initial_max_stream_data_uni; + qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ; + qs->send_state = NGX_QUIC_STREAM_SEND_READY; } else { qs->recv_max_data = qc->tp.initial_max_stream_data_uni; + qs->recv_state = NGX_QUIC_STREAM_RECV_RECV; + qs->send_state = NGX_QUIC_STREAM_SEND_DATA_RECVD; } } else { @@ -704,6 +695,9 @@ ngx_quic_create_stream(ngx_connection_t qs->send_max_data = qc->ctp.initial_max_stream_data_bidi_local; qs->recv_max_data = qc->tp.initial_max_stream_data_bidi_remote; } + + qs->recv_state = NGX_QUIC_STREAM_RECV_RECV; + qs->send_state = NGX_QUIC_STREAM_SEND_READY; } qs->recv_window = qs->recv_max_data; @@ -744,26 +738,16 @@ ngx_quic_stream_recv(ngx_connection_t *c pc = qs->parent; rev = c->read; - if (rev->error) { + if (qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_RECVD + || qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_READ) + { + qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_READ; + rev->error = 1; return NGX_ERROR; } - ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic stream id:0x%xL recv eof:%d buf:%uz", - qs->id, rev->pending_eof, size); - - if (qs->in == NULL || qs->in->buf->sync) { - rev->ready = 0; - - if (qs->recv_offset == qs->final_size) { - rev->eof = 1; - return 0; - } - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, - "quic stream id:0x%xL recv() not ready", qs->id); - return NGX_AGAIN; - } + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic stream id:0x%xL recv buf:%uz", qs->id, size); in = ngx_quic_read_chain(pc, &qs->in, size); if (in == NGX_CHAIN_ERROR) { @@ -780,8 +764,23 @@ ngx_quic_stream_recv(ngx_connection_t *c ngx_quic_free_chain(pc, in); - if (qs->in == NULL) { - rev->ready = rev->pending_eof; + if (len == 0) { + rev->ready = 0; + + 
if (qs->recv_state == NGX_QUIC_STREAM_RECV_SIZE_KNOWN + && qs->recv_offset == qs->final_size) + { + qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ; + } + + if (qs->recv_state == NGX_QUIC_STREAM_RECV_DATA_READ) { + rev->eof = 1; + return 0; + } + + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, + "quic stream id:0x%xL recv() not ready", qs->id); + return NGX_AGAIN; } ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, @@ -839,10 +838,15 @@ ngx_quic_stream_send_chain(ngx_connectio qc = ngx_quic_get_connection(pc); wev = c->write; - if (wev->error) { + if (qs->send_state != NGX_QUIC_STREAM_SEND_READY + && qs->send_state != NGX_QUIC_STREAM_SEND_SEND) + { + wev->error = 1; return NGX_CHAIN_ERROR; } + qs->send_state = NGX_QUIC_STREAM_SEND_SEND; + flow = ngx_quic_max_stream_flow(c); if (flow == 0) { wev->ready = 0; @@ -1051,9 +1055,9 @@ ngx_quic_handle_stream_frame(ngx_connect sc = qs->connection; - rev = sc->read; - - if (rev->error) { + if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV + && qs->recv_state != NGX_QUIC_STREAM_RECV_SIZE_KNOWN) + { return NGX_OK; } @@ -1086,8 +1090,8 @@ ngx_quic_handle_stream_frame(ngx_connect return NGX_ERROR; } - rev->pending_eof = 1; qs->final_size = last; + qs->recv_state = NGX_QUIC_STREAM_RECV_SIZE_KNOWN; } if (ngx_quic_write_chain(c, &qs->in, frame->data, f->length, @@ -1098,6 +1102,7 @@ ngx_quic_handle_stream_frame(ngx_connect } if (f->offset == qs->recv_offset) { + rev = sc->read; rev->ready = 1; if (rev->active) { @@ -1273,11 +1278,15 @@ ngx_quic_handle_reset_stream_frame(ngx_c return NGX_OK; } - sc = qs->connection; + if (qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_RECVD + || qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_READ) + { + return NGX_OK; + } - rev = sc->read; - rev->error = 1; - rev->ready = 1; + qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD; + + sc = qs->connection; if (ngx_quic_control_flow(sc, f->final_size) != NGX_OK) { return NGX_ERROR; @@ -1299,6 +1308,9 @@ ngx_quic_handle_reset_stream_frame(ngx_c return NGX_ERROR; } 
+ rev = sc->read; + rev->ready = 1; + if (rev->active) { ngx_post_event(rev, &ngx_posted_events); } @@ -1341,6 +1353,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c wev = qs->connection->write; if (wev->active) { + wev->ready = 1; ngx_post_event(wev, &ngx_posted_events); } @@ -1413,11 +1426,9 @@ static ngx_int_t ngx_quic_control_flow(ngx_connection_t *c, uint64_t last) { uint64_t len; - ngx_event_t *rev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - rev = c->read; qs = c->quic; qc = ngx_quic_get_connection(qs->parent); @@ -1434,7 +1445,9 @@ ngx_quic_control_flow(ngx_connection_t * qs->recv_last += len; - if (!rev->error && qs->recv_last > qs->recv_max_data) { + if (qs->recv_state == NGX_QUIC_STREAM_RECV_RECV + && qs->recv_last > qs->recv_max_data) + { qc->error = NGX_QUIC_ERR_FLOW_CONTROL_ERROR; return NGX_ERROR; } @@ -1454,12 +1467,10 @@ static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last) { uint64_t len; - ngx_event_t *rev; ngx_connection_t *pc; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - rev = c->read; qs = c->quic; pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -1475,9 +1486,7 @@ ngx_quic_update_flow(ngx_connection_t *c qs->recv_offset += len; - if (!rev->pending_eof && !rev->error - && qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2) - { + if (qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2) { if (ngx_quic_update_max_stream_data(c) != NGX_OK) { return NGX_ERROR; } @@ -1510,6 +1519,10 @@ ngx_quic_update_max_stream_data(ngx_conn pc = qs->parent; qc = ngx_quic_get_connection(pc); + if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV) { + return NGX_OK; + } + recv_max_data = qs->recv_offset + qs->recv_window; if (qs->recv_max_data == recv_max_data) { From arut at nginx.com Mon Jan 31 07:34:07 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 10:34:07 +0300 Subject: [PATCH 2 of 3] HTTP/3: proper uni stream closure detection In-Reply-To: References: Message-ID: # HG changeset patch # User 
Roman Arutyunyan # Date 1643611590 -10800 # Mon Jan 31 09:46:30 2022 +0300 # Branch quic # Node ID d3c6dea9454c48ded14b8c087dffc4dea46f78ef # Parent 8dcb9908989401d750b14fe5dccf444a5485c23d HTTP/3: proper uni stream closure detection. Previously, closure detection for server-initiated uni streams was not properly implemented. Instead, HTTP/3 code relied on QUIC code posting the read event and setting rev->error when it needed to close the stream. Then, regular uni stream read handler called c->recv() and received error, which closed the stream. This was an ad-hoc solution. If, for whatever reason, the read handler was called earlier, c->recv() would return 0, which would also close the stream. Now server-initiated uni streams have a separate read event handler for tracking stream closure. The handler calls c->recv(), which normally returns 0, but may return error in case of closure. diff --git a/src/http/v3/ngx_http_v3_uni.c b/src/http/v3/ngx_http_v3_uni.c --- a/src/http/v3/ngx_http_v3_uni.c +++ b/src/http/v3/ngx_http_v3_uni.c @@ -26,6 +26,7 @@ typedef struct { static void ngx_http_v3_close_uni_stream(ngx_connection_t *c); static void ngx_http_v3_uni_read_handler(ngx_event_t *rev); +static void ngx_http_v3_dummy_read_handler(ngx_event_t *wev); static void ngx_http_v3_dummy_write_handler(ngx_event_t *wev); static void ngx_http_v3_push_cleanup(void *data); static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c, @@ -252,6 +253,32 @@ failed: static void +ngx_http_v3_dummy_read_handler(ngx_event_t *rev) +{ + u_char ch; + ngx_connection_t *c; + + c = rev->data; + + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 dummy read handler"); + + if (rev->ready) { + if (c->recv(c, &ch, 1) != 0) { + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, NULL); + ngx_http_v3_close_uni_stream(c); + return; + } + } + + if (ngx_handle_read_event(rev, 0) != NGX_OK) { + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR, + NULL); + 
ngx_http_v3_close_uni_stream(c); + } +} + + +static void ngx_http_v3_dummy_write_handler(ngx_event_t *wev) { ngx_connection_t *c; @@ -393,7 +420,7 @@ ngx_http_v3_get_uni_stream(ngx_connectio sc->data = us; - sc->read->handler = ngx_http_v3_uni_read_handler; + sc->read->handler = ngx_http_v3_dummy_read_handler; sc->write->handler = ngx_http_v3_dummy_write_handler; if (index >= 0) { @@ -409,6 +436,8 @@ ngx_http_v3_get_uni_stream(ngx_connectio goto failed; } + ngx_post_event(sc->read, &ngx_posted_events); + return sc; failed: From arut at nginx.com Mon Jan 31 07:34:08 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 10:34:08 +0300 Subject: [PATCH 3 of 3] QUIC: stream event setting function In-Reply-To: References: Message-ID: <9f5c59800a9894aad00b.1643614448@arut-laptop> # HG changeset patch # User Roman Arutyunyan # Date 1643187691 -10800 # Wed Jan 26 12:01:31 2022 +0300 # Branch quic # Node ID 9f5c59800a9894aad00b06df93ec454aab97372d # Parent d3c6dea9454c48ded14b8c087dffc4dea46f78ef QUIC: stream event setting function. The function ngx_quic_set_event() is now called instead of posting events directly. 
diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -34,6 +34,7 @@ static ngx_int_t ngx_quic_control_flow(n static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last); static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c); static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c); +static void ngx_quic_set_event(ngx_event_t *ev); ngx_connection_t * @@ -156,7 +157,6 @@ ngx_quic_close_streams(ngx_connection_t { ngx_pool_t *pool; ngx_queue_t *q; - ngx_event_t *rev, *wev; ngx_rbtree_t *tree; ngx_rbtree_node_t *node; ngx_quic_stream_t *qs; @@ -195,17 +195,8 @@ ngx_quic_close_streams(ngx_connection_t qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD; qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT; - rev = qs->connection->read; - rev->ready = 1; - - wev = qs->connection->write; - wev->ready = 1; - - ngx_post_event(rev, &ngx_posted_events); - - if (rev->timer_set) { - ngx_del_timer(rev); - } + ngx_quic_set_event(qs->connection->read); + ngx_quic_set_event(qs->connection->write); #if (NGX_DEBUG) ns++; @@ -1024,7 +1015,6 @@ ngx_quic_handle_stream_frame(ngx_connect ngx_quic_frame_t *frame) { uint64_t last; - ngx_event_t *rev; ngx_connection_t *sc; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1102,12 +1092,7 @@ ngx_quic_handle_stream_frame(ngx_connect } if (f->offset == qs->recv_offset) { - rev = sc->read; - rev->ready = 1; - - if (rev->active) { - ngx_post_event(rev, &ngx_posted_events); - } + ngx_quic_set_event(sc->read); } return NGX_OK; @@ -1118,7 +1103,6 @@ ngx_int_t ngx_quic_handle_max_data_frame(ngx_connection_t *c, ngx_quic_max_data_frame_t *f) { - ngx_event_t *wev; ngx_rbtree_t *tree; ngx_rbtree_node_t *node; ngx_quic_stream_t *qs; @@ -1140,12 +1124,7 @@ ngx_quic_handle_max_data_frame(ngx_conne node = ngx_rbtree_next(tree, node)) { qs = (ngx_quic_stream_t *) node; - wev = 
qs->connection->write; - - if (wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->write); } } @@ -1206,7 +1185,6 @@ ngx_quic_handle_max_stream_data_frame(ng ngx_quic_header_t *pkt, ngx_quic_max_stream_data_frame_t *f) { uint64_t sent; - ngx_event_t *wev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1236,12 +1214,7 @@ ngx_quic_handle_max_stream_data_frame(ng sent = qs->connection->sent; if (sent >= qs->send_max_data) { - wev = qs->connection->write; - - if (wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->write); } qs->send_max_data = f->limit; @@ -1254,7 +1227,6 @@ ngx_int_t ngx_quic_handle_reset_stream_frame(ngx_connection_t *c, ngx_quic_header_t *pkt, ngx_quic_reset_stream_frame_t *f) { - ngx_event_t *rev; ngx_connection_t *sc; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1308,12 +1280,7 @@ ngx_quic_handle_reset_stream_frame(ngx_c return NGX_ERROR; } - rev = sc->read; - rev->ready = 1; - - if (rev->active) { - ngx_post_event(rev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->read); return NGX_OK; } @@ -1323,7 +1290,6 @@ ngx_int_t ngx_quic_handle_stop_sending_frame(ngx_connection_t *c, ngx_quic_header_t *pkt, ngx_quic_stop_sending_frame_t *f) { - ngx_event_t *wev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1350,12 +1316,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c return NGX_ERROR; } - wev = qs->connection->write; - - if (wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); - } + ngx_quic_set_event(qs->connection->write); return NGX_OK; } @@ -1394,7 +1355,6 @@ void ngx_quic_handle_stream_ack(ngx_connection_t *c, ngx_quic_frame_t *f) { uint64_t sent, unacked; - ngx_event_t *wev; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; @@ -1405,13 +1365,11 @@ ngx_quic_handle_stream_ack(ngx_connectio return; } - wev = qs->connection->write; sent = qs->connection->sent; 
unacked = sent - qs->acked; - if (unacked >= qc->conf->stream_buffer_size && wev->active) { - wev->ready = 1; - ngx_post_event(wev, &ngx_posted_events); + if (unacked >= qc->conf->stream_buffer_size) { + ngx_quic_set_event(qs->connection->write); } qs->acked += f->u.stream.length; @@ -1585,6 +1543,17 @@ ngx_quic_update_max_data(ngx_connection_ } +static void +ngx_quic_set_event(ngx_event_t *ev) +{ + ev->ready = 1; + + if (ev->active) { + ngx_post_event(ev, &ngx_posted_events); + } +} + + ngx_int_t ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags) { From vl at nginx.com Mon Jan 31 10:18:32 2022 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 31 Jan 2022 13:18:32 +0300 Subject: [PATCH 1 of 3] QUIC: introduced explicit stream states In-Reply-To: <8dcb9908989401d750b1.1643614446@arut-laptop> References: <8dcb9908989401d750b1.1643614446@arut-laptop> Message-ID: On Mon, Jan 31, 2022 at 10:34:06AM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1643611562 -10800 > # Mon Jan 31 09:46:02 2022 +0300 > # Branch quic > # Node ID 8dcb9908989401d750b14fe5dccf444a5485c23d > # Parent 81a3429db8b00ec9fc476d3687d1cd18088f3365 > QUIC: introduced explicit stream states. > > This allows to eliminate the usage of stream connection event flags for tracking > stream state. 
> > diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h > > --- a/src/event/quic/ngx_event_quic.h > > +++ b/src/event/quic/ngx_event_quic.h > > @@ -28,6 +28,26 @@ > > #define NGX_QUIC_STREAM_UNIDIRECTIONAL 0x02 > > > > > > +typedef enum { > > + NGX_QUIC_STREAM_SEND_READY = 0, > > + NGX_QUIC_STREAM_SEND_SEND, > > + NGX_QUIC_STREAM_SEND_DATA_SENT, > > + NGX_QUIC_STREAM_SEND_DATA_RECVD, > > + NGX_QUIC_STREAM_SEND_RESET_SENT, > > + NGX_QUIC_STREAM_SEND_RESET_RECVD > > +} ngx_quic_stream_send_state_e; > > + > > + > > +typedef enum { > > + NGX_QUIC_STREAM_RECV_RECV = 0, > > + NGX_QUIC_STREAM_RECV_SIZE_KNOWN, > > + NGX_QUIC_STREAM_RECV_DATA_RECVD, > > + NGX_QUIC_STREAM_RECV_DATA_READ, > > + NGX_QUIC_STREAM_RECV_RESET_RECVD, > > + NGX_QUIC_STREAM_RECV_RESET_READ > > +} ngx_quic_stream_recv_state_e; > > + > > + > > typedef struct { > > ngx_ssl_t *ssl; > > > > @@ -66,6 +86,8 @@ struct ngx_quic_stream_s { > > ngx_chain_t *in; > > ngx_chain_t *out; > > ngx_uint_t cancelable; /* unsigned cancelable:1; */ > > + ngx_quic_stream_send_state_e send_state; > > + ngx_quic_stream_recv_state_e recv_state; > > }; let's fix this little style inconsistency in a separate patch by moving all struct stuff to the right. [..] > @@ -780,8 +764,23 @@ ngx_quic_stream_recv(ngx_connection_t *c > > ngx_quic_free_chain(pc, in); > > - if (qs->in == NULL) { > - rev->ready = rev->pending_eof; > + if (len == 0) { this also covers the case when ngx_quic_stream_recv() is called with a zero-length buffer. Not sure what semantics should be implemented. man 2 read says: If count is zero, read() may detect the errors described below. In the absence of any errors, or if read() does not check for errors, a read() with a count of 0 returns zero and has no other effects. i.e. if we have data in the buffer, but we are called with a count of zero, we probably should not change state, and should handle this case separately.
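The read() behaviour quoted above from man 2 read can be checked with a small standalone program; this is plain POSIX code for illustration, not nginx code:

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* A zero-count read() returns 0 and has no other effects: the buffered
 * data is not consumed, and no EOF/error state is set on the fd. */
static int
zero_read_demo(void)
{
    int   fds[2];
    char  buf[8];

    if (pipe(fds) != 0) {
        return -1;
    }

    if (write(fds[1], "hi", 2) != 2) {
        return -1;
    }

    if (read(fds[0], buf, 0) != 0) {          /* count == 0: returns 0 */
        return -1;
    }

    if (read(fds[0], buf, sizeof(buf)) != 2   /* data was not consumed */
        || memcmp(buf, "hi", 2) != 0)
    {
        return -1;
    }

    close(fds[0]);
    close(fds[1]);

    return 0;
}
```

Under these semantics, a recv() called with size 0 while data is buffered should simply return 0 without touching the stream state.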
> + rev->ready = 0; > + > + if (qs->recv_state == NGX_QUIC_STREAM_RECV_SIZE_KNOWN > + && qs->recv_offset == qs->final_size) > + { > + qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ; > + } > + > + if (qs->recv_state == NGX_QUIC_STREAM_RECV_DATA_READ) { > + rev->eof = 1; > + return 0; > + } > + > + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, > + "quic stream id:0x%xL recv() not ready", qs->id); > + return NGX_AGAIN; > } > > ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, side note: while looking at state transitions, i've realized we never send STREAM_DATA_BLOCKED and DATA_BLOCKED. From vl at nginx.com Mon Jan 31 10:40:36 2022 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 31 Jan 2022 13:40:36 +0300 Subject: [PATCH 2 of 3] HTTP/3: proper uni stream closure detection In-Reply-To: References: Message-ID: On Mon, Jan 31, 2022 at 10:34:07AM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1643611590 -10800 > # Mon Jan 31 09:46:30 2022 +0300 > # Branch quic > # Node ID d3c6dea9454c48ded14b8c087dffc4dea46f78ef > # Parent 8dcb9908989401d750b14fe5dccf444a5485c23d > HTTP/3: proper uni stream closure detection. > > Previously, closure detection for server-initiated uni streams was not properly > implemented. Instead, HTTP/3 code relied on QUIC code posting the read event > and setting rev->error when it needed to close the stream. Then, regular > uni stream read handler called c->recv() and received error, which closed the > stream. This was an ad-hoc solution. If, for whatever reason, the read > handler was called earlier, c->recv() would return 0, which would also close > the stream. > > Now server-initiated uni streams have a separate read event handler for > tracking stream closure. The handler calls c->recv(), which normally returns > 0, but may return error in case of closure. 
> > diff --git a/src/http/v3/ngx_http_v3_uni.c b/src/http/v3/ngx_http_v3_uni.c > --- a/src/http/v3/ngx_http_v3_uni.c > +++ b/src/http/v3/ngx_http_v3_uni.c > @@ -26,6 +26,7 @@ typedef struct { > > static void ngx_http_v3_close_uni_stream(ngx_connection_t *c); > static void ngx_http_v3_uni_read_handler(ngx_event_t *rev); > +static void ngx_http_v3_dummy_read_handler(ngx_event_t *wev); > static void ngx_http_v3_dummy_write_handler(ngx_event_t *wev); > static void ngx_http_v3_push_cleanup(void *data); > static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c, > @@ -252,6 +253,32 @@ failed: > > > static void > +ngx_http_v3_dummy_read_handler(ngx_event_t *rev) should it be ngx_http_v3_uni_dummy_read_handler? > +{ > + u_char ch; > + ngx_connection_t *c; > + > + c = rev->data; > + > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 dummy read handler"); > + > + if (rev->ready) { > + if (c->recv(c, &ch, 1) != 0) { > + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, NULL); > + ngx_http_v3_close_uni_stream(c); > + return; > + } > + } > + > + if (ngx_handle_read_event(rev, 0) != NGX_OK) { > + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR, > + NULL); > + ngx_http_v3_close_uni_stream(c); > + } > +} > + > + > +static void > ngx_http_v3_dummy_write_handler(ngx_event_t *wev) > { > ngx_connection_t *c; > @@ -393,7 +420,7 @@ ngx_http_v3_get_uni_stream(ngx_connectio > > sc->data = us; > > - sc->read->handler = ngx_http_v3_uni_read_handler; > + sc->read->handler = ngx_http_v3_dummy_read_handler; > sc->write->handler = ngx_http_v3_dummy_write_handler; > > if (index >= 0) { > @@ -409,6 +436,8 @@ ngx_http_v3_get_uni_stream(ngx_connectio > goto failed; > } > > + ngx_post_event(sc->read, &ngx_posted_events); > + > return sc; > > failed: Looks ok From vl at nginx.com Mon Jan 31 12:18:44 2022 From: vl at nginx.com (Vladimir Homutov) Date: Mon, 31 Jan 2022 15:18:44 +0300 Subject: [PATCH 3 of 3] QUIC: stream event setting function 
In-Reply-To: <9f5c59800a9894aad00b.1643614448@arut-laptop> References: <9f5c59800a9894aad00b.1643614448@arut-laptop> Message-ID: On Mon, Jan 31, 2022 at 10:34:08AM +0300, Roman Arutyunyan wrote: > # HG changeset patch > # User Roman Arutyunyan > # Date 1643187691 -10800 > # Wed Jan 26 12:01:31 2022 +0300 > # Branch quic > # Node ID 9f5c59800a9894aad00b06df93ec454aab97372d > # Parent d3c6dea9454c48ded14b8c087dffc4dea46f78ef > QUIC: stream event setting function. > > The function ngx_quic_set_event() is now called instead of posting events > directly. > > diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c > --- a/src/event/quic/ngx_event_quic_streams.c > +++ b/src/event/quic/ngx_event_quic_streams.c > @@ -34,6 +34,7 @@ static ngx_int_t ngx_quic_control_flow(n > static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last); > static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c); > static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c); > +static void ngx_quic_set_event(ngx_event_t *ev); > > > ngx_connection_t * > @@ -156,7 +157,6 @@ ngx_quic_close_streams(ngx_connection_t > { > ngx_pool_t *pool; > ngx_queue_t *q; > - ngx_event_t *rev, *wev; > ngx_rbtree_t *tree; > ngx_rbtree_node_t *node; > ngx_quic_stream_t *qs; > @@ -195,17 +195,8 @@ ngx_quic_close_streams(ngx_connection_t > qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD; > qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT; > > - rev = qs->connection->read; > - rev->ready = 1; > - > - wev = qs->connection->write; > - wev->ready = 1; > - > - ngx_post_event(rev, &ngx_posted_events); > - > - if (rev->timer_set) { > - ngx_del_timer(rev); > - } > + ngx_quic_set_event(qs->connection->read); > + ngx_quic_set_event(qs->connection->write); > > #if (NGX_DEBUG) > ns++; > @@ -1024,7 +1015,6 @@ ngx_quic_handle_stream_frame(ngx_connect > ngx_quic_frame_t *frame) > { > uint64_t last; > - ngx_event_t *rev; > ngx_connection_t *sc; > ngx_quic_stream_t 
*qs; > ngx_quic_connection_t *qc; > @@ -1102,12 +1092,7 @@ ngx_quic_handle_stream_frame(ngx_connect > } > > if (f->offset == qs->recv_offset) { > - rev = sc->read; > - rev->ready = 1; > - > - if (rev->active) { > - ngx_post_event(rev, &ngx_posted_events); > - } > + ngx_quic_set_event(sc->read); > } > > return NGX_OK; > @@ -1118,7 +1103,6 @@ ngx_int_t > ngx_quic_handle_max_data_frame(ngx_connection_t *c, > ngx_quic_max_data_frame_t *f) > { > - ngx_event_t *wev; > ngx_rbtree_t *tree; > ngx_rbtree_node_t *node; > ngx_quic_stream_t *qs; > @@ -1140,12 +1124,7 @@ ngx_quic_handle_max_data_frame(ngx_conne > node = ngx_rbtree_next(tree, node)) > { > qs = (ngx_quic_stream_t *) node; > - wev = qs->connection->write; > - > - if (wev->active) { > - wev->ready = 1; > - ngx_post_event(wev, &ngx_posted_events); > - } > + ngx_quic_set_event(qs->connection->write); > } > } > > @@ -1206,7 +1185,6 @@ ngx_quic_handle_max_stream_data_frame(ng > ngx_quic_header_t *pkt, ngx_quic_max_stream_data_frame_t *f) > { > uint64_t sent; > - ngx_event_t *wev; > ngx_quic_stream_t *qs; > ngx_quic_connection_t *qc; > > @@ -1236,12 +1214,7 @@ ngx_quic_handle_max_stream_data_frame(ng > sent = qs->connection->sent; > > if (sent >= qs->send_max_data) { > - wev = qs->connection->write; > - > - if (wev->active) { > - wev->ready = 1; > - ngx_post_event(wev, &ngx_posted_events); > - } > + ngx_quic_set_event(qs->connection->write); > } > > qs->send_max_data = f->limit; > @@ -1254,7 +1227,6 @@ ngx_int_t > ngx_quic_handle_reset_stream_frame(ngx_connection_t *c, > ngx_quic_header_t *pkt, ngx_quic_reset_stream_frame_t *f) > { > - ngx_event_t *rev; > ngx_connection_t *sc; > ngx_quic_stream_t *qs; > ngx_quic_connection_t *qc; > @@ -1308,12 +1280,7 @@ ngx_quic_handle_reset_stream_frame(ngx_c > return NGX_ERROR; > } > > - rev = sc->read; > - rev->ready = 1; > - > - if (rev->active) { > - ngx_post_event(rev, &ngx_posted_events); > - } > + ngx_quic_set_event(qs->connection->read); > > return NGX_OK; > } > @@ -1323,7 
+1290,6 @@ ngx_int_t > ngx_quic_handle_stop_sending_frame(ngx_connection_t *c, > ngx_quic_header_t *pkt, ngx_quic_stop_sending_frame_t *f) > { > - ngx_event_t *wev; > ngx_quic_stream_t *qs; > ngx_quic_connection_t *qc; > > @@ -1350,12 +1316,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c > return NGX_ERROR; > } > > - wev = qs->connection->write; > - > - if (wev->active) { > - wev->ready = 1; > - ngx_post_event(wev, &ngx_posted_events); > - } > + ngx_quic_set_event(qs->connection->write); > > return NGX_OK; > } > @@ -1394,7 +1355,6 @@ void > ngx_quic_handle_stream_ack(ngx_connection_t *c, ngx_quic_frame_t *f) > { > uint64_t sent, unacked; > - ngx_event_t *wev; > ngx_quic_stream_t *qs; > ngx_quic_connection_t *qc; > > @@ -1405,13 +1365,11 @@ ngx_quic_handle_stream_ack(ngx_connectio > return; > } > > - wev = qs->connection->write; > sent = qs->connection->sent; > unacked = sent - qs->acked; > > - if (unacked >= qc->conf->stream_buffer_size && wev->active) { > - wev->ready = 1; > - ngx_post_event(wev, &ngx_posted_events); > + if (unacked >= qc->conf->stream_buffer_size) { > + ngx_quic_set_event(qs->connection->write); > } > > qs->acked += f->u.stream.length; > @@ -1585,6 +1543,17 @@ ngx_quic_update_max_data(ngx_connection_ > } > > > +static void > +ngx_quic_set_event(ngx_event_t *ev) > +{ > + ev->ready = 1; > + > + if (ev->active) { > + ngx_post_event(ev, &ngx_posted_events); > + } > +} > + > + > ngx_int_t > ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags) > { Looks ok_______________________________________________ From arut at nginx.com Mon Jan 31 15:10:54 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 18:10:54 +0300 Subject: [PATCH 1 of 3] QUIC: introduced explicit stream states In-Reply-To: References: <8dcb9908989401d750b1.1643614446@arut-laptop> Message-ID: <20220131151054.nys2cnjvdcgblann@Romans-MacBook-Pro.local> On Mon, Jan 31, 2022 at 01:18:32PM +0300, Vladimir Homutov wrote: > On Mon, Jan 31, 2022 at 10:34:06AM +0300, Roman 
Arutyunyan wrote: > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1643611562 -10800 > > # Mon Jan 31 09:46:02 2022 +0300 > > # Branch quic > > # Node ID 8dcb9908989401d750b14fe5dccf444a5485c23d > > # Parent 81a3429db8b00ec9fc476d3687d1cd18088f3365 > > QUIC: introduced explicit stream states. > > > > This allows to eliminate the usage of stream connection event flags for tracking > > stream state. > > > > diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h > > --- a/src/event/quic/ngx_event_quic.h > > +++ b/src/event/quic/ngx_event_quic.h > > @@ -28,6 +28,26 @@ > > #define NGX_QUIC_STREAM_UNIDIRECTIONAL 0x02 > > > > > > +typedef enum { > > + NGX_QUIC_STREAM_SEND_READY = 0, > > + NGX_QUIC_STREAM_SEND_SEND, > > + NGX_QUIC_STREAM_SEND_DATA_SENT, > > + NGX_QUIC_STREAM_SEND_DATA_RECVD, > > + NGX_QUIC_STREAM_SEND_RESET_SENT, > > + NGX_QUIC_STREAM_SEND_RESET_RECVD > > +} ngx_quic_stream_send_state_e; > > + > > + > > +typedef enum { > > + NGX_QUIC_STREAM_RECV_RECV = 0, > > + NGX_QUIC_STREAM_RECV_SIZE_KNOWN, > > + NGX_QUIC_STREAM_RECV_DATA_RECVD, > > + NGX_QUIC_STREAM_RECV_DATA_READ, > > + NGX_QUIC_STREAM_RECV_RESET_RECVD, > > + NGX_QUIC_STREAM_RECV_RESET_READ > > +} ngx_quic_stream_recv_state_e; > > + > > + > > typedef struct { > > ngx_ssl_t *ssl; > > > > @@ -66,6 +86,8 @@ struct ngx_quic_stream_s { > > ngx_chain_t *in; > > ngx_chain_t *out; > > ngx_uint_t cancelable; /* unsigned cancelable:1; */ > > + ngx_quic_stream_send_state_e send_state; > > + ngx_quic_stream_recv_state_e recv_state; > > }; > > let's fix this little style incosistency in a separate patch by moving > all struct stuff to the right. OK, added a patch for this. > [..] > > > @@ -780,8 +764,23 @@ ngx_quic_stream_recv(ngx_connection_t *c > > > > ngx_quic_free_chain(pc, in); > > > > - if (qs->in == NULL) { > > - rev->ready = rev->pending_eof; > > + if (len == 0) { > > this also covers the case when ngx_quic_stream_recv() is called > with zero-length buffer. 
Not sure what semantic should be implemented. > > man 2 read says: > > If count is zero, read() may detect the errors described below. In > the absence of any errors, or if read() does not check for errors, a > read() with a count of 0 returns zero and has no other effects. > > i.e. if we have data in buffer, but we are called with zero, we should > not change state probably and handle this case separately. You're right, now handling this case separately. > > + rev->ready = 0; > > + > > + if (qs->recv_state == NGX_QUIC_STREAM_RECV_SIZE_KNOWN > > + && qs->recv_offset == qs->final_size) > > + { > > + qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ; > > + } > > + > > + if (qs->recv_state == NGX_QUIC_STREAM_RECV_DATA_READ) { > > + rev->eof = 1; > > + return 0; > > + } > > + > > + ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, > > + "quic stream id:0x%xL recv() not ready", qs->id); > > + return NGX_AGAIN; > > } > > > > ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, > > > side note: while looking at state transitions, i've realized we never send > STREAM_DATA_BLOCKED and DATA_BLOCKED. 
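For reference, the receiving-part transitions this patch models (cf. RFC 9000, Section 3.2) can be sketched in isolation. The shortened names below are stand-ins for the NGX_QUIC_STREAM_RECV_* values, and the two helpers only mirror the two transitions visible in the hunks above; this is not the actual nginx code:

```c
/* Subset of the receiving-part states introduced by the patch. */
typedef enum {
    RECV_RECV = 0,      /* NGX_QUIC_STREAM_RECV_RECV        */
    RECV_SIZE_KNOWN,    /* NGX_QUIC_STREAM_RECV_SIZE_KNOWN  */
    RECV_DATA_READ      /* NGX_QUIC_STREAM_RECV_DATA_READ   */
} recv_state_e;

typedef struct {
    recv_state_e   state;
    unsigned long  recv_offset;   /* bytes consumed by the application */
    unsigned long  final_size;    /* known once a FIN has arrived      */
} stream_t;

/* A STREAM frame with FIN makes the final size known. */
static void
on_fin(stream_t *qs, unsigned long final_size)
{
    if (qs->state == RECV_RECV) {
        qs->final_size = final_size;
        qs->state = RECV_SIZE_KNOWN;
    }
}

/* On a read that drains the buffer: once everything up to final_size
 * has been consumed, the stream moves to "Data Read" and recv()
 * reports eof instead of NGX_AGAIN. */
static void
on_drained(stream_t *qs)
{
    if (qs->state == RECV_SIZE_KNOWN && qs->recv_offset == qs->final_size) {
        qs->state = RECV_DATA_READ;
    }
}
```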
> > _______________________________________________ > nginx-devel mailing list -- nginx-devel at nginx.org > To unsubscribe send an email to nginx-devel-leave at nginx.org -- Roman Arutyunyan From arut at nginx.com Mon Jan 31 15:13:59 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 18:13:59 +0300 Subject: [PATCH 2 of 3] HTTP/3: proper uni stream closure detection In-Reply-To: References: Message-ID: <20220131151359.h5qjps4fhpvhyls6@Romans-MacBook-Pro.local> On Mon, Jan 31, 2022 at 01:40:36PM +0300, Vladimir Homutov wrote: > On Mon, Jan 31, 2022 at 10:34:07AM +0300, Roman Arutyunyan wrote: > > # HG changeset patch > > # User Roman Arutyunyan > > # Date 1643611590 -10800 > > # Mon Jan 31 09:46:30 2022 +0300 > > # Branch quic > > # Node ID d3c6dea9454c48ded14b8c087dffc4dea46f78ef > > # Parent 8dcb9908989401d750b14fe5dccf444a5485c23d > > HTTP/3: proper uni stream closure detection. > > > > Previously, closure detection for server-initiated uni streams was not properly > > implemented. Instead, HTTP/3 code relied on QUIC code posting the read event > > and setting rev->error when it needed to close the stream. Then, regular > > uni stream read handler called c->recv() and received error, which closed the > > stream. This was an ad-hoc solution. If, for whatever reason, the read > > handler was called earlier, c->recv() would return 0, which would also close > > the stream. > > > > Now server-initiated uni streams have a separate read event handler for > > tracking stream closure. The handler calls c->recv(), which normally returns > > 0, but may return error in case of closure. 
> > > > diff --git a/src/http/v3/ngx_http_v3_uni.c b/src/http/v3/ngx_http_v3_uni.c > > --- a/src/http/v3/ngx_http_v3_uni.c > > +++ b/src/http/v3/ngx_http_v3_uni.c > > @@ -26,6 +26,7 @@ typedef struct { > > > > static void ngx_http_v3_close_uni_stream(ngx_connection_t *c); > > static void ngx_http_v3_uni_read_handler(ngx_event_t *rev); > > +static void ngx_http_v3_dummy_read_handler(ngx_event_t *wev); > > static void ngx_http_v3_dummy_write_handler(ngx_event_t *wev); > > static void ngx_http_v3_push_cleanup(void *data); > > static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c, > > @@ -252,6 +253,32 @@ failed: > > > > > > static void > > +ngx_http_v3_dummy_read_handler(ngx_event_t *rev) > > should it be ngx_http_v3_uni_dummy_read_handler? The write handler was called ngx_http_v3_dummy_write_handler. Let's rename both of them. > > +{ > > + u_char ch; > > + ngx_connection_t *c; > > + > > + c = rev->data; > > + > > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 dummy read handler"); > > + > > + if (rev->ready) { > > + if (c->recv(c, &ch, 1) != 0) { > > + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, NULL); > > + ngx_http_v3_close_uni_stream(c); > > + return; > > + } > > + } > > + > > + if (ngx_handle_read_event(rev, 0) != NGX_OK) { > > + ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR, > > + NULL); > > + ngx_http_v3_close_uni_stream(c); > > + } > > +} > > + > > + > > +static void > > ngx_http_v3_dummy_write_handler(ngx_event_t *wev) > > { > > ngx_connection_t *c; > > @@ -393,7 +420,7 @@ ngx_http_v3_get_uni_stream(ngx_connectio > > > > sc->data = us; > > > > - sc->read->handler = ngx_http_v3_uni_read_handler; > > + sc->read->handler = ngx_http_v3_dummy_read_handler; > > sc->write->handler = ngx_http_v3_dummy_write_handler; > > > > if (index >= 0) { > > @@ -409,6 +436,8 @@ ngx_http_v3_get_uni_stream(ngx_connectio > > goto failed; > > } > > > > + ngx_post_event(sc->read, &ngx_posted_events); > > + > > return sc; > 
> > > failed: > > Looks ok -- Roman Arutyunyan From arut at nginx.com Mon Jan 31 15:21:03 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 18:21:03 +0300 Subject: [PATCH 0 of 4] QUIC stream states and events In-Reply-To: References: Message-ID: - added zero size handling in stream recv() - renamed http3 uni stream handlers - added style patch From arut at nginx.com Mon Jan 31 15:21:04 2022 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 31 Jan 2022 18:21:04 +0300 Subject: [PATCH 1 of 4] QUIC: introduced explicit stream states In-Reply-To: References: Message-ID: # HG changeset patch # User Roman Arutyunyan # Date 1643611562 -10800 # Mon Jan 31 09:46:02 2022 +0300 # Branch quic # Node ID b42a041d23a2226ec6def395bd0b084889b85473 # Parent 81a3429db8b00ec9fc476d3687d1cd18088f3365 QUIC: introduced explicit stream states. This makes it possible to eliminate the use of stream connection event flags for tracking stream state.
diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h --- a/src/event/quic/ngx_event_quic.h +++ b/src/event/quic/ngx_event_quic.h @@ -28,6 +28,26 @@ #define NGX_QUIC_STREAM_UNIDIRECTIONAL 0x02 +typedef enum { + NGX_QUIC_STREAM_SEND_READY = 0, + NGX_QUIC_STREAM_SEND_SEND, + NGX_QUIC_STREAM_SEND_DATA_SENT, + NGX_QUIC_STREAM_SEND_DATA_RECVD, + NGX_QUIC_STREAM_SEND_RESET_SENT, + NGX_QUIC_STREAM_SEND_RESET_RECVD +} ngx_quic_stream_send_state_e; + + +typedef enum { + NGX_QUIC_STREAM_RECV_RECV = 0, + NGX_QUIC_STREAM_RECV_SIZE_KNOWN, + NGX_QUIC_STREAM_RECV_DATA_RECVD, + NGX_QUIC_STREAM_RECV_DATA_READ, + NGX_QUIC_STREAM_RECV_RESET_RECVD, + NGX_QUIC_STREAM_RECV_RESET_READ +} ngx_quic_stream_recv_state_e; + + typedef struct { ngx_ssl_t *ssl; @@ -66,6 +86,8 @@ struct ngx_quic_stream_s { ngx_chain_t *in; ngx_chain_t *out; ngx_uint_t cancelable; /* unsigned cancelable:1; */ + ngx_quic_stream_send_state_e send_state; + ngx_quic_stream_recv_state_e recv_state; }; diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c --- a/src/event/quic/ngx_event_quic_ack.c +++ b/src/event/quic/ngx_event_quic_ack.c @@ -617,10 +617,13 @@ ngx_quic_resend_frames(ngx_connection_t case NGX_QUIC_FT_STREAM: qs = ngx_quic_find_stream(&qc->streams.tree, f->u.stream.stream_id); - if (qs && qs->connection->write->error) { - /* RESET_STREAM was sent */ - ngx_quic_free_frame(c, f); - break; + if (qs) { + if (qs->send_state == NGX_QUIC_STREAM_SEND_RESET_SENT + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_RECVD) + { + ngx_quic_free_frame(c, f); + break; + } } /* fall through */ diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c --- a/src/event/quic/ngx_event_quic_streams.c +++ b/src/event/quic/ngx_event_quic_streams.c @@ -192,12 +192,13 @@ ngx_quic_close_streams(ngx_connection_t { qs = (ngx_quic_stream_t *) node; + qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD; + qs->send_state = 
NGX_QUIC_STREAM_SEND_RESET_SENT; + rev = qs->connection->read; - rev->error = 1; rev->ready = 1; wev = qs->connection->write; - wev->error = 1; wev->ready = 1; ngx_post_event(rev, &ngx_posted_events); @@ -221,19 +222,22 @@ ngx_quic_close_streams(ngx_connection_t ngx_int_t ngx_quic_reset_stream(ngx_connection_t *c, ngx_uint_t err) { - ngx_event_t *wev; ngx_connection_t *pc; ngx_quic_frame_t *frame; ngx_quic_stream_t *qs; ngx_quic_connection_t *qc; - wev = c->write; + qs = c->quic; - if (wev->error) { + if (qs->send_state == NGX_QUIC_STREAM_SEND_DATA_RECVD + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_SENT + || qs->send_state == NGX_QUIC_STREAM_SEND_RESET_RECVD) + { return NGX_OK; } - qs = c->quic; + qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT; + pc = qs->parent; qc = ngx_quic_get_connection(pc); @@ -250,9 +254,6 @@ ngx_quic_reset_stream(ngx_connection_t * ngx_quic_queue_frame(qc, frame); - wev->error = 1; - wev->ready = 1; - return NGX_OK; } @@ -260,27 +261,15 @@ ngx_quic_reset_stream(ngx_connection_t * ngx_int_t ngx_quic_shutdown_stream(ngx_connection_t *c, int how) { - ngx_quic_stream_t *qs; - - qs = c->quic; - if (how == NGX_RDWR_SHUTDOWN || how == NGX_WRITE_SHUTDOWN) { - if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) - || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0) - { - if (ngx_quic_shutdown_stream_send(c) != NGX_OK) { - return NGX_ERROR; - } + if (ngx_quic_shutdown_stream_send(c) != NGX_OK) { + return NGX_ERROR; } } if (how == NGX_RDWR_SHUTDOWN || how == NGX_READ_SHUTDOWN) { - if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0 - || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0) - { - if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) { - return NGX_ERROR; - } + if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) { + return NGX_ERROR; } } @@ -291,19 +280,21 @@ ngx_quic_shutdown_stream(ngx_connection_ static ngx_int_t ngx_quic_shutdown_stream_send(ngx_connection_t *c) { - ngx_event_t *wev; ngx_connection_t *pc; ngx_quic_frame_t *frame; 
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
-    wev = c->write;
+    qs = c->quic;
 
-    if (wev->error) {
+    if (qs->send_state != NGX_QUIC_STREAM_SEND_READY
+        && qs->send_state != NGX_QUIC_STREAM_SEND_SEND)
+    {
         return NGX_OK;
     }
 
-    qs = c->quic;
+    qs->send_state = NGX_QUIC_STREAM_SEND_DATA_SENT;
+
     pc = qs->parent;
     qc = ngx_quic_get_connection(pc);
 
@@ -327,8 +318,6 @@ ngx_quic_shutdown_stream_send(ngx_connec
 
     ngx_quic_queue_frame(qc, frame);
 
-    wev->error = 1;
-
     return NGX_OK;
 }
 
@@ -336,19 +325,19 @@ ngx_quic_shutdown_stream_send(ngx_connec
 static ngx_int_t
 ngx_quic_shutdown_stream_recv(ngx_connection_t *c)
 {
-    ngx_event_t            *rev;
     ngx_connection_t       *pc;
     ngx_quic_frame_t       *frame;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
-    rev = c->read;
+    qs = c->quic;
 
-    if (rev->pending_eof || rev->error) {
+    if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV
+        && qs->recv_state != NGX_QUIC_STREAM_RECV_SIZE_KNOWN)
+    {
         return NGX_OK;
     }
 
-    qs = c->quic;
     pc = qs->parent;
     qc = ngx_quic_get_connection(pc);
 
@@ -371,8 +360,6 @@ ngx_quic_shutdown_stream_recv(ngx_connec
 
     ngx_quic_queue_frame(qc, frame);
 
-    rev->error = 1;
-
     return NGX_OK;
 }
 
@@ -690,9 +677,13 @@ ngx_quic_create_stream(ngx_connection_t
 
     if (id & NGX_QUIC_STREAM_UNIDIRECTIONAL) {
         if (id & NGX_QUIC_STREAM_SERVER_INITIATED) {
             qs->send_max_data = qc->ctp.initial_max_stream_data_uni;
+            qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ;
+            qs->send_state = NGX_QUIC_STREAM_SEND_READY;
 
         } else {
             qs->recv_max_data = qc->tp.initial_max_stream_data_uni;
+            qs->recv_state = NGX_QUIC_STREAM_RECV_RECV;
+            qs->send_state = NGX_QUIC_STREAM_SEND_DATA_RECVD;
         }
 
     } else {
@@ -704,6 +695,9 @@ ngx_quic_create_stream(ngx_connection_t
             qs->send_max_data = qc->ctp.initial_max_stream_data_bidi_local;
             qs->recv_max_data = qc->tp.initial_max_stream_data_bidi_remote;
         }
+
+        qs->recv_state = NGX_QUIC_STREAM_RECV_RECV;
+        qs->send_state = NGX_QUIC_STREAM_SEND_READY;
     }
 
     qs->recv_window = qs->recv_max_data;
 
@@ -744,25 +738,19 @@ ngx_quic_stream_recv(ngx_connection_t *c
     pc = qs->parent;
     rev = c->read;
 
-    if (rev->error) {
+    if (qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_RECVD
+        || qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_READ)
+    {
+        qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_READ;
+        rev->error = 1;
         return NGX_ERROR;
     }
 
-    ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0,
-                   "quic stream id:0x%xL recv eof:%d buf:%uz",
-                   qs->id, rev->pending_eof, size);
-
-    if (qs->in == NULL || qs->in->buf->sync) {
-        rev->ready = 0;
+    ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
+                   "quic stream id:0x%xL recv buf:%uz", qs->id, size);
 
-        if (qs->recv_offset == qs->final_size) {
-            rev->eof = 1;
-            return 0;
-        }
-
-        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
-                       "quic stream id:0x%xL recv() not ready", qs->id);
-        return NGX_AGAIN;
+    if (size == 0) {
+        return 0;
     }
 
     in = ngx_quic_read_chain(pc, &qs->in, size);
@@ -780,8 +768,23 @@ ngx_quic_stream_recv(ngx_connection_t *c
 
     ngx_quic_free_chain(pc, in);
 
-    if (qs->in == NULL) {
-        rev->ready = rev->pending_eof;
+    if (len == 0) {
+        rev->ready = 0;
+
+        if (qs->recv_state == NGX_QUIC_STREAM_RECV_SIZE_KNOWN
+            && qs->recv_offset == qs->final_size)
+        {
+            qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ;
+        }
+
+        if (qs->recv_state == NGX_QUIC_STREAM_RECV_DATA_READ) {
+            rev->eof = 1;
+            return 0;
+        }
+
+        ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
+                       "quic stream id:0x%xL recv() not ready", qs->id);
+        return NGX_AGAIN;
     }
 
     ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
@@ -839,10 +842,15 @@ ngx_quic_stream_send_chain(ngx_connectio
     qc = ngx_quic_get_connection(pc);
     wev = c->write;
 
-    if (wev->error) {
+    if (qs->send_state != NGX_QUIC_STREAM_SEND_READY
+        && qs->send_state != NGX_QUIC_STREAM_SEND_SEND)
+    {
+        wev->error = 1;
         return NGX_CHAIN_ERROR;
     }
 
+    qs->send_state = NGX_QUIC_STREAM_SEND_SEND;
+
     flow = ngx_quic_max_stream_flow(c);
     if (flow == 0) {
         wev->ready = 0;
@@ -1051,9 +1059,9 @@ ngx_quic_handle_stream_frame(ngx_connect
 
     sc = qs->connection;
 
-    rev = sc->read;
-
-    if (rev->error) {
+    if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV
+        && qs->recv_state != NGX_QUIC_STREAM_RECV_SIZE_KNOWN)
+    {
         return NGX_OK;
     }
 
@@ -1086,8 +1094,8 @@ ngx_quic_handle_stream_frame(ngx_connect
             return NGX_ERROR;
         }
 
-        rev->pending_eof = 1;
         qs->final_size = last;
+        qs->recv_state = NGX_QUIC_STREAM_RECV_SIZE_KNOWN;
     }
 
     if (ngx_quic_write_chain(c, &qs->in, frame->data, f->length,
@@ -1098,6 +1106,7 @@ ngx_quic_handle_stream_frame(ngx_connect
     }
 
     if (f->offset == qs->recv_offset) {
+        rev = sc->read;
         rev->ready = 1;
 
         if (rev->active) {
@@ -1273,11 +1282,15 @@ ngx_quic_handle_reset_stream_frame(ngx_c
         return NGX_OK;
     }
 
-    sc = qs->connection;
+    if (qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_RECVD
+        || qs->recv_state == NGX_QUIC_STREAM_RECV_RESET_READ)
+    {
+        return NGX_OK;
+    }
 
-    rev = sc->read;
-    rev->error = 1;
-    rev->ready = 1;
+    qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD;
+
+    sc = qs->connection;
 
     if (ngx_quic_control_flow(sc, f->final_size) != NGX_OK) {
         return NGX_ERROR;
@@ -1299,6 +1312,9 @@ ngx_quic_handle_reset_stream_frame(ngx_c
         return NGX_ERROR;
     }
 
+    rev = sc->read;
+    rev->ready = 1;
+
     if (rev->active) {
         ngx_post_event(rev, &ngx_posted_events);
     }
@@ -1341,6 +1357,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c
     wev = qs->connection->write;
 
     if (wev->active) {
+        wev->ready = 1;
         ngx_post_event(wev, &ngx_posted_events);
     }
 
@@ -1413,11 +1430,9 @@ static ngx_int_t
 ngx_quic_control_flow(ngx_connection_t *c, uint64_t last)
 {
     uint64_t                len;
-    ngx_event_t            *rev;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
-    rev = c->read;
     qs = c->quic;
     qc = ngx_quic_get_connection(qs->parent);
 
@@ -1434,7 +1449,9 @@ ngx_quic_control_flow(ngx_connection_t *
 
     qs->recv_last += len;
 
-    if (!rev->error && qs->recv_last > qs->recv_max_data) {
+    if (qs->recv_state == NGX_QUIC_STREAM_RECV_RECV
+        && qs->recv_last > qs->recv_max_data)
+    {
         qc->error = NGX_QUIC_ERR_FLOW_CONTROL_ERROR;
         return NGX_ERROR;
     }
@@ -1454,12 +1471,10 @@ static ngx_int_t
 ngx_quic_update_flow(ngx_connection_t *c, uint64_t last)
 {
     uint64_t                len;
-    ngx_event_t            *rev;
     ngx_connection_t       *pc;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
-    rev = c->read;
    qs = c->quic;
     pc = qs->parent;
     qc = ngx_quic_get_connection(pc);
@@ -1475,9 +1490,7 @@ ngx_quic_update_flow(ngx_connection_t *c
 
     qs->recv_offset += len;
 
-    if (!rev->pending_eof && !rev->error
-        && qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2)
-    {
+    if (qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2) {
         if (ngx_quic_update_max_stream_data(c) != NGX_OK) {
             return NGX_ERROR;
         }
@@ -1510,6 +1523,10 @@ ngx_quic_update_max_stream_data(ngx_conn
     pc = qs->parent;
     qc = ngx_quic_get_connection(pc);
 
+    if (qs->recv_state != NGX_QUIC_STREAM_RECV_RECV) {
+        return NGX_OK;
+    }
+
     recv_max_data = qs->recv_offset + qs->recv_window;
 
     if (qs->recv_max_data == recv_max_data) {

From arut at nginx.com  Mon Jan 31 15:21:05 2022
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 31 Jan 2022 18:21:05 +0300
Subject: [PATCH 2 of 4] HTTP/3: proper uni stream closure detection
In-Reply-To: 
References: 
Message-ID: <3436b441239b494e6f63.1643642465@arut-laptop>

# HG changeset patch
# User Roman Arutyunyan
# Date 1643611590 -10800
#      Mon Jan 31 09:46:30 2022 +0300
# Branch quic
# Node ID 3436b441239b494e6f63db3b8d26a71682b4f106
# Parent  b42a041d23a2226ec6def395bd0b084889b85473
HTTP/3: proper uni stream closure detection.

Previously, closure detection for server-initiated uni streams was not
properly implemented.  Instead, HTTP/3 code relied on QUIC code posting
the read event and setting rev->error when it needed to close the stream.
Then, the regular uni stream read handler called c->recv() and received
an error, which closed the stream.  This was an ad-hoc solution.  If, for
whatever reason, the read handler was called earlier, c->recv() would
return 0, which would also close the stream.

Now server-initiated uni streams have a separate read event handler for
tracking stream closure.  The handler calls c->recv(), which normally
returns 0, but may return an error in case of closure.
diff --git a/src/http/v3/ngx_http_v3_uni.c b/src/http/v3/ngx_http_v3_uni.c
--- a/src/http/v3/ngx_http_v3_uni.c
+++ b/src/http/v3/ngx_http_v3_uni.c
@@ -26,7 +26,8 @@ typedef struct {
 
 static void ngx_http_v3_close_uni_stream(ngx_connection_t *c);
 static void ngx_http_v3_uni_read_handler(ngx_event_t *rev);
-static void ngx_http_v3_dummy_write_handler(ngx_event_t *wev);
+static void ngx_http_v3_uni_dummy_read_handler(ngx_event_t *wev);
+static void ngx_http_v3_uni_dummy_write_handler(ngx_event_t *wev);
 static void ngx_http_v3_push_cleanup(void *data);
 static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c,
     ngx_uint_t type);
@@ -68,7 +69,7 @@ ngx_http_v3_init_uni_stream(ngx_connecti
     c->data = us;
 
     c->read->handler = ngx_http_v3_uni_read_handler;
-    c->write->handler = ngx_http_v3_dummy_write_handler;
+    c->write->handler = ngx_http_v3_uni_dummy_write_handler;
 
     ngx_http_v3_uni_read_handler(c->read);
 }
@@ -252,7 +253,33 @@ failed:
 
 
 static void
-ngx_http_v3_dummy_write_handler(ngx_event_t *wev)
+ngx_http_v3_uni_dummy_read_handler(ngx_event_t *rev)
+{
+    u_char             ch;
+    ngx_connection_t  *c;
+
+    c = rev->data;
+
+    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 dummy read handler");
+
+    if (rev->ready) {
+        if (c->recv(c, &ch, 1) != 0) {
+            ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, NULL);
+            ngx_http_v3_close_uni_stream(c);
+            return;
+        }
+    }
+
+    if (ngx_handle_read_event(rev, 0) != NGX_OK) {
+        ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR,
+                                        NULL);
+        ngx_http_v3_close_uni_stream(c);
+    }
+}
+
+
+static void
+ngx_http_v3_uni_dummy_write_handler(ngx_event_t *wev)
 {
     ngx_connection_t  *c;
 
@@ -393,8 +420,8 @@ ngx_http_v3_get_uni_stream(ngx_connectio
 
     sc->data = us;
 
-    sc->read->handler = ngx_http_v3_uni_read_handler;
-    sc->write->handler = ngx_http_v3_dummy_write_handler;
+    sc->read->handler = ngx_http_v3_uni_dummy_read_handler;
+    sc->write->handler = ngx_http_v3_uni_dummy_write_handler;
 
     if (index >= 0) {
         h3c->known_streams[index] = sc;
@@ -409,6 +436,8 @@ ngx_http_v3_get_uni_stream(ngx_connectio
         goto failed;
     }
 
+    ngx_post_event(sc->read, &ngx_posted_events);
+
     return sc;
 
 failed:

From arut at nginx.com  Mon Jan 31 15:21:06 2022
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 31 Jan 2022 18:21:06 +0300
Subject: [PATCH 3 of 4] QUIC: style
In-Reply-To: 
References: 
Message-ID: 

# HG changeset patch
# User Roman Arutyunyan
# Date 1643641743 -10800
#      Mon Jan 31 18:09:03 2022 +0300
# Branch quic
# Node ID cba58cb06b3aee94e7e16a8dc0562a5a4f7d3066
# Parent  3436b441239b494e6f63db3b8d26a71682b4f106
QUIC: style.

diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h
--- a/src/event/quic/ngx_event_quic.h
+++ b/src/event/quic/ngx_event_quic.h
@@ -49,45 +49,45 @@ typedef enum {
 
 
 typedef struct {
-    ngx_ssl_t              *ssl;
+    ngx_ssl_t                  *ssl;
 
-    ngx_flag_t             retry;
-    ngx_flag_t             gso_enabled;
-    ngx_flag_t             disable_active_migration;
-    ngx_msec_t             timeout;
-    ngx_str_t              host_key;
-    size_t                 mtu;
-    size_t                 stream_buffer_size;
-    ngx_uint_t             max_concurrent_streams_bidi;
-    ngx_uint_t             max_concurrent_streams_uni;
-    ngx_uint_t             active_connection_id_limit;
-    ngx_int_t              stream_close_code;
-    ngx_int_t              stream_reject_code_uni;
-    ngx_int_t              stream_reject_code_bidi;
+    ngx_flag_t                 retry;
+    ngx_flag_t                 gso_enabled;
+    ngx_flag_t                 disable_active_migration;
+    ngx_msec_t                 timeout;
+    ngx_str_t                  host_key;
+    size_t                     mtu;
+    size_t                     stream_buffer_size;
+    ngx_uint_t                 max_concurrent_streams_bidi;
+    ngx_uint_t                 max_concurrent_streams_uni;
+    ngx_uint_t                 active_connection_id_limit;
+    ngx_int_t                  stream_close_code;
+    ngx_int_t                  stream_reject_code_uni;
+    ngx_int_t                  stream_reject_code_bidi;
 
-    u_char                 av_token_key[NGX_QUIC_AV_KEY_LEN];
-    u_char                 sr_token_key[NGX_QUIC_SR_KEY_LEN];
+    u_char                     av_token_key[NGX_QUIC_AV_KEY_LEN];
+    u_char                     sr_token_key[NGX_QUIC_SR_KEY_LEN];
 } ngx_quic_conf_t;
 
 
 struct ngx_quic_stream_s {
-    ngx_rbtree_node_t      node;
-    ngx_queue_t            queue;
-    ngx_connection_t      *parent;
-    ngx_connection_t      *connection;
-    uint64_t               id;
-    uint64_t               acked;
-    uint64_t               send_max_data;
-    uint64_t               recv_max_data;
-    uint64_t               recv_offset;
-    uint64_t               recv_window;
-    uint64_t               recv_last;
-    uint64_t               final_size;
-    ngx_chain_t           *in;
-    ngx_chain_t           *out;
-    ngx_uint_t             cancelable; /* unsigned  cancelable:1; */
-    ngx_quic_stream_send_state_e  send_state;
-    ngx_quic_stream_recv_state_e  recv_state;
+    ngx_rbtree_node_t              node;
+    ngx_queue_t                    queue;
+    ngx_connection_t              *parent;
+    ngx_connection_t              *connection;
+    uint64_t                       id;
+    uint64_t                       acked;
+    uint64_t                       send_max_data;
+    uint64_t                       recv_max_data;
+    uint64_t                       recv_offset;
+    uint64_t                       recv_window;
+    uint64_t                       recv_last;
+    uint64_t                       final_size;
+    ngx_chain_t                   *in;
+    ngx_chain_t                   *out;
+    ngx_uint_t                     cancelable; /* unsigned  cancelable:1; */
+    ngx_quic_stream_send_state_e   send_state;
+    ngx_quic_stream_recv_state_e   recv_state;
 };

From arut at nginx.com  Mon Jan 31 15:21:07 2022
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 31 Jan 2022 18:21:07 +0300
Subject: [PATCH 4 of 4] QUIC: stream event setting function
In-Reply-To: 
References: 
Message-ID: <7626aa7a21566bf72300.1643642467@arut-laptop>

# HG changeset patch
# User Roman Arutyunyan
# Date 1643187691 -10800
#      Wed Jan 26 12:01:31 2022 +0300
# Branch quic
# Node ID 7626aa7a21566bf723001b23b98a456efbdea4ba
# Parent  cba58cb06b3aee94e7e16a8dc0562a5a4f7d3066
QUIC: stream event setting function.

The function ngx_quic_set_event() is now called instead of posting
events directly.
diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -34,6 +34,7 @@ static ngx_int_t ngx_quic_control_flow(n
 static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last);
 static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c);
 static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c);
+static void ngx_quic_set_event(ngx_event_t *ev);
 
 
 ngx_connection_t *
@@ -156,7 +157,6 @@ ngx_quic_close_streams(ngx_connection_t
 {
     ngx_pool_t         *pool;
     ngx_queue_t        *q;
-    ngx_event_t        *rev, *wev;
     ngx_rbtree_t       *tree;
     ngx_rbtree_node_t  *node;
     ngx_quic_stream_t  *qs;
@@ -195,17 +195,8 @@ ngx_quic_close_streams(ngx_connection_t
         qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD;
         qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT;
 
-        rev = qs->connection->read;
-        rev->ready = 1;
-
-        wev = qs->connection->write;
-        wev->ready = 1;
-
-        ngx_post_event(rev, &ngx_posted_events);
-
-        if (rev->timer_set) {
-            ngx_del_timer(rev);
-        }
+        ngx_quic_set_event(qs->connection->read);
+        ngx_quic_set_event(qs->connection->write);
 
 #if (NGX_DEBUG)
         ns++;
@@ -1028,7 +1019,6 @@ ngx_quic_handle_stream_frame(ngx_connect
     ngx_quic_frame_t *frame)
 {
     uint64_t                  last;
-    ngx_event_t              *rev;
     ngx_connection_t         *sc;
     ngx_quic_stream_t        *qs;
     ngx_quic_connection_t    *qc;
@@ -1106,12 +1096,7 @@ ngx_quic_handle_stream_frame(ngx_connect
     }
 
     if (f->offset == qs->recv_offset) {
-        rev = sc->read;
-        rev->ready = 1;
-
-        if (rev->active) {
-            ngx_post_event(rev, &ngx_posted_events);
-        }
+        ngx_quic_set_event(sc->read);
     }
 
     return NGX_OK;
@@ -1122,7 +1107,6 @@ ngx_int_t
 ngx_quic_handle_max_data_frame(ngx_connection_t *c,
     ngx_quic_max_data_frame_t *f)
 {
-    ngx_event_t        *wev;
     ngx_rbtree_t       *tree;
     ngx_rbtree_node_t  *node;
     ngx_quic_stream_t  *qs;
@@ -1144,12 +1128,7 @@ ngx_quic_handle_max_data_frame(ngx_conne
              node = ngx_rbtree_next(tree, node))
         {
             qs = (ngx_quic_stream_t *) node;
-            wev = qs->connection->write;
-
-            if (wev->active) {
-                wev->ready = 1;
-                ngx_post_event(wev, &ngx_posted_events);
-            }
+            ngx_quic_set_event(qs->connection->write);
         }
     }
 
@@ -1210,7 +1189,6 @@ ngx_quic_handle_max_stream_data_frame(ng
     ngx_quic_header_t *pkt, ngx_quic_max_stream_data_frame_t *f)
 {
     uint64_t                sent;
-    ngx_event_t            *wev;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
@@ -1240,12 +1218,7 @@ ngx_quic_handle_max_stream_data_frame(ng
     sent = qs->connection->sent;
 
     if (sent >= qs->send_max_data) {
-        wev = qs->connection->write;
-
-        if (wev->active) {
-            wev->ready = 1;
-            ngx_post_event(wev, &ngx_posted_events);
-        }
+        ngx_quic_set_event(qs->connection->write);
     }
 
     qs->send_max_data = f->limit;
@@ -1258,7 +1231,6 @@ ngx_int_t
 ngx_quic_handle_reset_stream_frame(ngx_connection_t *c,
     ngx_quic_header_t *pkt, ngx_quic_reset_stream_frame_t *f)
 {
-    ngx_event_t            *rev;
     ngx_connection_t       *sc;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
@@ -1312,12 +1284,7 @@ ngx_quic_handle_reset_stream_frame(ngx_c
         return NGX_ERROR;
     }
 
-    rev = sc->read;
-    rev->ready = 1;
-
-    if (rev->active) {
-        ngx_post_event(rev, &ngx_posted_events);
-    }
+    ngx_quic_set_event(qs->connection->read);
 
     return NGX_OK;
 }
@@ -1327,7 +1294,6 @@ ngx_int_t
 ngx_quic_handle_stop_sending_frame(ngx_connection_t *c,
     ngx_quic_header_t *pkt, ngx_quic_stop_sending_frame_t *f)
 {
-    ngx_event_t            *wev;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
@@ -1354,12 +1320,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c
         return NGX_ERROR;
     }
 
-    wev = qs->connection->write;
-
-    if (wev->active) {
-        wev->ready = 1;
-        ngx_post_event(wev, &ngx_posted_events);
-    }
+    ngx_quic_set_event(qs->connection->write);
 
     return NGX_OK;
 }
@@ -1398,7 +1359,6 @@ void
 ngx_quic_handle_stream_ack(ngx_connection_t *c, ngx_quic_frame_t *f)
 {
     uint64_t                sent, unacked;
-    ngx_event_t            *wev;
     ngx_quic_stream_t      *qs;
     ngx_quic_connection_t  *qc;
 
@@ -1409,13 +1369,11 @@ ngx_quic_handle_stream_ack(ngx_connectio
         return;
     }
 
-    wev = qs->connection->write;
     sent = qs->connection->sent;
     unacked = sent - qs->acked;
 
-    if (unacked >= qc->conf->stream_buffer_size && wev->active) {
-        wev->ready = 1;
-        ngx_post_event(wev, &ngx_posted_events);
+    if (unacked >= qc->conf->stream_buffer_size) {
+        ngx_quic_set_event(qs->connection->write);
     }
 
     qs->acked += f->u.stream.length;
@@ -1589,6 +1547,17 @@ ngx_quic_update_max_data(ngx_connection_
 }
 
 
+static void
+ngx_quic_set_event(ngx_event_t *ev)
+{
+    ev->ready = 1;
+
+    if (ev->active) {
+        ngx_post_event(ev, &ngx_posted_events);
+    }
+}
+
+
 ngx_int_t
 ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags)
 {

From vl at nginx.com  Mon Jan 31 17:12:32 2022
From: vl at nginx.com (Vladimir Homutov)
Date: Mon, 31 Jan 2022 20:12:32 +0300
Subject: [PATCH 0 of 4] QUIC stream states and events
In-Reply-To: 
References: 
Message-ID: 

On Mon, Jan 31, 2022 at 06:21:03PM +0300, Roman Arutyunyan wrote:
> - added zero size handling in stream recv()
> - renamed http3 uni stream handlers
> - added style patch

looks good to me
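
[Editor's note] The helper factored out in patch 4 is small enough to model in isolation. The sketch below is illustrative only: `stub_event_t`, `post_event()` and `quic_set_event()` are simplified stand-ins for nginx's `ngx_event_t`, `ngx_post_event()` and the patch's `ngx_quic_set_event()`, not the real definitions. It shows the pair of operations the patch previously repeated at every call site: mark the event ready, and queue it for processing only when a handler is actively waiting on it.

```c
#include <stddef.h>

/* Stub of the two ngx_event_t flags the helper touches (hypothetical). */
typedef struct stub_event_s {
    unsigned              ready;   /* data or buffer space is available  */
    unsigned              active;  /* a handler is waiting on this event */
    struct stub_event_s  *next;    /* intrusive link for the posted queue */
} stub_event_t;

/* Stand-in for the ngx_posted_events queue. */
static stub_event_t  *posted_events;

/* Stand-in for ngx_post_event(): push onto the posted queue. */
static void
post_event(stub_event_t *ev)
{
    ev->next = posted_events;
    posted_events = ev;
}

/* The factored-out pattern: always mark ready, post only if active. */
static void
quic_set_event(stub_event_t *ev)
{
    ev->ready = 1;

    if (ev->active) {
        post_event(ev);
    }
}
```

An inactive event is only marked ready, so its state is picked up whenever the consumer next looks at it; an active event is additionally posted so its handler runs on the next event-loop iteration, which matches the call sites replaced by the patch.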