From agentzh at gmail.com Sat Nov 1 01:20:18 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 31 Oct 2014 18:20:18 -0700 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. Message-ID: # HG changeset patch # User Yichun Zhang # Date 1414804249 25200 # Fri Oct 31 18:10:49 2014 -0700 # Node ID 38a74e59f199edafad0a8caae5cfc921ab3302e8 # Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2 Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. After the system send buffer is full, NULL incoming chains are used to flush pending output upon new write events. The current gzip and gunzip filters may intercept NULL chains and keep the data stalled in nginx's own send buffers, leading to the request hanging (until the send timeout). This regression appeared in nginx 1.7.7. diff -r dff86e2246a5 -r 38a74e59f199 src/http/modules/ngx_http_gunzip_filter_module.c --- a/src/http/modules/ngx_http_gunzip_filter_module.c Mon Aug 25 13:41:31 2014 +0400 +++ b/src/http/modules/ngx_http_gunzip_filter_module.c Fri Oct 31 18:10:49 2014 -0700 @@ -200,7 +200,7 @@ ngx_http_gunzip_body_filter(ngx_http_req } } - if (ctx->nomem) { + if (ctx->nomem || in == NULL) { /* flush busy buffers */ diff -r dff86e2246a5 -r 38a74e59f199 src/http/modules/ngx_http_gzip_filter_module.c --- a/src/http/modules/ngx_http_gzip_filter_module.c Mon Aug 25 13:41:31 2014 +0400 +++ b/src/http/modules/ngx_http_gzip_filter_module.c Fri Oct 31 18:10:49 2014 -0700 @@ -373,7 +373,7 @@ ngx_http_gzip_body_filter(ngx_http_reque r->connection->buffered |= NGX_HTTP_GZIP_BUFFERED; } - if (ctx->nomem) { + if (ctx->nomem || in == NULL) { /* flush busy buffers */ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: gzip_gunzip_flush.patch Type: text/x-patch Size: 1541 bytes Desc: not available URL: From jadas at akamai.com Sun Nov 2 16:43:04 2014 From: jadas at akamai.com (Das, Jagannath) Date: Sun, 2 Nov 2014 22:13:04 +0530 Subject: NGINX Persistent Connection Limit Value In-Reply-To: <20141029191159.GR45418@mdounin.ru> References: <20141029191159.GR45418@mdounin.ru> Message-ID: Hi Maxim, I tried posting some of my queries to nginx at nginx.org, but it failed to deliver. I thought of posting my queries on this thread as it is related to the implementation of the keepalive_timeout feature. I am not able to conclude on the correctness of this feature, as the timeout I set and the time it takes to close the connection from the server side are not always consistent. Do I have to use some additional config parameters to achieve this? Please suggest. Thanks, Jagannath From: Maxim Dounin > Reply-To: "nginx-devel at nginx.org" > Date: Thursday, October 30, 2014 at 12:41 AM To: "nginx-devel at nginx.org" > Subject: Re: NGINX Persistent Connection Limit Value Hello! On Wed, Oct 29, 2014 at 11:45:11PM +0530, Das, Jagannath wrote: Hi Folks, Provided the scalable architecture we have today, Is it possible that we may hit the connection limit issue using the persistent connection flags like the keepalive_timeout/keepalive_requests due to too many open connections? I wonder how it's related to the nginx-devel@ mailing list. You may want to ask in the nginx@ list instead. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx-devel mailing list nginx-devel at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- An HTML attachment was scrubbed... 
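For reference, the two directives under discussion are plain nginx configuration. A minimal sketch (values are illustrative, not recommendations): keepalive_timeout controls how long an idle keep-alive connection is kept open before the server closes it, and keepalive_requests caps how many requests may be served over one such connection.

```nginx
http {
    # close an idle keep-alive client connection after 65 seconds
    keepalive_timeout  65s;

    # serve at most 100 requests over a single keep-alive connection
    keepalive_requests 100;
}
```

Note that the server-side close is driven by timer granularity and pending events, so the observed close time can lag the configured timeout slightly.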
URL: From thomas.calderon at ssi.gouv.fr Mon Nov 3 16:53:55 2014 From: thomas.calderon at ssi.gouv.fr (Thomas Calderon) Date: Mon, 03 Nov 2014 17:53:55 +0100 Subject: [PATCH] Add PKCS#11 support to nginx http module Message-ID: <5457B323.8080707@ssi.gouv.fr> Hi, This patch adds PKCS#11 support to the nginx http module using libp11. This allows the private key to be stored in a dedicated hardware (or software) component. The following patch does not deal with the "configure" tools of nginx. I wanted to get feedback prior to writing nginx "autoconf" scripts to deal with multiple platforms. To test, apply the patch, run configure (with http/ssl enabled), and modify objs/Makefile to add "-lp11" to link the libp11 library. To configure, use the following parameters: * ssl_pkcs11, on or off * ssl_certificate, unchanged: the server certificate is still fetched from disk * ssl_certificate_key, string mapped to the PKCS#11 "label" attribute * ssl_pkcs11_pin, string of the token PIN * ssl_pkcs11_module, path to the PKCS#11 shared library Instead of tweaking the ngx_ssl_certificate function, I have added the ngx_ssl_certificate_pkcs11 function, which is used when ssl_pkcs11 is enabled. This approach could also be applied to the nginx mail module. Feedback appreciated. Regards, -- Cordialement, Thomas Calderon Laboratoire architectures matérielles et logicielles Sous-direction expertise ANSSI Tél: 01 71 75 88 55 -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-pkcs11-support-hg.patch Type: text/x-patch Size: 12805 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Nov 3 21:44:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Nov 2014 00:44:46 +0300 Subject: NGINX Persistent Connection Limit Value In-Reply-To: References: <20141029191159.GR45418@mdounin.ru> Message-ID: <20141103214446.GE17248@mdounin.ru> Hello! 
On Sun, Nov 02, 2014 at 10:13:04PM +0530, Das, Jagannath wrote: > Hi Maxim, > I tried posting some of queries to nginx at nginx.org, but it > failed to deliver. If you have problems with posting to nginx@, please make sure you are posting from an email address subscribed to the list; see http://nginx.org/en/support.html. Please also make sure to read bounce messages, if any. Unless you are working on the nginx code, please avoid writing to nginx-devel at . Thank you. -- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Mon Nov 3 21:50:25 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 3 Nov 2014 13:50:25 -0800 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: <5457B323.8080707@ssi.gouv.fr> References: <5457B323.8080707@ssi.gouv.fr> Message-ID: Hi Thomas, > This patch leverages PKCS#11 support in nginx http module using libp11. > This allows the private key to be stored in a dedicated hardware (or > software) component. Dmitrii Pichulin is already working on (IMHO) a much better way to handle PKCS#11 via OpenSSL engines: http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html Best regards, Piotr Sikora From mdounin at mdounin.ru Mon Nov 3 22:03:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Nov 2014 01:03:55 +0300 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: References: Message-ID: <20141103220355.GF17248@mdounin.ru> Hello! On Fri, Oct 31, 2014 at 06:20:18PM -0700, Yichun Zhang (agentzh) wrote: > # HG changeset patch > # User Yichun Zhang > # Date 1414804249 25200 > # Fri Oct 31 18:10:49 2014 -0700 > # Node ID 38a74e59f199edafad0a8caae5cfc921ab3302e8 > # Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2 > Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. > > After the system send buffer is full, NULL incoming chains are used to > flush pending output upon new write events. 
The current gzip and gunzip > filters may intercept NULL chains and keep the data stalling in > nginx's own send buffers, leading to request hanging (until send > timeout). > > This regression had appeared in nginx 1.7.7. The change in 1.7.7 is intentional, please see http://hg.nginx.org/nginx/rev/973fded4f461. -- Maxim Dounin http://nginx.org/ From calderon.thomas at gmail.com Mon Nov 3 22:30:57 2014 From: calderon.thomas at gmail.com (Thomas Calderon) Date: Mon, 3 Nov 2014 23:30:57 +0100 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: References: <5457B323.8080707@ssi.gouv.fr> Message-ID: Hi Piotr, I was not aware that some efforts were ongoing to use PKCS#11 devices with nginx. However, my experience with OpenSSL engine support is that the code is dusty, rather limited, and relies on external configuration files. Dmitrii's approach requires stacking the OpenSSL engine code and OpenSC's engine_pkcs11, which ends up loading the real PKCS#11 middleware. OpenSSL tends to perform multiple engine initializations, which can confuse the PKCS#11 shared library. Using the engine section in openssl.cnf ties you to a system-wide defined middleware. I would rather advocate for a more direct and self-contained approach. Regards, Thomas Calderon. On Mon, Nov 3, 2014 at 10:50 PM, Piotr Sikora wrote: > Hi Thomas, > > > This patch leverages PKCS#11 support in nginx http module using libp11. > > This allows the private key to be stored in a dedicated hardware (or > > software) component. > > Dmitrii Pichulin is already working on (IMHO) much better way to > handle PKCS#11 via OpenSSL engines: > http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html > > Best regards, > Piotr Sikora > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
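To make the proposed configuration concrete, here is a sketch of a server block using the directives listed in the original patch mail. The directive names come from the patch description; the paths, label, and PIN values below are invented for illustration and would only work with the patch applied and libp11 linked in.

```nginx
server {
    listen 443 ssl;

    ssl_pkcs11          on;
    ssl_certificate     /etc/nginx/certs/server.pem;  # still read from disk
    ssl_certificate_key "server-key";                 # PKCS#11 "label" attribute
    ssl_pkcs11_pin      "123456";                     # token PIN
    ssl_pkcs11_module   /usr/lib/libsofthsm2.so;      # PKCS#11 shared library
}
```

With this layout the certificate stays on disk while the private key never leaves the token, which is the point of the hardware-backed approach.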
URL: From agentzh at gmail.com Mon Nov 3 22:35:33 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 3 Nov 2014 14:35:33 -0800 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: <20141103220355.GF17248@mdounin.ru> References: <20141103220355.GF17248@mdounin.ru> Message-ID: Hello! On Mon, Nov 3, 2014 at 2:03 PM, Maxim Dounin wrote: > > The change is 1.7.7 is intentional, please see > http://hg.nginx.org/nginx/rev/973fded4f461. > Yes, I was aware of this commit. This commit introduces real issues on my side because even if the gzip filter does not have busy bufs, the downstream write filter can have busy bufs. And it's wrong to avoid flushing downstream busy bufs just because ngx_gzip itself does not have busy bufs. Regards, -agentzh From mdounin at mdounin.ru Tue Nov 4 00:54:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 4 Nov 2014 03:54:25 +0300 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: References: <20141103220355.GF17248@mdounin.ru> Message-ID: <20141104005425.GJ17248@mdounin.ru> Hello! On Mon, Nov 03, 2014 at 02:35:33PM -0800, Yichun Zhang (agentzh) wrote: > Hello! > > On Mon, Nov 3, 2014 at 2:03 PM, Maxim Dounin wrote: > > > > The change is 1.7.7 is intentional, please see > > http://hg.nginx.org/nginx/rev/973fded4f461. > > > > Yes, I was aware of this commit. This commit introduces real issues on > my side because even if the gzip filter does not have busy bufs, the > downstream write filter can have busy bufs. And it's wrong to avoid > flushing downstream busy bufs just when ngx_gzip itself does not have > busy bufs. The commit log in question explains the reason for the change. Work on the gzip stalls problem as fixed by 973fded4f461 clearly showed that just passing NULL chains is wrong unless the last buffer was already sent or there are busy buffers. 
And after the c52a761a2029 change there were at least two reports about "output chain is empty" alerts. If you see an issue, you may want to share the issue details, to find out how to fix it properly. -- Maxim Dounin http://nginx.org/ From toshikuni-fukaya at cybozu.co.jp Tue Nov 4 11:18:44 2014 From: toshikuni-fukaya at cybozu.co.jp (Toshikuni Fukaya) Date: Tue, 04 Nov 2014 20:18:44 +0900 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect Message-ID: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> # HG changeset patch # User Toshikuni Fukaya # Date 1415098583 -32400 # Node ID 6f4517db02a8cd4068b9378bd93fe6290f54720d # Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2 Upstream: support named location for X-Accel-Redirect. diff -r dff86e2246a5 -r 6f4517db02a8 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Aug 25 13:41:31 2014 +0400 +++ b/src/http/ngx_http_upstream.c Tue Nov 04 19:56:23 2014 +0900 @@ -2218,19 +2218,25 @@ } uri = u->headers_in.x_accel_redirect->value; - ngx_str_null(&args); - flags = NGX_HTTP_LOG_UNSAFE; - - if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { - ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); - return NGX_DONE; - } - - if (r->method != NGX_HTTP_HEAD) { - r->method = NGX_HTTP_GET; - } - - ngx_http_internal_redirect(r, &uri, &args); + + if (uri.len > 0 && uri.data[0] == '@') { + ngx_http_named_location(r, &uri); + } else { + ngx_str_null(&args); + flags = NGX_HTTP_LOG_UNSAFE; + + if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { + ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); + return NGX_DONE; + } + + if (r->method != NGX_HTTP_HEAD) { + r->method = NGX_HTTP_GET; + } + + ngx_http_internal_redirect(r, &uri, &args); + } + ngx_http_finalize_request(r, NGX_DONE); return NGX_DONE; } From agentzh at gmail.com Tue Nov 4 20:42:56 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 4 Nov 2014 12:42:56 -0800 Subject: Gzip Gunzip: always flush busy 
bufs when the incoming chain is NULL. In-Reply-To: <20141104005425.GJ17248@mdounin.ru> References: <20141103220355.GF17248@mdounin.ru> <20141104005425.GJ17248@mdounin.ru> Message-ID: Hello! On Mon, Nov 3, 2014 at 4:54 PM, Maxim Dounin wrote: > The commit log in question explains the reason for the change. > Work on the gzip stalls problem as fixed by 973fded4f461 clearly > showed that just passing NULL chains is wrong unless last buffer > was already sent or there are busy buffers. AFAIK, it is the output filter caller's responsibility to ensure the requirement "last buffer was already sent or there are busy buffers", not every individual output body filter's. Almost all the other standard nginx output filter modules behave the same way, including but not limited to ngx_addition, ngx_sub_filter, ngx_image_filter, ngx_chunked_filter, and ngx_xslt_filter. Did you mean to fix these modules as well? Also, the requirement "there are busy buffers" should not just cover busy buffers of the output filter module itself, but rather busy buffers in all the other filters along the filter chain, including the busy bufs in r->out, which are managed by ngx_http_write_filter_module at the bottom of the filter chain. > If you see an issue, you may want to share the issue details, > to find out how to fix it properly. The very issue of nginx 1.7.7 caught by my ngx_lua's test suite is in this condition: 1. the content handler runs out of buffers (i.e., having the maximum number of busy buffers), and 2. ngx_gzip or ngx_gunzip does not have any busy buffers for itself (i.e., no ctx->busy), and 3. ngx_http_write_filter_module has pending data to flush to the system send buffer, that is, having busy bufs in r->out. Upon new write events, I pass NULL to the output filter chain in order to flush the busy bufs of ngx_http_write_filter_module (in r->out), but ngx_gzip and ngx_gunzip quietly swallow it, wasting the write events and leading to a request hang. 
This use case of a NULL chain in ngx_http_output_filter should be considered valid because there are indeed busy bufs for ngx_http_write_filter_module (though not for ngx_gzip/ngx_gunzip itself). Disabling the gzip or gunzip filter makes the problem go away immediately. Regarding the "output chain is empty" alert (generated by ngx_http_write_filter_module), it should only happen when there are *no* busy bufs and no data in r->out, which is apparently not the case explained above. In conclusion, your latest change does not even allow NULL flushing cases that perfectly *qualify* under your own definition of the requirement "last buffer was already sent or there are busy buffers", because you explicitly prohibit flushing when there actually *are* busy bufs in ngx_http_write_filter_module, just not in ngx_gzip and ngx_gunzip (but not in any other output filter modules). Just my 2 cents :) Regards, -agentzh From lus at vmware.com Wed Nov 5 05:44:58 2014 From: lus at vmware.com (Steven Lu) Date: Tue, 4 Nov 2014 21:44:58 -0800 Subject: nginx test Message-ID: Hello I am looking for some test tools, test cases, and test datasets for nginx, to test whether my changes introduce any regression issues. But I did not find them in the nginx community. Can anyone point me to where I can find them, if they exist? If not, how can I do something like a pre-commit test for nginx? Thanks Steven -------------- next part -------------- An HTML attachment was scrubbed... URL: From wangxiaochen0 at gmail.com Wed Nov 5 06:39:51 2014 From: wangxiaochen0 at gmail.com (Xiaochen Wang) Date: Wed, 5 Nov 2014 14:39:51 +0800 Subject: nginx test In-Reply-To: References: Message-ID: On Wed, Nov 5, 2014 at 1:44 PM, Steven Lu wrote: > Hello > > I am looking for some test tools, test cases, test dataset for nginx to test > if my changes introduce any regression issues. But I did not find them in > the nginx community. Can anyone point me where I can find them if there are? You can get nginx test cases in http://hg.nginx.org/nginx-tests/. 
And the README file in this project will tell you how to run these cases. > If no, how to do something like pre-commit test for nginx? > > Thanks > Steven > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From lus at vmware.com Wed Nov 5 07:02:21 2014 From: lus at vmware.com (Steven Lu) Date: Tue, 4 Nov 2014 23:02:21 -0800 Subject: nginx test In-Reply-To: References: Message-ID: Thanks. This is what I want. But the README is pretty simple. Is there a complete document guiding me through running them? More questions about the tests: 1. Do I need to install several web servers to test nginx reverse proxy features, or can the test suite simulate the web servers? 2. Any test tools for performance tests? On 11/5/14, 2:39 PM, "Xiaochen Wang" wrote: >On Wed, Nov 5, 2014 at 1:44 PM, Steven Lu wrote: >> Hello >> >> I am looking for some test tools, test cases, test dataset for nginx to >>test >> if my changes introduce any regression issues. But I did not find them >>in >> the nginx community. Can anyone point me where I can find them if there >>are? > >You can get nginx test cases in >http://hg.nginx.org/nginx-tests/ . >And the README file in this project will tell you how to run these cases. > > >> If no, how to do something like pre-commit test for nginx? 
>> >> Thanks >> Steven >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > >_______________________________________________ >nginx-devel mailing list >nginx-devel at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx-devel From ru at nginx.com Wed Nov 5 08:55:35 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 5 Nov 2014 11:55:35 +0300 Subject: nginx test In-Reply-To: References: Message-ID: <20141105085535.GA32383@lo0.su> On Tue, Nov 04, 2014 at 09:44:58PM -0800, Steven Lu wrote: > Hello > > I am looking for some test tools, test cases, test dataset for nginx to test > if my changes introduce any regression issues. But I did not find them in the > nginx community. Can anyone point me where I can find them if there are? If > no, how to do something like pre-commit test for nginx? [Translated from Russian:] Since people keep asking about this, it might be worth mentioning the test suite somewhere in the contributing documentation: diff --git a/xml/en/docs/contributing_changes.xml b/xml/en/docs/contributing_changes.xml --- a/xml/en/docs/contributing_changes.xml +++ b/xml/en/docs/contributing_changes.xml @@ -9,7 +9,7 @@
+ rev="2">
@@ -117,6 +117,16 @@ case, if possible. + +Passing your changes through the test suite is a good way to ensure +that they do not cause a regression. +The repository with +tests can be cloned with the following command: + +hg clone http://hg.nginx.org/nginx-tests + + + diff --git a/xml/ru/docs/contributing_changes.xml b/xml/ru/docs/contributing_changes.xml --- a/xml/ru/docs/contributing_changes.xml +++ b/xml/ru/docs/contributing_changes.xml @@ -9,7 +9,7 @@
+ rev="2">
@@ -117,6 +117,16 @@ ??????? ??????? ?????????????. + +???????? ????????? ??? ?????? ???????????? ?????? ?????? ???????? ?????????, +??? ??? ?? ???????? ?????????. +??????????? ? ??????? +????? ??????????? ????????? ????????: + +hg clone http://hg.nginx.org/nginx-tests + + + From wangxiaochen0 at gmail.com Wed Nov 5 11:54:14 2014 From: wangxiaochen0 at gmail.com (Xiaochen Wang) Date: Wed, 5 Nov 2014 19:54:14 +0800 Subject: nginx test In-Reply-To: References: Message-ID: On Wed, Nov 5, 2014 at 3:02 PM, Steven Lu wrote: > Thanks. This is what I want. But the readme is pretty simple. Are there > any complete document guiding me to run them? The document does actually explain how to run the test cases. You might want to know how to do unit testing in Perl. All the cases in this project are written in Perl and run with the 'prove' command. > > More questions about the tests: > 1. Do I need to install several web servers to test nginx reverse proxy > features, or the test suites can simulate the web servers? The test suite will start nginx listening on port 8080 or other ports. > 2. Any test tools for performance tests? > > > > On 11/5/14, 2:39 PM, "Xiaochen Wang" wrote: > >>On Wed, Nov 5, 2014 at 1:44 PM, Steven Lu wrote: >>> Hello >>> >>> I am looking for some test tools, test cases, test dataset for nginx to >>>test >>> if my changes introduce any regression issues. But I did not find them >>>in >>> the nginx community. Can anyone point me where I can find them if there >>>are? >> >>You can get nginx test cases in >>http://hg.nginx.org/nginx-tests/ . >>And the README file in this project will tell you how to run these cases. >> >> >>> If no, how to do something like pre-commit test for nginx? 
>>> >> Thanks >>> Steven >>> > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Wed Nov 5 12:45:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Nov 2014 15:45:47 +0300 Subject: nginx test In-Reply-To: <20141105085535.GA32383@lo0.su> References: <20141105085535.GA32383@lo0.su> Message-ID: <20141105124547.GB10189@mdounin.ru> Hello! On Wed, Nov 05, 2014 at 11:55:35AM +0300, Ruslan Ermilov wrote: > On Tue, Nov 04, 2014 at 09:44:58PM -0800, Steven Lu wrote: > > Hello > > > > I am looking for some test tools, test cases, test dataset for nginx to test > > if my changes introduce any regression issues. But I did not find them in the > > nginx community. Can anyone point me where I can find them if there are? If > > no, how to do something like pre-commit test for nginx? > > [Translated from Russian:] Since people keep asking about this, it might be > worth mentioning the test suite somewhere in the contributing documentation: +1. 
-- Maxim Dounin http://nginx.org/ From lus at vmware.com Wed Nov 5 13:17:39 2014 From: lus at vmware.com (Steven Lu) Date: Wed, 5 Nov 2014 05:17:39 -0800 Subject: nginx test In-Reply-To: <20141105085535.GA32383@lo0.su> References: <20141105085535.GA32383@lo0.su> Message-ID: Thank you. But I don't know Russian :-( On 11/5/14, 4:55 PM, "Ruslan Ermilov" wrote: >On Tue, Nov 04, 2014 at 09:44:58PM -0800, Steven Lu wrote: >> Hello >> >> I am looking for some test tools, test cases, test dataset for nginx to >>test >> if my changes introduce any regression issues. But I did not find them >>in the >> nginx community. Can anyone point me where I can find them if there >>are? If >> no, how to do something like pre-commit test for nginx? > [...] From maxim at nginx.com Wed Nov 5 12:20:56 2014 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 05 Nov 2014 16:20:56 +0400 Subject: nginx test In-Reply-To: References: <20141105085535.GA32383@lo0.su> Message-ID: <545A1628.1080906@nginx.com> Hi Steven, sorry for that, Ruslan mixed up lists. On 11/5/14 5:17 PM, Steven Lu wrote: > Thank you. But I don't know Russian :-( > > On 11/5/14, 4:55 PM, "Ruslan Ermilov" wrote: > >> On Tue, Nov 04, 2014 at 09:44:58PM -0800, Steven Lu wrote: >>> Hello >>> >>> I am looking for some test tools, test cases, test dataset for nginx to >>> test >>> if my changes introduce any regression issues. But I did not find them >>> in the >>> nginx community. Can anyone point me where I can find them if there >>> are? 
If >>> no, how to do something like pre-commit test for nginx? >> [...] -- Maxim Konovalov http://nginx.com From mdounin at mdounin.ru Wed Nov 5 15:41:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Nov 2014 18:41:28 +0300 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: References: <20141103220355.GF17248@mdounin.ru> <20141104005425.GJ17248@mdounin.ru> Message-ID: <20141105154128.GE10189@mdounin.ru> Hello! On Tue, Nov 04, 2014 at 12:42:56PM -0800, Yichun Zhang (agentzh) wrote: > Hello! > > On Mon, Nov 3, 2014 at 4:54 PM, Maxim Dounin wrote: > > The commit log in question explains the reason for the change. > > Work on the gzip stalls problem as fixed by 973fded4f461 clearly > > showed that just passing NULL chains is wrong unless last buffer > > was already sent or there are busy buffers. > > AFAIK, this is the output filter caller's responsibility to ensure the > requirements "last buffer was already sent or there are busy buffers", > not every individual output body filter. The problem is that the caller's idea about busy buffers isn't the same as the gzip filter's. A caller may call the output filter chain with NULL because some of its buffers are busy, but since they are buffered by the gzip filter it would be wrong to call the next body filter with NULL unconditionally. So in this case it's the gzip filter which is responsible for calling the next body filter properly. > Almost all the other standard > nginx output filter modules including but not limited to ngx_addition, > ngx_sub_filter, ngx_image_filter, ngx_chunked_filter, and > ngx_xslt_filter. You meant to fix these modules as well? The addition filter doesn't buffer anything. The sub filter only calls the next body filter with NULL if it doesn't buffer anything. The image filter always releases all input buffers, so this shouldn't be a problem as long as the caller does the right thing. The chunked filter doesn't buffer anything. The xslt filter always releases all input buffers. 
That is, all these modules are fine and there is no need to fix them. > Also, the requirement "there are busy buffers" should not just be busy > buffers of the output filter module itself, but rather busy buffers in > all the other filters along the filter chain, include the busy bufs in > r->out which is managed by ngx_http_write_filter_module at the bottom > of the filter chain. Technically yes. But the main purpose of calling the output filter chain with NULL before the last buffer is sent is to free up some busy buffers of the calling module, and checking for the module's own buffers should be enough - this is what, e.g., the unbuffered proxy does. > > If you see an issue, you may want to share the issue details, > > to find out how to fix it properly. > > The very issue of nginx 1.7.7 caught by my ngx_lua's test suite is in > this condition: So this isn't a real issue, but an artificial test case which fails, right? > 1. the content handler runs out of buffers (i.e., having the maximum > number of busy buffers), and > 2. ngx_gzip or ngx_gunzip does not have any busy buffers for itself > (i.e., no ctx->busy), and > 3. ngx_http_write_filter_module has pending data for flush to the > system send buffer, that is, having busy bufs in r->out. > > Upon new write events, I pass NULL to the output filter chain in order > to flush the busy bufs of ngx_http_write_fiilter_module (in r->out), > but ngx_gzip and ngx_gunzip quietly swallows it, wasting the write > events, and leading to request hang. This use case of NULL chain in > ngx_http_output_filter should considered valid because there are > indeed busy bufs for ngx_http_write_filter_module (though not > ngx_gzip/ngx_gunzip itself). Disabling the gzip or gunzip filter makes > the problem go away immediately. The questions are: - How did it happen that all the content handler's buffers are busy, while there are no busy buffers in gzip? - How did it happen that there are no busy buffers in gzip, but there are buffers in r->out? 
- Why calling next filter in gzip with NULL is expected to help here? It only can free up some gzip filter buffers which are all already free, but not content handler module buffers. > Regarding the "output chain is empty" alert (generated by > ngx_http_write_filter_module), it should only happen when there are > *no* busy bufs and no data in r->out, which is apparently not the case > explained above. Sure. The problem is that the code change you suggest (and that was previously done in 1.5.7 / c52a761a2029) triggers an alert in other valid cases, when there is no data in r->out. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Nov 5 15:55:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 05 Nov 2014 15:55:27 +0000 Subject: [nginx] Cache: removed dead store in ngx_http_file_cache_vary_he... Message-ID: details: http://hg.nginx.org/nginx/rev/f0af7ba616d8 branches: changeset: 5898:f0af7ba616d8 user: Maxim Dounin date: Wed Nov 05 18:53:26 2014 +0300 description: Cache: removed dead store in ngx_http_file_cache_vary_header(). Found by Clang Static Analyzer.

diffstat:

 src/http/ngx_http_file_cache.c | 1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diffs (11 lines):

diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c
+++ b/src/http/ngx_http_file_cache.c
@@ -1022,7 +1022,6 @@ ngx_http_file_cache_vary_header(ngx_http
     /* normalize spaces */
 
     p = header[i].value.data;
-    start = p;
     last = p + header[i].value.len;
 
     while (p < last) {

From agentzh at gmail.com Thu Nov 6 01:02:46 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Nov 2014 17:02:46 -0800 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: <20141105154128.GE10189@mdounin.ru> References: <20141103220355.GF17248@mdounin.ru> <20141104005425.GJ17248@mdounin.ru> <20141105154128.GE10189@mdounin.ru> Message-ID: Hello!
On Wed, Nov 5, 2014 at 7:41 AM, Maxim Dounin wrote: > The questions are: > > - How it happened that all content handler's buffers are busy, > while there are no busy buffers in gzip? > Sorry, I was wrong in this part. The content handler actually checked the r->buffered flag instead of checking its own busy bufs. The content handler has no busy bufs at this point. > - How it happened that there are no busy buffers in gzip, but > there are buffers in r->out? > I've checked that the bufs in r->out are from ngx_chunked_filter. > - Why calling next filter in gzip with NULL is expected to help > here? It only can free up some gzip filter buffers which are > all already free, but not content handler module buffers. > It just helps flush out the bufs generated by the ngx_chunked_filter module. The chunked overhead bufs stalled in r->out can usually be ignored because they're usually very small in practice. You're right that the content handler should check its own busy bufs instead of checking r->buffered. Thank you for your explanation and patience! Best regards, -agentzh From agentzh at gmail.com Thu Nov 6 01:08:11 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Wed, 5 Nov 2014 17:08:11 -0800 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: References: <20141103220355.GF17248@mdounin.ru> <20141104005425.GJ17248@mdounin.ru> <20141105154128.GE10189@mdounin.ru> Message-ID: Hello! On Wed, Nov 5, 2014 at 5:02 PM, Yichun Zhang (agentzh) wrote: > Sorry, I was wrong in this part. The content handler actually checked > the r->buffered flag instead of checking its own busy bufs. Sorry again, it actually checked "c->buffered & NGX_HTTP_LOWLEVEL_BUFFERED". This condition is indeed too strong and I've made it check its own busy bufs instead. Now there is no problem without my patch. Thanks!
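The stall the thread discusses can be illustrated with a toy model (this is not nginx code: the struct and both functions are simplified, hypothetical stand-ins for a body filter context; only the changed condition mirrors the proposed patch):

```c
#include <assert.h>
#include <stddef.h>

/* A body filter is driven either with new input or with in == NULL,
 * which on a write event means "flush whatever you still hold". */

typedef struct {
    int nomem;    /* an earlier pass ran out of output buffers */
    int pending;  /* bytes still held in this filter's busy buffers */
    int sent;     /* bytes already handed to the next filter */
} toy_filter_ctx;

/* Pre-patch logic: only ctx->nomem triggers the flush path, so a NULL
 * chain can be swallowed while data sits in the filter's buffers. */
void toy_filter_old(toy_filter_ctx *ctx, const char *in, int len)
{
    if (ctx->nomem) {
        ctx->sent += ctx->pending;
        ctx->pending = 0;
    }

    if (in == NULL) {
        return;             /* write event wasted, data stalls */
    }

    ctx->pending += len;
}

/* Patched logic: "ctx->nomem || in == NULL" always flushes busy bufs. */
void toy_filter_new(toy_filter_ctx *ctx, const char *in, int len)
{
    if (ctx->nomem || in == NULL) {
        ctx->sent += ctx->pending;
        ctx->pending = 0;
    }

    if (in == NULL) {
        return;
    }

    ctx->pending += len;
}
```

With the old condition a NULL chain leaves the buffered bytes pending; with the patched condition the same NULL chain pushes them on.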
-agentzh From jadas at akamai.com Thu Nov 6 15:31:53 2014 From: jadas at akamai.com (Das, Jagannath) Date: Thu, 6 Nov 2014 21:01:53 +0530 Subject: Create a Subdir inside Client_body_temp_path Message-ID: Hi Folks, I am starting to dig into nginx source code. Sorry for being naïve. How do we create a subdir inside the path read by the conf param client_body_temp_path? I was thinking of changing the code "ngx_int_t ngx_create_temp_file(ngx_file_t *file, ngx_path_t *path, ngx_pool_t *pool, ngx_uint_t persistent, ngx_uint_t clean, ngx_uint_t access, int aka_temp_dir_creation) { … While walking through the code I got confused between path->name.len and path->len . Any help will be appreciated. //Jagannath -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 7 15:26:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 07 Nov 2014 15:26:59 +0000 Subject: [nginx] SPDY: fixed "too long header line" logging. Message-ID: details: http://hg.nginx.org/nginx/rev/234c5ecb00c0 branches: changeset: 5899:234c5ecb00c0 user: Maxim Dounin date: Fri Nov 07 17:38:55 2014 +0300 description: SPDY: fixed "too long header line" logging. This fixes possible one byte buffer overrun and makes sure ellipsis are always added, see 21043ce2a005.
diffstat:

 src/http/ngx_http_spdy.c | 3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diffs (16 lines):

diff --git a/src/http/ngx_http_spdy.c b/src/http/ngx_http_spdy.c
--- a/src/http/ngx_http_spdy.c
+++ b/src/http/ngx_http_spdy.c
@@ -2656,11 +2656,10 @@ ngx_http_spdy_alloc_large_header_buffer(
 
         if (rest > NGX_MAX_ERROR_STR - 300) {
             rest = NGX_MAX_ERROR_STR - 300;
-            p[rest++] = '.'; p[rest++] = '.'; p[rest++] = '.';
         }
 
         ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
-                      "client sent too long header name or value: \"%*s\"",
+                      "client sent too long header name or value: \"%*s...\"",
                       rest, p);
 
         return NGX_DECLINED;

From igor at sysoev.ru Fri Nov 7 15:59:54 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 7 Nov 2014 18:59:54 +0300 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> Message-ID: On 04 Nov 2014, at 14:18, Toshikuni Fukaya wrote: > Upstream: support named location for X-Accel-Redirect. Could you please provide usage examples? -- Igor Sysoev http://nginx.com From ari.aosved at gmail.com Fri Nov 7 21:19:43 2014 From: ari.aosved at gmail.com (Ari Aosved) Date: Fri, 07 Nov 2014 13:19:43 -0800 Subject: [PATCH] Core: parse octal/hexadecimal numeric config directives Message-ID: <742fb1a59b705ba71f2b.1415395183@Aris-MacBook-Pro.local> # HG changeset patch # User Ari Aosved # Date 1415392382 28800 # Fri Nov 07 12:33:02 2014 -0800 # Node ID 742fb1a59b705ba71f2bab5fede58ff5885ebc71 # Parent 234c5ecb00c04c67bcbb4a3be45fd844f2289462 Core: parse octal/hexadecimal numeric config directives. When working with modules which need configuration directives dealing with file modes, it's convenient to specify the numeric values in octal notation. For instance "hls_directory_create_mode 0775;" is recognizably rwxrwxr-x whereas "hls_directory_create_mode 509;" would be more prone to mistakes.
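The conversion the patch adds as ngx_octtoi() can be sketched as a standalone function (a hypothetical re-implementation for illustration only, with long standing in for ngx_int_t and -1 for NGX_ERROR); it also shows the pitfall from the description, since octal 775 is decimal 509:

```c
#include <assert.h>
#include <stddef.h>

/* Convert an octal digit string of length n; -1 on error. */
long oct_to_long(const unsigned char *line, size_t n)
{
    long value;

    if (n == 0) {
        return -1;
    }

    for (value = 0; n--; line++) {
        if (*line < '0' || *line > '7') {
            return -1;    /* e.g. '9' in a would-be mode "509" */
        }
        value = value * 8 + (*line - '0');
    }

    return value;
}
```

Note that, as in the patch, a string such as "509" is rejected outright when parsed as octal, which is exactly the mistake the octal notation is meant to make visible.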
diff -r 234c5ecb00c0 -r 742fb1a59b70 src/core/ngx_conf_file.c
--- a/src/core/ngx_conf_file.c	Fri Nov 07 17:38:55 2014 +0300
+++ b/src/core/ngx_conf_file.c	Fri Nov 07 12:33:02 2014 -0800
@@ -1097,7 +1097,16 @@
     }
 
     value = cf->args->elts;
-    *np = ngx_atoi(value[1].data, value[1].len);
+    if (value[1].len > 2 && *value[1].data == '0' && *value[1].data+1 == 'x') {
+        *np = ngx_hextoi(value[1].data+2, value[1].len - 2);
+    }
+    else
+    if (value[1].len > 1 && *value[1].data == '0') {
+        *np = ngx_octtoi(value[1].data+1, value[1].len - 1);
+    }
+    else {
+        *np = ngx_atoi(value[1].data, value[1].len);
+    }
     if (*np == NGX_ERROR) {
         return "invalid number";
     }
diff -r 234c5ecb00c0 -r 742fb1a59b70 src/core/ngx_string.c
--- a/src/core/ngx_string.c	Fri Nov 07 17:38:55 2014 +0300
+++ b/src/core/ngx_string.c	Fri Nov 07 12:33:02 2014 -0800
@@ -1085,6 +1085,36 @@
 }
 
 
+ngx_int_t
+ngx_octtoi(u_char *line, size_t n)
+{
+    u_char ch;
+    ngx_int_t value;
+
+    if (n == 0) {
+        return NGX_ERROR;
+    }
+
+    for (value = 0; n--; line++) {
+        ch = *line;
+
+        if (ch >= '0' && ch <= '7') {
+            value = value * 8 + (ch - '0');
+            continue;
+        }
+
+        return NGX_ERROR;
+    }
+
+    if (value < 0) {
+        return NGX_ERROR;
+
+    } else {
+        return value;
+    }
+}
+
+
 u_char *
 ngx_hex_dump(u_char *dst, u_char *src, size_t len)
 {
diff -r 234c5ecb00c0 -r 742fb1a59b70 src/core/ngx_string.h
--- a/src/core/ngx_string.h	Fri Nov 07 17:38:55 2014 +0300
+++ b/src/core/ngx_string.h	Fri Nov 07 12:33:02 2014 -0800
@@ -175,6 +175,7 @@
 off_t ngx_atoof(u_char *line, size_t n);
 time_t ngx_atotm(u_char *line, size_t n);
 ngx_int_t ngx_hextoi(u_char *line, size_t n);
+ngx_int_t ngx_octtoi(u_char *line, size_t n);
 u_char *ngx_hex_dump(u_char *dst, u_char *src, size_t len);

From piotr at cloudflare.com Fri Nov 7 23:26:43 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Fri, 7 Nov 2014 15:26:43 -0800 Subject: [PATCH] Upstream: add "proxy_ssl_certificate" and friends In-Reply-To: References: Message-ID: Hey, > # HG changeset patch > # User Piotr Sikora >
# Date 1414668641 25200 > # Thu Oct 30 04:30:41 2014 -0700 > # Node ID bb14c7659efb32d1d1f651bdf54a8c8157ef67f9 > # Parent 87ada3ba1392fadaf4d9193b5d345c248be32f77 > Upstream: add "proxy_ssl_certificate" and friends. Ping. Best regards, Piotr Sikora From jadas at akamai.com Sun Nov 9 04:25:27 2014 From: jadas at akamai.com (Das, Jagannath) Date: Sun, 9 Nov 2014 09:55:27 +0530 Subject: Appending a subdir to a client_body_temp_path Message-ID: Hi All, Today we have client_body_temp_path as a parameter in nginx.conf . How do we append to the path read from the path? Is it possible to realloc to the same memory read? One beginner level question is how to achieve append functionality for a string? Apologies for being naïve. Thanks, Jagannath -------------- next part -------------- An HTML attachment was scrubbed... URL: From toshikuni-fukaya at cybozu.co.jp Mon Nov 10 00:54:13 2014 From: toshikuni-fukaya at cybozu.co.jp (Toshikuni Fukaya) Date: Mon, 10 Nov 2014 09:54:13 +0900 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> Message-ID: <54600CB5.5070209@cybozu.co.jp> Hi, (2014/11/08 0:59), Igor Sysoev wrote: > On 04 Nov 2014, at 14:18, Toshikuni Fukaya wrote: > >> Upstream: support named location for X-Accel-Redirect. > > Could you please provide usage examples? > > Here is my (simplified) config:

server {
    location / {
        proxy_pass http://app;
    }

    location @contents {
        proxy_pass http://contents/$upstream_http_x_contents_url;
    }
}

app is a upstream application server, it processes all client requests. contents is a some of blob server (like as S3) to supply images, css and so on. When clients access to nginx, app will check a some of ACL to such requests, then reply with x-accel-redirect and x-contents-url headers. Finally, nginx will return a content from contents upstream.
In this time, x-accel-redirect will be set to @contents and x-contents-url will be an appropriate content url. The key of this strategy is that all requests is passed to app and can be checked by it. If location @contents is a normal location such as /contents, a client request to /contents will not be passed app. It is not comfortable for me. Thanks, Toshikuni Fukaya From vbart at nginx.com Mon Nov 10 08:43:01 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 10 Nov 2014 11:43:01 +0300 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: <54600CB5.5070209@cybozu.co.jp> References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> <54600CB5.5070209@cybozu.co.jp> Message-ID: <4397546.lFqJO2yNU0@vbart-laptop> On Monday 10 November 2014 09:54:13 Toshikuni Fukaya wrote: > Hi, > > (2014/11/08 0:59), Igor Sysoev wrote: > > On 04 Nov 2014, at 14:18, Toshikuni Fukaya wrote: > > > >> Upstream: support named location for X-Accel-Redirect. > > > > Could please you provide usage examples? > > > > > > Here is my (simplified) config: > > server { > location / { > proxy_pass http://app; > } > > location @contents { > proxy_pass http://contents/$upstream_http_x_contents_url; > } > } > > app is a upstream application server, it processes all client requests. > contents is a some of blob server (like as S3) to supply images, css and > so on. > > When clients access to nginx, app will check a some of ACL to such > requests, then reply with x-accel-redirect and x-contents-url headers. > Finally, nginx will return a content from contents upstream. > In this time, x-accel-redirect will be set to @contents and > x-contents-url will be an appropriate content url. > > The key of this strategy is that all requests is passed to app and can > be checked by it. > If location @contents is a normal location such as /contents, a client > request to /contents will not be passed app. It is not comfortable for me. 
Why don't you use the Auth Request module? http://nginx.org/en/docs/http/ngx_http_auth_request_module.html wbr, Valentin V. Bartenev From igor at sysoev.ru Mon Nov 10 09:08:25 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 10 Nov 2014 12:08:25 +0300 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: <54600CB5.5070209@cybozu.co.jp> References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> <54600CB5.5070209@cybozu.co.jp> Message-ID: <5CC54655-E661-413C-AA7F-57B9CDA55095@sysoev.ru> On 10 Nov 2014, at 03:54, Toshikuni Fukaya wrote: > (2014/11/08 0:59), Igor Sysoev wrote: >> On 04 Nov 2014, at 14:18, Toshikuni Fukaya wrote: >> >>> Upstream: support named location for X-Accel-Redirect. >> >> Could please you provide usage examples? >> >> > > Here is my (simplified) config: > > server { > location / { > proxy_pass http://app; > } > > location @contents { > proxy_pass http://contents/$upstream_http_x_contents_url; > } > } > > app is a upstream application server, it processes all client requests. > contents is a some of blob server (like as S3) to supply images, css and so on. > > When clients access to nginx, app will check a some of ACL to such requests, then reply with x-accel-redirect and x-contents-url headers. > Finally, nginx will return a content from contents upstream. > In this time, x-accel-redirect will be set to @contents and x-contents-url will be an appropriate content url. > > The key of this strategy is that all requests is passed to app and can be checked by it. > If location @contents is a normal location such as /contents, a client request to /contents will not be passed app. It is not comfortable for me. The "/contents" location can be marked as "internal", and it can not be accessed outside directly. However, using named location is more convenient for this example. Thank you.
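For reference, the "internal" alternative Igor mentions would look roughly like this (an untested sketch adapted from Toshikuni's simplified config; the upstream names are his, the rest is assumed):

```nginx
server {
    location / {
        proxy_pass http://app;
    }

    # Reachable only via internal redirects such as X-Accel-Redirect;
    # a direct client request to /contents/... is answered with 404.
    location /contents/ {
        internal;
        proxy_pass http://contents/;
    }
}
```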
-- Igor Sysoev http://nginx.com From calderon.thomas at gmail.com Mon Nov 10 14:54:20 2014 From: calderon.thomas at gmail.com (Thomas Calderon) Date: Mon, 10 Nov 2014 15:54:20 +0100 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: References: <5457B323.8080707@ssi.gouv.fr> Message-ID: Hi all, Is someone else interested in providing feedback for my patch ? Regards, Thomas. On Mon, Nov 3, 2014 at 11:30 PM, Thomas Calderon wrote: > Hi Piotr, > > I was not aware that some efforts were ongoing to use PKCS#11 devices with > nginx. > However, my experience with OpenSSL engine support is that the code is > dusty, rather limited and relies on external configuration files. > Dmitrii's approach requires to stack the OpenSSL engine code and OpenSC's > engine_pkcs11 which ends-up loading the real PKCS#11 middleware. > OpenSSL tends to perform multiple engine initialization which can confuse > the PKCS#11 shared library. Using the engine section in openssl.cnf ties > you up with a system-wide defined middleware. > > I would rather advocate for a more direct and self-contained approach. > > Regards, > > Thomas Calderon. > > On Mon, Nov 3, 2014 at 10:50 PM, Piotr Sikora > wrote: > >> Hi Thomas, >> >> > This patch leverages PKCS#11 support in nginx http module using libp11. >> > This allows the private key to be stored in a dedicated hardware (or >> > software) component. >> >> Dmitrii Pichulin is already working on (IMHO) much better way to >> handle PKCS#11 via OpenSSL engines: >> http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html >> >> Best regards, >> Piotr Sikora >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Nov 10 14:59:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Nov 2014 17:59:27 +0300 Subject: [PATCH] Upstream: add "proxy_ssl_certificate" and friends In-Reply-To: References: Message-ID: <20141110145927.GS22132@mdounin.ru> Hello! On Fri, Nov 07, 2014 at 03:26:43PM -0800, Piotr Sikora wrote: > Hey, > > > # HG changeset patch > > # User Piotr Sikora > > # Date 1414668641 25200 > > # Thu Oct 30 04:30:41 2014 -0700 > > # Node ID bb14c7659efb32d1d1f651bdf54a8c8157ef67f9 > > # Parent 87ada3ba1392fadaf4d9193b5d345c248be32f77 > > Upstream: add "proxy_ssl_certificate" and friends. > > Ping. Yes, I'm looking into it. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 10 15:11:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Nov 2014 18:11:09 +0300 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: References: <5457B323.8080707@ssi.gouv.fr> Message-ID: <20141110151109.GT22132@mdounin.ru> Hello! On Mon, Nov 10, 2014 at 03:54:20PM +0100, Thomas Calderon wrote: > Hi all, > > Is someone else interested in providing feedback for my patch ? Dmitrii's patch is currently a primary candidate for inclusion. I agree with Piotr - it looks much better as it doesn't introduce additional dependencies and more configuration directives to do the same thing. > Regards, > > Thomas. > > On Mon, Nov 3, 2014 at 11:30 PM, Thomas Calderon > wrote: > > > Hi Piotr, > > > > I was not aware that some efforts were ongoing to use PKCS#11 devices with > > nginx. > > However, my experience with OpenSSL engine support is that the code is > > dusty, rather limited and relies on external configuration files. > > Dmitrii's approach requires to stack the OpenSSL engine code and OpenSC's > > engine_pkcs11 which ends-up loading the real PKCS#11 middleware. > > OpenSSL tends to perform multiple engine initialization which can confuse > > the PKCS#11 shared library. 
Using the engine section in openssl.cnf ties > > you up with a system-wide defined middleware. > > > > I would rather advocate for a more direct and self-contained approach. > > > > Regards, > > > > Thomas Calderon. > > > > On Mon, Nov 3, 2014 at 10:50 PM, Piotr Sikora > > wrote: > > > >> Hi Thomas, > >> > >> > This patch leverages PKCS#11 support in nginx http module using libp11. > >> > This allows the private key to be stored in a dedicated hardware (or > >> > software) component. > >> > >> Dmitrii Pichulin is already working on (IMHO) much better way to > >> handle PKCS#11 via OpenSSL engines: > >> http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html > >> > >> Best regards, > >> Piotr Sikora > >> > >> _______________________________________________ > >> nginx-devel mailing list > >> nginx-devel at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > >> > > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 10 15:21:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Nov 2014 18:21:50 +0300 Subject: [PATCH] Core: parse octal/hexadecimal numeric config directives In-Reply-To: <742fb1a59b705ba71f2b.1415395183@Aris-MacBook-Pro.local> References: <742fb1a59b705ba71f2b.1415395183@Aris-MacBook-Pro.local> Message-ID: <20141110152150.GV22132@mdounin.ru> Hello! On Fri, Nov 07, 2014 at 01:19:43PM -0800, Ari Aosved wrote: > # HG changeset patch > # User Ari Aosved > # Date 1415392382 28800 > # Fri Nov 07 12:33:02 2014 -0800 > # Node ID 742fb1a59b705ba71f2bab5fede58ff5885ebc71 > # Parent 234c5ecb00c04c67bcbb4a3be45fd844f2289462 > Core: parse octal/hexadecimal numeric config directives. > > When working with modules which need configuration directives dealing with > file modes, it's convenient to specify the numeric values in octal notation. 
> For instance "hls_directory_create_mode 0775;" is recognizably rwxrwxr-x > whereas "hls_directory_create_mode 509;" would be more prone to mistakes. For this particular case, using some more special handler should be better, see ngx_conf_set_access_slot() and proxy_store_access for an example (http://nginx.org/r/proxy_store_access). In either case, I don't think that support for octal numbers is a good idea, especially for generic number slots. -- Maxim Dounin http://nginx.org/ From calderon.thomas at gmail.com Mon Nov 10 15:36:10 2014 From: calderon.thomas at gmail.com (Thomas Calderon) Date: Mon, 10 Nov 2014 16:36:10 +0100 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: <20141110151109.GT22132@mdounin.ru> References: <5457B323.8080707@ssi.gouv.fr> <20141110151109.GT22132@mdounin.ru> Message-ID: Hi Maxim, On Mon, Nov 10, 2014 at 4:11 PM, Maxim Dounin wrote: > Hello! > > On Mon, Nov 10, 2014 at 03:54:20PM +0100, Thomas Calderon wrote: > > > Hi all, > > > > Is someone else interested in providing feedback for my patch ? > > Dmitrii's patch is currently a primary candidate for inclusion. I > agree with Piotr - it looks much better as it doesn't introduce > additional dependencies and more configuration directives to do > the same thing. > Well a user will need to use OpenSC's engine_pkcs11 in order to use its own PKCS#11 library. Although, this is an external dependency, without it, Dmitrii's patch is pretty much useless. As for the addition of configuration directives, a user will need to use the global openssl.cnf in order to have a meaningful PKCS#11 configuration, with the various shortcomings I mentioned in my previous post. I understand that nginx team desires to minimize the various changes that are introduced in the code base. IMHO, adding support for PKCS#11 devices should not be overlook or simplified, it should be a first class feature and have its mainstream support, hence its configuration directives. 
Are you sure that Dmitrii's patch will allow to use dedicated key-pairs for each site declaration. Regards, Thomas. > > > Regards, > > > > Thomas. > > > > On Mon, Nov 3, 2014 at 11:30 PM, Thomas Calderon < > calderon.thomas at gmail.com> > > wrote: > > > > > Hi Piotr, > > > > > > I was not aware that some efforts were ongoing to use PKCS#11 devices > with > > > nginx. > > > However, my experience with OpenSSL engine support is that the code is > > > dusty, rather limited and relies on external configuration files. > > > Dmitrii's approach requires to stack the OpenSSL engine code and > OpenSC's > > > engine_pkcs11 which ends-up loading the real PKCS#11 middleware. > > > OpenSSL tends to perform multiple engine initialization which can > confuse > > > the PKCS#11 shared library. Using the engine section in openssl.cnf > ties > > > you up with a system-wide defined middleware. > > > > > > I would rather advocate for a more direct and self-contained approach. > > > > > > Regards, > > > > > > Thomas Calderon. > > > > > > On Mon, Nov 3, 2014 at 10:50 PM, Piotr Sikora > > > wrote: > > > > > >> Hi Thomas, > > >> > > >> > This patch leverages PKCS#11 support in nginx http module using > libp11. > > >> > This allows the private key to be stored in a dedicated hardware (or > > >> > software) component. 
> > >> > > >> Dmitrii Pichulin is already working on (IMHO) much better way to > > >> handle PKCS#11 via OpenSSL engines: > > >> > http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html > > >> > > >> Best regards, > > >> Piotr Sikora > > >> > > >> _______________________________________________ > > >> nginx-devel mailing list > > >> nginx-devel at nginx.org > > >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > >> > > > > > > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdn at cryptopro.ru Mon Nov 10 15:50:04 2014 From: pdn at cryptopro.ru (Dmitrii Pichulin) Date: Mon, 10 Nov 2014 18:50:04 +0300 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: References: <5457B323.8080707@ssi.gouv.fr> <20141110151109.GT22132@mdounin.ru> Message-ID: <5460DEAC.3050004@cryptopro.ru> > > Are you sure that Dmitrii's patch will allow to use dedicated key-pairs > for each site declaration. > It was tested, example: http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006151.html From calderon.thomas at gmail.com Mon Nov 10 15:56:02 2014 From: calderon.thomas at gmail.com (Thomas Calderon) Date: Mon, 10 Nov 2014 16:56:02 +0100 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: <5460DEAC.3050004@cryptopro.ru> References: <5457B323.8080707@ssi.gouv.fr> <20141110151109.GT22132@mdounin.ru> <5460DEAC.3050004@cryptopro.ru> Message-ID: On Mon, Nov 10, 2014 at 4:50 PM, Dmitrii Pichulin wrote: > >> Are you sure that Dmitrii's patch will allow to use dedicated key-pairs >> for each site declaration. 
>> >> > It was tested, example: > > http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006151.html Indeed, but the example runs a single server instance. How does it behave if you have "multiple" server instance (say 443, 8443) with different key-pairs (slot_0-id_00 and slot_0-id_01). Could you make sure everything work as expected by creating another key-pair on the token, sign a new certificate and check that the two instances can run concurrently ? > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 10 16:59:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Nov 2014 19:59:58 +0300 Subject: [PATCH] Upstream: add "proxy_ssl_certificate" and friends In-Reply-To: References: Message-ID: <20141110165958.GY22132@mdounin.ru> Hello! On Thu, Oct 30, 2014 at 04:31:37AM -0700, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1414668641 25200 > # Thu Oct 30 04:30:41 2014 -0700 > # Node ID bb14c7659efb32d1d1f651bdf54a8c8157ef67f9 > # Parent 87ada3ba1392fadaf4d9193b5d345c248be32f77 > Upstream: add "proxy_ssl_certificate" and friends. 
> > Signed-off-by: Piotr Sikora > > diff -r 87ada3ba1392 -r bb14c7659efb src/http/modules/ngx_http_proxy_module.c > --- a/src/http/modules/ngx_http_proxy_module.c Mon Oct 27 14:25:56 2014 -0700 > +++ b/src/http/modules/ngx_http_proxy_module.c Thu Oct 30 04:30:41 2014 -0700 > @@ -84,6 +84,9 @@ typedef struct { > ngx_uint_t ssl_verify_depth; > ngx_str_t ssl_trusted_certificate; > ngx_str_t ssl_crl; > + ngx_str_t ssl_certificate; > + ngx_str_t ssl_certificate_key; > + ngx_array_t *ssl_passwords; > #endif > } ngx_http_proxy_loc_conf_t; > > @@ -169,6 +172,8 @@ static ngx_int_t ngx_http_proxy_rewrite_ > ngx_http_proxy_rewrite_t *pr, ngx_str_t *regex, ngx_uint_t caseless); > > #if (NGX_HTTP_SSL) > +static char *ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, > + ngx_command_t *cmd, void *conf); > static ngx_int_t ngx_http_proxy_set_ssl(ngx_conf_t *cf, > ngx_http_proxy_loc_conf_t *plcf); > #endif I think that it would be better to preserve current style used in the proxy module by placing configuration directive handling into the block with other configuration directives, like this: @@ -162,6 +165,10 @@ static char *ngx_http_proxy_cache(ngx_co static char *ngx_http_proxy_cache_key(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); #endif +#if (NGX_HTTP_SSL) +static char *ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); +#endif static char *ngx_http_proxy_lowat_check(ngx_conf_t *cf, void *post, void *data); And the same in the code. (The uwsgi module part looks fine as is, as the module uses slightly different style for function declarations, and there is no problem in the code.) 
I'm about to commit your patch with the following changes on top of it (only style, no functional changes), please let me know if it looks ok for you: diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -165,15 +165,17 @@ static char *ngx_http_proxy_cache(ngx_co static char *ngx_http_proxy_cache_key(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); #endif - -static char *ngx_http_proxy_lowat_check(ngx_conf_t *cf, void *post, void *data); - -static ngx_int_t ngx_http_proxy_rewrite_regex(ngx_conf_t *cf, - ngx_http_proxy_rewrite_t *pr, ngx_str_t *regex, ngx_uint_t caseless); - #if (NGX_HTTP_SSL) static char *ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); +#endif + +static char *ngx_http_proxy_lowat_check(ngx_conf_t *cf, void *post, void *data); + +static ngx_int_t ngx_http_proxy_rewrite_regex(ngx_conf_t *cf, + ngx_http_proxy_rewrite_t *pr, ngx_str_t *regex, ngx_uint_t caseless); + +#if (NGX_HTTP_SSL) static ngx_int_t ngx_http_proxy_set_ssl(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *plcf); #endif @@ -3872,6 +3874,33 @@ ngx_http_proxy_cache_key(ngx_conf_t *cf, #endif +#if (NGX_HTTP_SSL) + +static char * +ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_proxy_loc_conf_t *plcf = conf; + + ngx_str_t *value; + + if (plcf->ssl_passwords != NGX_CONF_UNSET_PTR) { + return "is duplicate"; + } + + value = cf->args->elts; + + plcf->ssl_passwords = ngx_ssl_read_password_file(cf, &value[1]); + + if (plcf->ssl_passwords == NULL) { + return NGX_CONF_ERROR; + } + + return NGX_CONF_OK; +} + +#endif + + static char * ngx_http_proxy_lowat_check(ngx_conf_t *cf, void *post, void *data) { @@ -3903,29 +3932,6 @@ ngx_http_proxy_lowat_check(ngx_conf_t *c #if (NGX_HTTP_SSL) -static char * -ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) -{ - 
ngx_http_proxy_loc_conf_t *plcf = conf; - - ngx_str_t *value; - - if (plcf->ssl_passwords != NGX_CONF_UNSET_PTR) { - return "is duplicate"; - } - - value = cf->args->elts; - - plcf->ssl_passwords = ngx_ssl_read_password_file(cf, &value[1]); - - if (plcf->ssl_passwords == NULL) { - return NGX_CONF_ERROR; - } - - return NGX_CONF_OK; -} - - static ngx_int_t ngx_http_proxy_set_ssl(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *plcf) { -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 10 18:02:44 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Nov 2014 21:02:44 +0300 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> Message-ID: <20141110180244.GZ22132@mdounin.ru> Hello! On Tue, Nov 04, 2014 at 08:18:44PM +0900, Toshikuni Fukaya wrote: > # HG changeset patch > # User Toshikuni Fukaya > # Date 1415098583 -32400 > # Node ID 6f4517db02a8cd4068b9378bd93fe6290f54720d > # Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2 > Upstream: support named location for X-Accel-Redirect. 
> > diff -r dff86e2246a5 -r 6f4517db02a8 src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c Mon Aug 25 13:41:31 2014 +0400
> +++ b/src/http/ngx_http_upstream.c Tue Nov 04 19:56:23 2014 +0900
> @@ -2218,19 +2218,25 @@
> }
>
> uri = u->headers_in.x_accel_redirect->value;
> - ngx_str_null(&args);
> - flags = NGX_HTTP_LOG_UNSAFE;
> -
> - if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) {
> - ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND);
> - return NGX_DONE;
> - }
> -
> - if (r->method != NGX_HTTP_HEAD) {
> - r->method = NGX_HTTP_GET;
> - }
> -
> - ngx_http_internal_redirect(r, &uri, &args);
> +
> + if (uri.len > 0 && uri.data[0] == '@') {
> + ngx_http_named_location(r, &uri);
> + } else {
> + ngx_str_null(&args);
> + flags = NGX_HTTP_LOG_UNSAFE;

The uri here is required to be null-terminated, so the uri.len check can be safely dropped. With this and an extra line added as per style:

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2219,8 +2219,9 @@ ngx_http_upstream_process_headers(ngx_ht
 
     uri = u->headers_in.x_accel_redirect->value;
 
-    if (uri.len > 0 && uri.data[0] == '@') {
+    if (uri.data[0] == '@') {
         ngx_http_named_location(r, &uri);
+
     } else {
         ngx_str_null(&args);
         flags = NGX_HTTP_LOG_UNSAFE;

Please let me know if it looks good for you.
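Maxim's null-termination point can be illustrated with a toy fragment (not nginx code: str_t and is_named_location are hypothetical stand-ins for ngx_str_t and the check in the patch); because header values are NUL-terminated, even a zero-length value has a valid data[0], so the uri.len guard is redundant:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified counted string, NUL-terminated like nginx header values. */
typedef struct {
    size_t      len;
    const char *data;
} str_t;

/* Named locations start with '@'; safe even when len == 0, since
 * data[0] is then the terminating NUL, which is not '@'. */
int is_named_location(str_t uri)
{
    return uri.data[0] == '@';
}
```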
> + > + if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { > + ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); > + return NGX_DONE; > + } > + > + if (r->method != NGX_HTTP_HEAD) { > + r->method = NGX_HTTP_GET; > + } > + > + ngx_http_internal_redirect(r, &uri, &args); > + } > + > ngx_http_finalize_request(r, NGX_DONE); > return NGX_DONE; > } > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Mon Nov 10 20:59:29 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 10 Nov 2014 12:59:29 -0800 Subject: [PATCH] Upstream: add "proxy_ssl_certificate" and friends In-Reply-To: <20141110165958.GY22132@mdounin.ru> References: <20141110165958.GY22132@mdounin.ru> Message-ID: Hey Maxim, > I think that it would be better to preserve current style used in > the proxy module by placing configuration directive handling into > the block with other configuration directives, like this: Looks good, thanks. Best regards, Piotr Sikora From agentzh at gmail.com Mon Nov 10 22:25:12 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 10 Nov 2014 14:25:12 -0800 Subject: Gzip Gunzip: always flush busy bufs when the incoming chain is NULL. In-Reply-To: References: <20141103220355.GF17248@mdounin.ru> <20141104005425.GJ17248@mdounin.ru> <20141105154128.GE10189@mdounin.ru> Message-ID: Hello! On Wed, Nov 5, 2014 at 5:08 PM, Yichun Zhang (agentzh) wrote: > Sorry again, it actually checked "c->buffered & > NGX_HTTP_LOWLEVEL_BUFFERED". This condition is indeed too strong and > I've made it check its own busy bufs instead. > Hmm, the problem here is more complicated than I originally thought. 
It seems that I still need to ensure that *all* the pending data has indeed been flushed into the system send buffer in some special cases. For example, in the case of cosockets, I need to ensure that the response header has indeed been flushed into the system send buffer *completely* before proceeding to write to the socket directly. The solution I ended up with is this: if the content handler has its own busy bufs, it uses a NULL chain to flush the output filters; otherwise it constructs a "special buf" with ->flush set to 1 and feeds it into the output filters. It seems to work quite well :) Regards, -agentzh From toshikuni-fukaya at cybozu.co.jp Tue Nov 11 00:33:00 2014 From: toshikuni-fukaya at cybozu.co.jp (Toshikuni Fukaya) Date: Tue, 11 Nov 2014 09:33:00 +0900 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: <20141110180244.GZ22132@mdounin.ru> References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> <20141110180244.GZ22132@mdounin.ru> Message-ID: <5461593C.2070803@cybozu.co.jp> Hi, (2014/11/11 3:02), Maxim Dounin wrote: > Hello! > > On Tue, Nov 04, 2014 at 08:18:44PM +0900, Toshikuni Fukaya wrote: > >> # HG changeset patch >> # User Toshikuni Fukaya >> # Date 1415098583 -32400 >> # Node ID 6f4517db02a8cd4068b9378bd93fe6290f54720d >> # Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2 >> Upstream: support named location for X-Accel-Redirect. 
>> >> diff -r dff86e2246a5 -r 6f4517db02a8 src/http/ngx_http_upstream.c >> --- a/src/http/ngx_http_upstream.c Mon Aug 25 13:41:31 2014 +0400 >> +++ b/src/http/ngx_http_upstream.c Tue Nov 04 19:56:23 2014 +0900 >> @@ -2218,19 +2218,25 @@ >> } >> >> uri = u->headers_in.x_accel_redirect->value; >> - ngx_str_null(&args); >> - flags = NGX_HTTP_LOG_UNSAFE; >> - >> - if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { >> - ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); >> - return NGX_DONE; >> - } >> - >> - if (r->method != NGX_HTTP_HEAD) { >> - r->method = NGX_HTTP_GET; >> - } >> - >> - ngx_http_internal_redirect(r, &uri, &args); >> + >> + if (uri.len > 0 && uri.data[0] == '@') { >> + ngx_http_named_location(r, &uri); >> + } else { >> + ngx_str_null(&args); >> + flags = NGX_HTTP_LOG_UNSAFE; > > > The uri here is required to be null-terminated, so the uri.len > check can be safely dropped. With this and an extra line added as > per style: > > diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c > +++ b/src/http/ngx_http_upstream.c > @@ -2219,8 +2219,9 @@ ngx_http_upstream_process_headers(ngx_ht > > uri = u->headers_in.x_accel_redirect->value; > > - if (uri.len > 0 && uri.data[0] == '@') { > + if (uri.data[0] == '@') { > ngx_http_named_location(r, &uri); > + > } else { > ngx_str_null(&args); > flags = NGX_HTTP_LOG_UNSAFE; > > > Please let me know if looks good for you. > It's good. Thanks a lot for explanation of nginx internals. 
>> + >> + if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { >> + ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); >> + return NGX_DONE; >> + } >> + >> + if (r->method != NGX_HTTP_HEAD) { >> + r->method = NGX_HTTP_GET; >> + } >> + >> + ngx_http_internal_redirect(r, &uri, &args); >> + } >> + >> ngx_http_finalize_request(r, NGX_DONE); >> return NGX_DONE; >> } >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > Toshikuni Fukaya From pdn at cryptopro.ru Tue Nov 11 10:07:03 2014 From: pdn at cryptopro.ru (Dmitrii Pichulin) Date: Tue, 11 Nov 2014 13:07:03 +0300 Subject: [PATCH] Add PKCS#11 support to nginx http module In-Reply-To: References: <5457B323.8080707@ssi.gouv.fr> <20141110151109.GT22132@mdounin.ru> <5460DEAC.3050004@cryptopro.ru> Message-ID: <5461DFC7.9070905@cryptopro.ru> > It was tested, example: > > http://mailman.nginx.org/__pipermail/nginx-devel/2014-__October/006151.html > > > > Indeed, but the example runs a single server instance. > How does it behave if you have "multiple" server instance (say 443, > 8443) with different key-pairs (slot_0-id_00 and slot_0-id_01). > Could you make sure everything work as expected by creating another > key-pair on the token, sign a new certificate and check that the two > instances can run concurrently ? It's just an example, you can repeat some steps. And, yes, we tested it on a multiple servers config, it scales well. 
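The two-instance scenario discussed above — servers on 443 and 8443, each using a different key pair from the same token — maps onto a configuration along these lines. This is only a sketch: the `engine:pkcs11:` key syntax, certificate paths, and object ids are illustrative assumptions, not taken from the patch under review.

```
# Two server blocks sharing one PKCS#11 token, each with its own key object.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/server-a.crt;   # assumed path
    ssl_certificate_key "engine:pkcs11:slot_0-id_00";    # first key pair
}

server {
    listen 8443 ssl;
    ssl_certificate     /etc/nginx/certs/server-b.crt;   # assumed path
    ssl_certificate_key "engine:pkcs11:slot_0-id_01";    # second key pair
}
```

If the two blocks really do run concurrently against the same token, as Dmitrii reports, the engine must allow the worker processes to open multiple sessions to slot 0.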
From alexx.todorov at gmail.com Tue Nov 11 12:37:17 2014 From: alexx.todorov at gmail.com (Alexander Todorov) Date: Tue, 11 Nov 2014 14:37:17 +0200 Subject: How does nginx keep up with logging thousands requests per second In-Reply-To: <5461F7D3.7070303@gmail.com> References: <5461F7D3.7070303@gmail.com> Message-ID: <546202FD.2040004@gmail.com> Hi guys, I've seen some reports on the web claiming nginx can serve on the order of 10000 requests per second (correct me if I'm wrong). I also see that logging to syslog is supported. Is logging to syslog or the filesystem comparable with the speed at which nginx can serve content? If not, how does nginx keep up with logging so many requests (logging being a slow operation)? Thanks, Alex From steven.hartland at multiplay.co.uk Tue Nov 11 13:39:16 2014 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 11 Nov 2014 13:39:16 +0000 Subject: How does nginx keep up with logging thousands requests per second In-Reply-To: <546202FD.2040004@gmail.com> References: <5461F7D3.7070303@gmail.com> <546202FD.2040004@gmail.com> Message-ID: <54621184.7010802@multiplay.co.uk> For file logging it supports buffering; see the docs here: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log On 11/11/2014 12:37, Alexander Todorov wrote: > Hi guys, > I've seen some reports on the web claiming nginx can serve on the > order of 10000 > requests per second (correct me if I'm wrong). > > I also see that logging to syslog is supported. > > Is logging to syslog or the filesystem comparable with the speed at which > nginx can > serve content? If not, how does nginx keep up with logging so many > requests (logging being a slow operation)? 
> > > Thanks, > Alex > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Tue Nov 11 14:02:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Nov 2014 14:02:26 +0000 Subject: [nginx] Upstream: add "proxy_ssl_certificate" and friends. Message-ID: details: http://hg.nginx.org/nginx/rev/20d966ad5e89 branches: changeset: 5900:20d966ad5e89 user: Piotr Sikora date: Thu Oct 30 04:30:41 2014 -0700 description: Upstream: add "proxy_ssl_certificate" and friends. Signed-off-by: Piotr Sikora diffstat: src/http/modules/ngx_http_proxy_module.c | 81 ++++++++++++++++++++++++++++++++ src/http/modules/ngx_http_uwsgi_module.c | 73 ++++++++++++++++++++++++++++ 2 files changed, 154 insertions(+), 0 deletions(-) diffs (265 lines): diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c +++ b/src/http/modules/ngx_http_proxy_module.c @@ -84,6 +84,9 @@ typedef struct { ngx_uint_t ssl_verify_depth; ngx_str_t ssl_trusted_certificate; ngx_str_t ssl_crl; + ngx_str_t ssl_certificate; + ngx_str_t ssl_certificate_key; + ngx_array_t *ssl_passwords; #endif } ngx_http_proxy_loc_conf_t; @@ -162,6 +165,10 @@ static char *ngx_http_proxy_cache(ngx_co static char *ngx_http_proxy_cache_key(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); #endif +#if (NGX_HTTP_SSL) +static char *ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); +#endif static char *ngx_http_proxy_lowat_check(ngx_conf_t *cf, void *post, void *data); @@ -626,6 +633,27 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, ssl_crl), NULL }, + { ngx_string("proxy_ssl_certificate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, 
ssl_certificate), + NULL }, + + { ngx_string("proxy_ssl_certificate_key"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, ssl_certificate_key), + NULL }, + + { ngx_string("proxy_ssl_password_file"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_http_proxy_ssl_password_file, + NGX_HTTP_LOC_CONF_OFFSET, + 0, + NULL }, + #endif ngx_null_command @@ -2479,6 +2507,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->ssl_ciphers = { 0, NULL }; * conf->ssl_trusted_certificate = { 0, NULL }; * conf->ssl_crl = { 0, NULL }; + * conf->ssl_certificate = { 0, NULL }; + * conf->ssl_certificate_key = { 0, NULL }; */ conf->upstream.store = NGX_CONF_UNSET; @@ -2527,6 +2557,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.ssl_server_name = NGX_CONF_UNSET; conf->upstream.ssl_verify = NGX_CONF_UNSET; conf->ssl_verify_depth = NGX_CONF_UNSET_UINT; + conf->ssl_passwords = NGX_CONF_UNSET_PTR; #endif /* "proxy_cyclic_temp_file" is disabled */ @@ -2836,6 +2867,12 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t prev->ssl_trusted_certificate, ""); ngx_conf_merge_str_value(conf->ssl_crl, prev->ssl_crl, ""); + ngx_conf_merge_str_value(conf->ssl_certificate, + prev->ssl_certificate, ""); + ngx_conf_merge_str_value(conf->ssl_certificate_key, + prev->ssl_certificate_key, ""); + ngx_conf_merge_ptr_value(conf->ssl_passwords, prev->ssl_passwords, NULL); + if (conf->ssl && ngx_http_proxy_set_ssl(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -3837,6 +3874,33 @@ ngx_http_proxy_cache_key(ngx_conf_t *cf, #endif +#if (NGX_HTTP_SSL) + +static char * +ngx_http_proxy_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_proxy_loc_conf_t *plcf = conf; + + ngx_str_t *value; + + if (plcf->ssl_passwords != NGX_CONF_UNSET_PTR) { + return "is duplicate"; + } + + value = cf->args->elts; + + plcf->ssl_passwords = 
ngx_ssl_read_password_file(cf, &value[1]); + + if (plcf->ssl_passwords == NULL) { + return NGX_CONF_ERROR; + } + + return NGX_CONF_OK; +} + +#endif + + static char * ngx_http_proxy_lowat_check(ngx_conf_t *cf, void *post, void *data) { @@ -3894,6 +3958,23 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n cln->handler = ngx_ssl_cleanup_ctx; cln->data = plcf->upstream.ssl; + if (plcf->ssl_certificate.len) { + + if (plcf->ssl_certificate_key.len == 0) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no \"proxy_ssl_certificate_key\" is defined " + "for certificate \"%V\"", &plcf->ssl_certificate); + return NGX_ERROR; + } + + if (ngx_ssl_certificate(cf, plcf->upstream.ssl, &plcf->ssl_certificate, + &plcf->ssl_certificate_key, plcf->ssl_passwords) + != NGX_OK) + { + return NGX_ERROR; + } + } + if (SSL_CTX_set_cipher_list(plcf->upstream.ssl->ctx, (const char *) plcf->ssl_ciphers.data) == 0) diff --git a/src/http/modules/ngx_http_uwsgi_module.c b/src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c +++ b/src/http/modules/ngx_http_uwsgi_module.c @@ -42,6 +42,9 @@ typedef struct { ngx_uint_t ssl_verify_depth; ngx_str_t ssl_trusted_certificate; ngx_str_t ssl_crl; + ngx_str_t ssl_certificate; + ngx_str_t ssl_certificate_key; + ngx_array_t *ssl_passwords; #endif } ngx_http_uwsgi_loc_conf_t; @@ -76,6 +79,8 @@ static char *ngx_http_uwsgi_cache_key(ng #endif #if (NGX_HTTP_SSL) +static char *ngx_http_uwsgi_ssl_password_file(ngx_conf_t *cf, + ngx_command_t *cmd, void *conf); static ngx_int_t ngx_http_uwsgi_set_ssl(ngx_conf_t *cf, ngx_http_uwsgi_loc_conf_t *uwcf); #endif @@ -482,6 +487,27 @@ static ngx_command_t ngx_http_uwsgi_comm offsetof(ngx_http_uwsgi_loc_conf_t, ssl_crl), NULL }, + { ngx_string("uwsgi_ssl_certificate"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, ssl_certificate), + NULL }, + + { 
ngx_string("uwsgi_ssl_certificate_key"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_str_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, ssl_certificate_key), + NULL }, + + { ngx_string("uwsgi_ssl_password_file"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_http_uwsgi_ssl_password_file, + NGX_HTTP_LOC_CONF_OFFSET, + 0, + NULL }, + #endif ngx_null_command @@ -1326,6 +1352,7 @@ ngx_http_uwsgi_create_loc_conf(ngx_conf_ conf->upstream.ssl_server_name = NGX_CONF_UNSET; conf->upstream.ssl_verify = NGX_CONF_UNSET; conf->ssl_verify_depth = NGX_CONF_UNSET_UINT; + conf->ssl_passwords = NGX_CONF_UNSET_PTR; #endif /* "uwsgi_cyclic_temp_file" is disabled */ @@ -1619,6 +1646,12 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t prev->ssl_trusted_certificate, ""); ngx_conf_merge_str_value(conf->ssl_crl, prev->ssl_crl, ""); + ngx_conf_merge_str_value(conf->ssl_certificate, + prev->ssl_certificate, ""); + ngx_conf_merge_str_value(conf->ssl_certificate_key, + prev->ssl_certificate_key, ""); + ngx_conf_merge_ptr_value(conf->ssl_passwords, prev->ssl_passwords, NULL); + if (conf->ssl && ngx_http_uwsgi_set_ssl(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -2109,6 +2142,29 @@ ngx_http_uwsgi_cache_key(ngx_conf_t *cf, #if (NGX_HTTP_SSL) +static char * +ngx_http_uwsgi_ssl_password_file(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) +{ + ngx_http_uwsgi_loc_conf_t *uwcf = conf; + + ngx_str_t *value; + + if (uwcf->ssl_passwords != NGX_CONF_UNSET_PTR) { + return "is duplicate"; + } + + value = cf->args->elts; + + uwcf->ssl_passwords = ngx_ssl_read_password_file(cf, &value[1]); + + if (uwcf->ssl_passwords == NULL) { + return NGX_CONF_ERROR; + } + + return NGX_CONF_OK; +} + + static ngx_int_t ngx_http_uwsgi_set_ssl(ngx_conf_t *cf, ngx_http_uwsgi_loc_conf_t *uwcf) { @@ -2135,6 +2191,23 @@ ngx_http_uwsgi_set_ssl(ngx_conf_t *cf, n cln->handler = ngx_ssl_cleanup_ctx; cln->data = uwcf->upstream.ssl; + 
if (uwcf->ssl_certificate.len) { + + if (uwcf->ssl_certificate_key.len == 0) { + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, + "no \"uwsgi_ssl_certificate_key\" is defined " + "for certificate \"%V\"", &uwcf->ssl_certificate); + return NGX_ERROR; + } + + if (ngx_ssl_certificate(cf, uwcf->upstream.ssl, &uwcf->ssl_certificate, + &uwcf->ssl_certificate_key, uwcf->ssl_passwords) + != NGX_OK) + { + return NGX_ERROR; + } + } + if (SSL_CTX_set_cipher_list(uwcf->upstream.ssl->ctx, (const char *) uwcf->ssl_ciphers.data) == 0) From mdounin at mdounin.ru Tue Nov 11 14:02:29 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Nov 2014 14:02:29 +0000 Subject: [nginx] Upstream: support named location for X-Accel-Redirect. Message-ID: details: http://hg.nginx.org/nginx/rev/7d7eac6e31df branches: changeset: 5901:7d7eac6e31df user: Toshikuni Fukaya date: Tue Nov 04 19:56:23 2014 +0900 description: Upstream: support named location for X-Accel-Redirect. diffstat: src/http/ngx_http_upstream.c | 33 ++++++++++++++++++++------------- 1 files changed, 20 insertions(+), 13 deletions(-) diffs (43 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2218,19 +2218,26 @@ ngx_http_upstream_process_headers(ngx_ht } uri = u->headers_in.x_accel_redirect->value; - ngx_str_null(&args); - flags = NGX_HTTP_LOG_UNSAFE; - - if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { - ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); - return NGX_DONE; - } - - if (r->method != NGX_HTTP_HEAD) { - r->method = NGX_HTTP_GET; - } - - ngx_http_internal_redirect(r, &uri, &args); + + if (uri.data[0] == '@') { + ngx_http_named_location(r, &uri); + + } else { + ngx_str_null(&args); + flags = NGX_HTTP_LOG_UNSAFE; + + if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { + ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); + return NGX_DONE; + } + + if (r->method != NGX_HTTP_HEAD) { + 
r->method = NGX_HTTP_GET; + } + + ngx_http_internal_redirect(r, &uri, &args); + } + ngx_http_finalize_request(r, NGX_DONE); return NGX_DONE; } From mdounin at mdounin.ru Tue Nov 11 14:02:49 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Nov 2014 17:02:49 +0300 Subject: [PATCH] Upstream: add "proxy_ssl_certificate" and friends In-Reply-To: References: <20141110165958.GY22132@mdounin.ru> Message-ID: <20141111140249.GC90224@mdounin.ru> Hello! On Mon, Nov 10, 2014 at 12:59:29PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > I think that it would be better to preserve current style used in > > the proxy module by placing configuration directive handling into > > the block with other configuration directives, like this: > > Looks good, thanks. Thanks, committed. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 11 14:03:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Nov 2014 17:03:28 +0300 Subject: [PATCH] Upstream: support named location for X-Accel-Redirect In-Reply-To: <5461593C.2070803@cybozu.co.jp> References: <6f4517db02a8cd4068b9.1415099924@fukaya-VirtualBox> <20141110180244.GZ22132@mdounin.ru> <5461593C.2070803@cybozu.co.jp> Message-ID: <20141111140328.GD90224@mdounin.ru> Hello! On Tue, Nov 11, 2014 at 09:33:00AM +0900, Toshikuni Fukaya wrote: > Hi, > > (2014/11/11 3:02), Maxim Dounin wrote: > >Hello! > > > >On Tue, Nov 04, 2014 at 08:18:44PM +0900, Toshikuni Fukaya wrote: > > > >># HG changeset patch > >># User Toshikuni Fukaya > >># Date 1415098583 -32400 > >># Node ID 6f4517db02a8cd4068b9378bd93fe6290f54720d > >># Parent dff86e2246a53b0f4a61935cd5c8c0a0f66d0ca2 > >>Upstream: support named location for X-Accel-Redirect. 
> >> > >>diff -r dff86e2246a5 -r 6f4517db02a8 src/http/ngx_http_upstream.c > >>--- a/src/http/ngx_http_upstream.c Mon Aug 25 13:41:31 2014 +0400 > >>+++ b/src/http/ngx_http_upstream.c Tue Nov 04 19:56:23 2014 +0900 > >>@@ -2218,19 +2218,25 @@ > >> } > >> > >> uri = u->headers_in.x_accel_redirect->value; > >>- ngx_str_null(&args); > >>- flags = NGX_HTTP_LOG_UNSAFE; > >>- > >>- if (ngx_http_parse_unsafe_uri(r, &uri, &args, &flags) != NGX_OK) { > >>- ngx_http_finalize_request(r, NGX_HTTP_NOT_FOUND); > >>- return NGX_DONE; > >>- } > >>- > >>- if (r->method != NGX_HTTP_HEAD) { > >>- r->method = NGX_HTTP_GET; > >>- } > >>- > >>- ngx_http_internal_redirect(r, &uri, &args); > >>+ > >>+ if (uri.len > 0 && uri.data[0] == '@') { > >>+ ngx_http_named_location(r, &uri); > >>+ } else { > >>+ ngx_str_null(&args); > >>+ flags = NGX_HTTP_LOG_UNSAFE; > > > > > >The uri here is required to be null-terminated, so the uri.len > >check can be safely dropped. With this and an extra line added as > >per style: > > > >diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c > >--- a/src/http/ngx_http_upstream.c > >+++ b/src/http/ngx_http_upstream.c > >@@ -2219,8 +2219,9 @@ ngx_http_upstream_process_headers(ngx_ht > > > > uri = u->headers_in.x_accel_redirect->value; > > > >- if (uri.len > 0 && uri.data[0] == '@') { > >+ if (uri.data[0] == '@') { > > ngx_http_named_location(r, &uri); > >+ > > } else { > > ngx_str_null(&args); > > flags = NGX_HTTP_LOG_UNSAFE; > > > > > >Please let me know if looks good for you. > > > > It's good. Thanks a lot for explanation of nginx internals. Thanks, committed. 
-- Maxim Dounin http://nginx.org/ From flevionnois at gmail.com Thu Nov 13 16:11:25 2014 From: flevionnois at gmail.com (Franck Levionnois) Date: Thu, 13 Nov 2014 17:11:25 +0100 Subject: [PATCH] SSL support for the mail proxy module In-Reply-To: <1672749644.4651636.1414693504176.JavaMail.zimbra@zimbra.com> References: <5c2524403ab7c870b1fa.1410650071@zdev-vm048.eng.zimbra.com> <8B3FA009-99B4-43AD-A207-5BE41FA58ECD@gmail.com> <607846621.1950104.1410852617470.JavaMail.zimbra@zimbra.com> <20140916120306.GF59236@mdounin.ru> <897609716.2187306.1410913349045.JavaMail.zimbra@zimbra.com> <1497581210.2187358.1410913427096.JavaMail.zimbra@zimbra.com> <655993562.4651608.1414693499438.JavaMail.zimbra@zimbra.com> <1672749644.4651636.1414693504176.JavaMail.zimbra@zimbra.com> Message-ID: Hello, I think that the mail hasn't been seen by the devel forum. Kind regards, Franck. 2014-10-30 19:25 GMT+01:00 Kunal Pariani : Hello, Any reason for this patch not being committed upstream yet ? Thanks -Kunal 2014-10-30 19:25 GMT+01:00 Kunal Pariani : > Hello, > Any reason for this patch not being committed upstream yet ? > > Thanks > -Kunal > > ------------------------------ > *From: *"Franck Levionnois" > *To: *"nginx-devel" , "Kunal Pariani" < > kpariani at zimbra.com> > *Sent: *Tuesday, October 21, 2014 12:59:04 AM > > *Subject: *Re: [PATCH] SSL support for the mail proxy module > > Hello, > > The patch below was submitted some months ago. It does about the same, > and it supports returning a name from the auth script. > > Kind regards > Franck Levionnois. 
> > ---------- Forwarded message ---------- > From: > Date: 2014-01-24 21:40 GMT+01:00 > Subject: [PATCH 1 of 1] Mail: added support for SSL client certificate > To: nginx-devel at nginx.org > > > # HG changeset patch > # User Franck Levionnois > # Date 1390577176 -3600 > # Fri Jan 24 16:26:16 2014 +0100 > # Node ID d7b8381c200e300c2b6729574f4c2a > b537804f56 > # Parent a387ce36744aa36b50e8171dbf01ef716748327e > Mail: added support for SSL client certificate > > Add support for SSL module like HTTP. > > Added mail configuration directives (like http): > ssl_verify_client, ssl_verify_depth, ssl_client_certificate, > ssl_trusted_certificate, ssl_crl > > Added headers: > Auth-Certificate, Auth-Certificate-Verify, Auth-Issuer-DN, > Auth-Subject-DN, Auth-Subject-Serial > > diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_auth_http_module.c > --- a/src/mail/ngx_mail_auth_http_module.c Thu Jan 23 22:09:59 2014 > +0900 > +++ b/src/mail/ngx_mail_auth_http_module.c Fri Jan 24 16:26:16 2014 > +0100 > @@ -1135,6 +1135,32 @@ > "mail auth http dummy handler"); > } > > +#if (NGX_MAIL_SSL) > +static ngx_int_t > +ngx_ssl_get_certificate_oneline(ngx_connection_t *c, ngx_pool_t *pool, > + ngx_str_t *b64_cert) > +{ > + ngx_str_t pemCert; > + if (ngx_ssl_get_raw_certificate(c, pool, &pemCert) != NGX_OK) { > + return NGX_ERROR; > + } > + > + if (pemCert.len == 0) { > + b64_cert->len = 0; > + return NGX_OK; > + } > + > + b64_cert->len = ngx_base64_encoded_length(pemCert.len); > + b64_cert->data = ngx_palloc( pool, b64_cert->len); > + if (b64_cert->data == NULL) { > + b64_cert->len = 0; > + return NGX_ERROR; > + } > + ngx_encode_base64(b64_cert, &pemCert); > + > + return NGX_OK; > +} > +#endif > > static ngx_buf_t * > ngx_mail_auth_http_create_request(ngx_mail_session_t *s, ngx_pool_t *pool, > @@ -1142,7 +1168,9 @@ > { > size_t len; > ngx_buf_t *b; > - ngx_str_t login, passwd; > + ngx_str_t login, passwd, client_cert, client_verify, > + client_subject, client_issuer, > + 
client_serial; > ngx_mail_core_srv_conf_t *cscf; > > if (ngx_mail_auth_http_escape(pool, &s->login, &login) != NGX_OK) { > @@ -1155,6 +1183,42 @@ > > cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); > > +#if (NGX_MAIL_SSL) > + if (s->connection->ssl) { > + if (ngx_ssl_get_client_verify(s->connection, pool, > + &client_verify) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_subject_dn(s->connection, pool, > + &client_subject) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_issuer_dn(s->connection, pool, > + &client_issuer) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_serial_number(s->connection, pool, > + &client_serial) != NGX_OK) { > + return NULL; > + } > + > + if (ngx_ssl_get_certificate_oneline(s->connection, pool, > + &client_cert) != NGX_OK) { > + return NULL; > + } > + } else { > + client_verify.len = 0; > + client_issuer.len = 0; > + client_subject.len = 0; > + client_serial.len = 0; > + client_cert.len = 0; > + } > + > +#endif > + > len = sizeof("GET ") - 1 + ahcf->uri.len + sizeof(" HTTP/1.0" CRLF) - > 1 > + sizeof("Host: ") - 1 + ahcf->host_header.len + sizeof(CRLF) - > 1 > + sizeof("Auth-Method: ") - 1 > @@ -1163,6 +1227,18 @@ > + sizeof("Auth-User: ") - 1 + login.len + sizeof(CRLF) - 1 > + sizeof("Auth-Pass: ") - 1 + passwd.len + sizeof(CRLF) - 1 > + sizeof("Auth-Salt: ") - 1 + s->salt.len > +#if (NGX_MAIL_SSL) > + + sizeof("Auth-Certificate: ") - 1 + client_cert.len > + + sizeof(CRLF) - 1 > + + sizeof("Auth-Certificate-Verify: ") - 1 + client_verify.len > + + sizeof(CRLF) - 1 > + + sizeof("Auth-Issuer-DN: ") - 1 + client_issuer.len > + + sizeof(CRLF) - 1 > + + sizeof("Auth-Subject-DN: ") - 1 + client_subject.len > + + sizeof(CRLF) - 1 > + + sizeof("Auth-Subject-Serial: ") - 1 + client_serial.len > + + sizeof(CRLF) - 1 > +#endif > + sizeof("Auth-Protocol: ") - 1 + cscf->protocol->name.len > + sizeof(CRLF) - 1 > + sizeof("Auth-Login-Attempt: ") - 1 + NGX_INT_T_LEN > @@ -1212,7 +1288,43 @@ > > 
s->passwd.data = NULL; > } > - > +#if (NGX_MAIL_SSL) > + if ( client_cert.len ) > + { > + b->last = ngx_cpymem(b->last, "Auth-Certificate: ", > + sizeof("Auth-Certificate: ") - 1); > + b->last = ngx_copy(b->last, client_cert.data, client_cert.len); > + *b->last++ = CR; *b->last++ = LF; > + } > + if ( client_verify.len ) > + { > + b->last = ngx_cpymem(b->last, "Auth-Certificate-Verify: ", > + sizeof("Auth-Certificate-Verify: ") - 1); > + b->last = ngx_copy(b->last, client_verify.data, > client_verify.len); > + *b->last++ = CR; *b->last++ = LF; > + } > + if ( client_issuer.len ) > + { > + b->last = ngx_cpymem(b->last, "Auth-Issuer-DN: ", > + sizeof("Auth-Issuer-DN: ") - 1); > + b->last = ngx_copy(b->last, client_issuer.data, > client_issuer.len); > + *b->last++ = CR; *b->last++ = LF; > + } > + if ( client_subject.len ) > + { > + b->last = ngx_cpymem(b->last, "Auth-Subject-DN: ", > + sizeof("Auth-Subject-DN: ") - 1); > + b->last = ngx_copy(b->last, client_subject.data, > client_subject.len); > + *b->last++ = CR; *b->last++ = LF; > + } > + if ( client_serial.len ) > + { > + b->last = ngx_cpymem(b->last, "Auth-Subject-Serial: ", > + sizeof("Auth-Subject-Serial: ") - 1); > + b->last = ngx_copy(b->last, client_serial.data, > client_serial.len); > + *b->last++ = CR; *b->last++ = LF; > + } > +#endif > b->last = ngx_cpymem(b->last, "Auth-Protocol: ", > sizeof("Auth-Protocol: ") - 1); > b->last = ngx_cpymem(b->last, cscf->protocol->name.data, > diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_handler.c > --- a/src/mail/ngx_mail_handler.c Thu Jan 23 22:09:59 2014 +0900 > +++ b/src/mail/ngx_mail_handler.c Fri Jan 24 16:26:16 2014 +0100 > @@ -236,11 +236,59 @@ > { > ngx_mail_session_t *s; > ngx_mail_core_srv_conf_t *cscf; > +#if (NGX_MAIL_SSL) > + ngx_mail_ssl_conf_t *sslcf; > +#endif > + > + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "ngx_mail_ssl_handshake_handler handshaked: %d ", > + c->ssl->handshaked ); > > if (c->ssl->handshaked) { > > s = c->data; > > +#if 
(NGX_MAIL_SSL) > + sslcf = ngx_mail_get_module_srv_conf(s, ngx_mail_ssl_module); > + if (sslcf->verify) { > + long rc; > + > + rc = SSL_get_verify_result(c->ssl->connection); > + > + if (rc != X509_V_OK > + && (sslcf->verify != 3 || > !ngx_ssl_verify_error_optional(rc))) > + { > + ngx_log_error(NGX_LOG_INFO, c->log, 0, > + "client SSL certificate verify error: (%l:%s)", > + rc, X509_verify_cert_error_string(rc)); > + > + ngx_ssl_remove_cached_session(sslcf->ssl.ctx, > + (SSL_get0_session(c->ssl->connection))); > + > + ngx_mail_close_connection(c); > + return; > + } > + > + if (sslcf->verify == 1) { > + X509 *cert; > + cert = SSL_get_peer_certificate(c->ssl->connection); > + > + if (cert == NULL) { > + ngx_log_error(NGX_LOG_INFO, c->log, 0, > + "client sent no required SSL certificate"); > + > + ngx_ssl_remove_cached_session(sslcf->ssl.ctx, > + (SSL_get0_session(c->ssl->connection))); > + > + ngx_mail_close_connection(c); > + return; > + } > + > + X509_free(cert); > + } > + } > +#endif > + > if (s->starttls) { > cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module); > > @@ -276,6 +324,10 @@ > > s->protocol = cscf->protocol->type; > > + ngx_log_debug1(NGX_LOG_DEBUG_MAIL, c->log, 0, > + "ngx_mail_init_session protocol: %d ", > + cscf->protocol->type ); > + > s->ctx = ngx_pcalloc(c->pool, sizeof(void *) * ngx_mail_max_module); > if (s->ctx == NULL) { > ngx_mail_session_internal_server_error(s); > diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_ssl_module.c > --- a/src/mail/ngx_mail_ssl_module.c Thu Jan 23 22:09:59 2014 +0900 > +++ b/src/mail/ngx_mail_ssl_module.c Fri Jan 24 16:26:16 2014 +0100 > @@ -43,6 +43,13 @@ > { ngx_null_string, 0 } > }; > > +static ngx_conf_enum_t ngx_mail_ssl_verify[] = { > + { ngx_string("off"), 0 }, > + { ngx_string("on"), 1 }, > + { ngx_string("optional"), 2 }, > + { ngx_string("optional_no_ca"), 3 }, > + { ngx_null_string, 0 } > +}; > > static ngx_command_t ngx_mail_ssl_commands[] = { > > @@ -102,6 +109,34 @@ > 
offsetof(ngx_mail_ssl_conf_t, ciphers), > NULL }, > > + { ngx_string("ssl_verify_client"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_enum_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, verify), > + &ngx_mail_ssl_verify }, > + > + { ngx_string("ssl_verify_depth"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_1MORE, > + ngx_conf_set_num_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, verify_depth), > + NULL }, > + > + { ngx_string("ssl_client_certificate"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_str_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, client_certificate), > + NULL }, > + > + { ngx_string("ssl_trusted_certificate"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_str_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, trusted_certificate), > + NULL }, > + > { ngx_string("ssl_prefer_server_ciphers"), > NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_FLAG, > ngx_conf_set_flag_slot, > @@ -137,6 +172,13 @@ > offsetof(ngx_mail_ssl_conf_t, session_timeout), > NULL }, > > + { ngx_string("ssl_crl"), > + NGX_MAIL_MAIN_CONF|NGX_MAIL_SRV_CONF|NGX_CONF_TAKE1, > + ngx_conf_set_str_slot, > + NGX_MAIL_SRV_CONF_OFFSET, > + offsetof(ngx_mail_ssl_conf_t, crl), > + NULL }, > + > ngx_null_command > }; > > @@ -196,6 +238,8 @@ > scf->enable = NGX_CONF_UNSET; > scf->starttls = NGX_CONF_UNSET_UINT; > scf->prefer_server_ciphers = NGX_CONF_UNSET; > + scf->verify = NGX_CONF_UNSET_UINT; > + scf->verify_depth = NGX_CONF_UNSET_UINT; > scf->builtin_session_cache = NGX_CONF_UNSET; > scf->session_timeout = NGX_CONF_UNSET; > scf->session_tickets = NGX_CONF_UNSET; > @@ -228,11 +272,20 @@ > (NGX_CONF_BITMASK_SET|NGX_SSL_SSLv3|NGX_SSL_TLSv1 > |NGX_SSL_TLSv1_1|NGX_SSL_TLSv1_2)); > > + ngx_conf_merge_uint_value(conf->verify, prev->verify, 0); > + ngx_conf_merge_uint_value(conf->verify_depth, prev->verify_depth, 1); > 
+ > ngx_conf_merge_str_value(conf->certificate, prev->certificate, ""); > ngx_conf_merge_str_value(conf->certificate_key, > prev->certificate_key, ""); > > ngx_conf_merge_str_value(conf->dhparam, prev->dhparam, ""); > > + ngx_conf_merge_str_value(conf->client_certificate, > prev->client_certificate, > + ""); > + ngx_conf_merge_str_value(conf->trusted_certificate, > + prev->trusted_certificate, ""); > + ngx_conf_merge_str_value(conf->crl, prev->crl, ""); > + > ngx_conf_merge_str_value(conf->ecdh_curve, prev->ecdh_curve, > NGX_DEFAULT_ECDH_CURVE); > > @@ -318,6 +371,35 @@ > return NGX_CONF_ERROR; > } > > + if (conf->verify) { > + > + if (conf->client_certificate.len == 0 && conf->verify != 3) { > + ngx_log_error(NGX_LOG_EMERG, cf->log, 0, > + "no ssl_client_certificate for > ssl_client_verify"); > + return NGX_CONF_ERROR; > + } > + > + if (ngx_ssl_client_certificate(cf, &conf->ssl, > + &conf->client_certificate, > + conf->verify_depth) > + != NGX_OK) > + { > + return NGX_CONF_ERROR; > + } > + } > + > + if (ngx_ssl_trusted_certificate(cf, &conf->ssl, > + &conf->trusted_certificate, > + conf->verify_depth) > + != NGX_OK) > + { > + return NGX_CONF_ERROR; > + } > + > + if (ngx_ssl_crl(cf, &conf->ssl, &conf->crl) != NGX_OK) { > + return NGX_CONF_ERROR; > + } > + > if (conf->prefer_server_ciphers) { > SSL_CTX_set_options(conf->ssl.ctx, > SSL_OP_CIPHER_SERVER_PREFERENCE); > } > diff -r a387ce36744a -r d7b8381c200e src/mail/ngx_mail_ssl_module.h > --- a/src/mail/ngx_mail_ssl_module.h Thu Jan 23 22:09:59 2014 +0900 > +++ b/src/mail/ngx_mail_ssl_module.h Fri Jan 24 16:26:16 2014 +0100 > @@ -28,6 +28,8 @@ > ngx_uint_t starttls; > ngx_uint_t protocols; > > + ngx_uint_t verify; > + ngx_uint_t verify_depth; > ssize_t builtin_session_cache; > > time_t session_timeout; > @@ -36,6 +38,9 @@ > ngx_str_t certificate_key; > ngx_str_t dhparam; > ngx_str_t ecdh_curve; > + ngx_str_t client_certificate; > + ngx_str_t trusted_certificate; > + ngx_str_t crl; > > ngx_str_t ciphers; > > 
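For readers following along, a mail server block exercising the directives this patch introduces might look roughly like the following. This is only a sketch: the hostnames, certificate paths, and auth endpoint are placeholders, and it assumes the patch is applied as posted.

```nginx
mail {
    server_name  mail.example.com;        # placeholder
    auth_http    127.0.0.1:9000/auth;     # placeholder auth endpoint

    server {
        listen    993;
        protocol  imap;

        ssl                  on;
        ssl_certificate      /path/to/mail.crt;
        ssl_certificate_key  /path/to/mail.key;

        # directives added by the patch above
        ssl_verify_client        on;
        ssl_verify_depth         2;
        ssl_client_certificate   /path/to/clients-ca.crt;
        ssl_crl                  /path/to/clients.crl;
    }
}
```

As discussed in the thread below, one set of ssl settings applies to all mail backends until the auth script can return a per-backend name.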
2014-09-17 2:23 GMT+02:00 Kunal Pariani : > >> I guess these diffs can still be applied with the limitation of having the >> same ssl settings for all upstreams until support for returning a name from >> the auth script is added later? >> >> Thanks >> -Kunal >> >> ----- Original Message ----- >> From: "Maxim Dounin" >> To: "nginx-devel" >> Sent: Tuesday, September 16, 2014 5:03:06 AM >> Subject: Re: [PATCH] SSL support for the mail proxy module >> >> Hello! >> >> On Tue, Sep 16, 2014 at 02:30:17AM -0500, Kunal Pariani wrote: >> >> > Updated the diffs after addressing the first 2 issues. >> > Regarding the 3rd comment, you are correct. Only 1 set of ssl >> > settings for all the mail backends with my patch. I guess this >> > will be a limitation with the current mail proxy workflow? I am >> > not sure of the exact changes that will be required to address >> > this issue completely. >> >> Probably, returning a name from the auth script is the way to go. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From admin at jtlebi.fr Sat Nov 15 16:25:36 2014 From: admin at jtlebi.fr (Jean-Tiare LE BIGOT) Date: Sat, 15 Nov 2014 17:25:36 +0100 Subject: [PATCH] RFC: upgrade connection for proxied docker streams Message-ID: <54677E80.1020306@jtlebi.fr> Hi, This patch adds support for proxying Docker's "hijacked" HTTP connections. It shares most logic with the WebSocket "upgrade" mechanism. Basically, it detects the "application/vnd.docker.raw-stream" Content-Type header and marks the connection for upgrade in this case.
I also hit a strange behavior with the chunked transfer terminating sequence "0\r\n\r\n" being sent even on this upgraded connection, hence the "u->headers_in.chunked = 0;" in "ngx_http_upstream_upgrade". Feedback appreciated. Regards, PS: I failed to register with this mailing list using jt AT yadutaf DOT fr, any idea why? -- Jean-Tiare -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.patch Type: text/x-patch Size: 2752 bytes Desc: not available URL: From sepherosa at gmail.com Sun Nov 16 09:07:12 2014 From: sepherosa at gmail.com (Sepherosa Ziehau) Date: Sun, 16 Nov 2014 17:07:12 +0800 Subject: [Patch] SO_REUSEPORT support from master process In-Reply-To: <9ACD5B67AAC5594CB6268234CF29CF9AA37EA433@ORSMSX113.amr.corp.intel.com> References: <9ACD5B67AAC5594CB6268234CF29CF9AA37A52F5@ORSMSX113.amr.corp.intel.com> <9ACD5B67AAC5594CB6268234CF29CF9AA37D0C4E@ORSMSX113.amr.corp.intel.com> <20141008125848.GA31276@mdounin.ru> <9ACD5B67AAC5594CB6268234CF29CF9AA37D1997@ORSMSX113.amr.corp.intel.com> <9ACD5B67AAC5594CB6268234CF29CF9AA37D1B44@ORSMSX113.amr.corp.intel.com> <9ACD5B67AAC5594CB6268234CF29CF9AA37EA433@ORSMSX113.amr.corp.intel.com> Message-ID: Heh, I never made that patch for Linux, and I don't use Linux ;). And since Linux does not inherit sockets, once the SO_REUSEPORT listen socket is closed you would see the strange timeouts you described (as I had told you, Linux is missing the socket inheritance feature). On DFLY, we could max out all CPUs w/o problem and had no strange timeouts or connection drops when doing binary upgrading. Well, at least your test result suggests that for SO_REUSEPORT, different OSes may eventually need OS-specific implementations to achieve better performance or to keep binary upgrading working. You could probably keep your patch as a Linux-specific patch, as we do for the dports.
Best Regards, sephe On Fri, Oct 31, 2014 at 6:24 AM, Lu, Yingqi wrote: > Hi All, > > We tested the dragonfly approach on Linux (RHEL 6.5 with kernel 3.13.9). We used the same testing environment for both our patch and the dragonfly patch. Here is what we found: > > 1. Our patch has 36% better performance (operations/sec) compared to the dragonfly patch. > 2. Our patch has 53% lower response time compared to the dragonfly approach, even at a 36% higher throughput level. > 3. Our patch can scale the CPU utilization and frequency to the max capacity while the dragonfly patch cannot. > 4. Our patch does not have any issues with "upgrade binary on the fly". However, the dragonfly patch creates a spike in the response time during the upgrade. It also has lots of connection timeouts/losses under high load. > > The above findings are based on Linux. > > Thanks, > Yingqi > > -----Original Message----- > From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Lu, Yingqi > Sent: Wednesday, October 08, 2014 11:24 AM > To: nginx-devel at nginx.org > Subject: RE: [Patch] SO_REUSEPORT support from master process > > One more comment from me: duplicating listen sockets in the kernel is not a trivial thing to do, and it may take a long time before people can see it. Addressing it in nginx may not be as ideal as in the kernel, but at least users can see the performance improvement sooner. In fact, we see up to 48% performance improvement on modern Intel systems. Just my two cents. > > Again, thanks very much to everyone for helping us review this. > > Thanks, > Yingqi > > -----Original Message----- > From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Lu, Yingqi > Sent: Wednesday, October 08, 2014 10:05 AM > To: nginx-devel at nginx.org > Subject: RE: [Patch] SO_REUSEPORT support from master process > > Hi Maxim, > > Thanks for letting us know.
> > Our updated patch is located at http://forum.nginx.org/read.php?29,253446,253446#msg-253446 > > It is supposed to address all the style issues and fix the restart and binary upgrade issues. This is just an FYI in case you are not aware of it. > > Thanks, > Yingqi > > -----Original Message----- > From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin > Sent: Wednesday, October 08, 2014 5:59 AM > To: nginx-devel at nginx.org > Subject: Re: [Patch] SO_REUSEPORT support from master process > > Hello! > > On Tue, Oct 07, 2014 at 07:32:08PM +0000, Lu, Yingqi wrote: > >> Dear All, >> >> It has been quiet for a while on this patch. I am checking to see if >> there are any questions/feedback/concerns we need to address. >> >> Please let me know. Thanks very much for your help! > > Apart from style/coding issues, I disagree with the whole approach. > > As far as I understand the patch idea, it tries to introduce multiple listening sockets to avoid in-kernel lock contention. > This is something that can be done completely in the kernel though, and I see no reason to introduce any changes to nginx here. > > The approach previously discussed with Sepherosa Ziehau looks much more interesting.
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Tomorrow Will Never Die From mdounin at mdounin.ru Sun Nov 16 11:38:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 16 Nov 2014 14:38:35 +0300 Subject: [PATCH] RFC: upgrade connection for proxied docker streams In-Reply-To: <54677E80.1020306@jtlebi.fr> References: <54677E80.1020306@jtlebi.fr> Message-ID: <20141116113835.GV98044@mdounin.ru> Hello! On Sat, Nov 15, 2014 at 05:25:36PM +0100, Jean-Tiare LE BIGOT wrote: > Hi, > > This patch adds support for proxying Docker's "hijacked" HTTP > connections. It shares most logic with the WebSocket "upgrade" mechanism. > > Basically, it detects the "application/vnd.docker.raw-stream" > Content-Type header and marks the connection for upgrade in this case. This looks like a dirty hack to me. It may be a better idea to focus on fixing Docker to use HTTP properly. [...]
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sun Nov 16 11:50:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 16 Nov 2014 14:50:56 +0300 Subject: [Patch] SO_REUSEPORT support from master process In-Reply-To: References: <9ACD5B67AAC5594CB6268234CF29CF9AA37A52F5@ORSMSX113.amr.corp.intel.com> <9ACD5B67AAC5594CB6268234CF29CF9AA37D0C4E@ORSMSX113.amr.corp.intel.com> <20141008125848.GA31276@mdounin.ru> <9ACD5B67AAC5594CB6268234CF29CF9AA37D1997@ORSMSX113.amr.corp.intel.com> <9ACD5B67AAC5594CB6268234CF29CF9AA37D1B44@ORSMSX113.amr.corp.intel.com> <9ACD5B67AAC5594CB6268234CF29CF9AA37EA433@ORSMSX113.amr.corp.intel.com> Message-ID: <20141116115056.GX98044@mdounin.ru> Hello! On Sun, Nov 16, 2014 at 05:07:12PM +0800, Sepherosa Ziehau wrote: > Heh, I never made that patch for Linux, and I don't use Linux ;). And > since Linux does not inherit sockets, once the SO_REUSEPORT listen socket > is closed you would see the strange timeouts you > described (as I had told you, Linux is missing the socket inheritance > feature). On DFLY, we could max out all CPUs w/o problem and had no > strange timeouts or connection drops when doing binary upgrading. > > Well, at least your test result suggests that for SO_REUSEPORT, different > OSes may eventually need OS-specific implementations to achieve better > performance or to keep binary upgrading working. You could probably keep > your patch as a Linux-specific patch, as we do for the dports. I would rather suggest that the tests are flawed and need to be redone from scratch by someone who will be able to share testing details. > > Best Regards, > sephe > > > On Fri, Oct 31, 2014 at 6:24 AM, Lu, Yingqi wrote: > > Hi All, > > > > We tested the dragonfly approach on Linux (RHEL 6.5 with kernel 3.13.9). We used the same testing environment for both our patch and the dragonfly patch. Here is what we found: > > > > 1. Our patch has 36% better performance (operations/sec) compared to the dragonfly patch. > > 2.
Our patch has 53% lower response time compared to the dragonfly approach, even at a 36% higher throughput level. > > 3. Our patch can scale the CPU utilization and frequency to the max capacity while the dragonfly patch cannot. > > 4. Our patch does not have any issues with "upgrade binary on the fly". However, the dragonfly patch creates a spike in the response time during the upgrade. It also has lots of connection timeouts/losses under high load. > > > > The above findings are based on Linux. > > > > Thanks, > > Yingqi > > > > -----Original Message----- > > From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Lu, Yingqi > > Sent: Wednesday, October 08, 2014 11:24 AM > > To: nginx-devel at nginx.org > > Subject: RE: [Patch] SO_REUSEPORT support from master process > > > > One more comment from me: duplicating listen sockets in the kernel is not a trivial thing to do, and it may take a long time before people can see it. Addressing it in nginx may not be as ideal as in the kernel, but at least users can see the performance improvement sooner. In fact, we see up to 48% performance improvement on modern Intel systems. Just my two cents. > > > > Again, thanks very much to everyone for helping us review this. > > > > Thanks, > > Yingqi > > > > -----Original Message----- > > From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Lu, Yingqi > > Sent: Wednesday, October 08, 2014 10:05 AM > > To: nginx-devel at nginx.org > > Subject: RE: [Patch] SO_REUSEPORT support from master process > > > > Hi Maxim, > > > > Thanks for letting us know. > > > > Our updated patch is located at http://forum.nginx.org/read.php?29,253446,253446#msg-253446 > > > > It is supposed to address all the style issues and fix the restart and binary upgrade issues. This is just an FYI in case you are not aware of it.
> > > > Thanks, > > Yingqi > > > > -----Original Message----- > > From: nginx-devel-bounces at nginx.org [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin > > Sent: Wednesday, October 08, 2014 5:59 AM > > To: nginx-devel at nginx.org > > Subject: Re: [Patch] SO_REUSEPORT support from master process > > > > Hello! > > > > On Tue, Oct 07, 2014 at 07:32:08PM +0000, Lu, Yingqi wrote: > > > >> Dear All, > >> > >> It has been quiet for a while on this patch. I am checking to see if > >> there are any questions/feedback/concerns we need to address. > >> > >> Please let me know. Thanks very much for your help! > > > > Apart from style/coding issues, I disagree with the whole approach. > > > > As far as I understand the patch idea, it tries to introduce multiple listening sockets to avoid in-kernel lock contention. > > This is something that can be done completely in the kernel though, and I see no reason to introduce any changes to nginx here. > > > > The approach previously discussed with Sepherosa Ziehau looks much more interesting.
> > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > _______________________________________________ > > nginx-devel mailing list > > nginx-devel at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > -- > Tomorrow Will Never Die > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 17 13:40:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 17 Nov 2014 13:40:06 +0000 Subject: [nginx] SSL: logging level of "inappropriate fallback" (ticket #... Message-ID: details: http://hg.nginx.org/nginx/rev/b7a37f6a25ea branches: changeset: 5902:b7a37f6a25ea user: Maxim Dounin date: Mon Nov 17 16:38:48 2014 +0300 description: SSL: logging level of "inappropriate fallback" (ticket #662). Patch by Erik Dubbelboer. 
diffstat: src/event/ngx_event_openssl.c | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diffs (13 lines): diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -1858,6 +1858,9 @@ ngx_ssl_connection_error(ngx_connection_ #ifdef SSL_R_SCSV_RECEIVED_WHEN_RENEGOTIATING || n == SSL_R_SCSV_RECEIVED_WHEN_RENEGOTIATING /* 345 */ #endif +#ifdef SSL_R_INAPPROPRIATE_FALLBACK + || n == SSL_R_INAPPROPRIATE_FALLBACK /* 373 */ +#endif || n == 1000 /* SSL_R_SSLV3_ALERT_CLOSE_NOTIFY */ || n == SSL_R_SSLV3_ALERT_UNEXPECTED_MESSAGE /* 1010 */ || n == SSL_R_SSLV3_ALERT_BAD_RECORD_MAC /* 1020 */ From vbart at nginx.com Mon Nov 17 18:20:37 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 17 Nov 2014 18:20:37 +0000 Subject: [nginx] SPDY: improved debug logging of inflate() calls. Message-ID: details: http://hg.nginx.org/nginx/rev/571e66f7c12c branches: changeset: 5903:571e66f7c12c user: Valentin Bartenev date: Fri Nov 07 17:19:12 2014 +0300 description: SPDY: improved debug logging of inflate() calls. 
diffstat: src/http/ngx_http_spdy.c | 20 ++++++++++++++++---- 1 files changed, 16 insertions(+), 4 deletions(-) diffs (50 lines): diff -r b7a37f6a25ea -r 571e66f7c12c src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Mon Nov 17 16:38:48 2014 +0300 +++ b/src/http/ngx_http_spdy.c Fri Nov 07 17:19:12 2014 +0300 @@ -1065,16 +1065,16 @@ ngx_http_spdy_state_headers(ngx_http_spd : Z_OK; } - if (z != Z_OK) { - return ngx_http_spdy_state_inflate_error(sc, z); - } - ngx_log_debug5(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "spdy inflate out: ni:%p no:%p ai:%ud ao:%ud rc:%d", sc->zstream_in.next_in, sc->zstream_in.next_out, sc->zstream_in.avail_in, sc->zstream_in.avail_out, z); + if (z != Z_OK) { + return ngx_http_spdy_state_inflate_error(sc, z); + } + sc->length -= sc->zstream_in.next_in - pos; pos = sc->zstream_in.next_in; @@ -1164,6 +1164,12 @@ ngx_http_spdy_state_headers(ngx_http_spd z = inflate(&sc->zstream_in, Z_NO_FLUSH); + ngx_log_debug5(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, + "spdy inflate out: ni:%p no:%p ai:%ud ao:%ud rc:%d", + sc->zstream_in.next_in, sc->zstream_in.next_out, + sc->zstream_in.avail_in, sc->zstream_in.avail_out, + z); + if (z != Z_OK) { return ngx_http_spdy_state_inflate_error(sc, z); } @@ -1265,6 +1271,12 @@ ngx_http_spdy_state_headers_skip(ngx_htt n = inflate(&sc->zstream_in, Z_NO_FLUSH); + ngx_log_debug5(NGX_LOG_DEBUG_HTTP, sc->connection->log, 0, + "spdy inflate out: ni:%p no:%p ai:%ud ao:%ud rc:%d", + sc->zstream_in.next_in, sc->zstream_in.next_out, + sc->zstream_in.avail_in, sc->zstream_in.avail_out, + n); + if (n != Z_OK) { return ngx_http_spdy_state_inflate_error(sc, n); } From vbart at nginx.com Mon Nov 17 18:20:41 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Mon, 17 Nov 2014 18:20:41 +0000 Subject: [nginx] SPDY: fixed check for too long header name or value. 
Message-ID: details: http://hg.nginx.org/nginx/rev/abb466a57a22 branches: changeset: 5904:abb466a57a22 user: Valentin Bartenev date: Fri Nov 07 17:22:19 2014 +0300 description: SPDY: fixed check for too long header name or value. For further progress a new buffer must be at least two bytes larger than the remaining unparsed data. One more byte is needed for null-termination and another one for further progress. Otherwise inflate() fails with Z_BUF_ERROR. diffstat: src/http/ngx_http_spdy.c | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diffs (17 lines): diff -r 571e66f7c12c -r abb466a57a22 src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Fri Nov 07 17:19:12 2014 +0300 +++ b/src/http/ngx_http_spdy.c Fri Nov 07 17:22:19 2014 +0300 @@ -2660,10 +2660,10 @@ ngx_http_spdy_alloc_large_header_buffer( rest = r->header_in->last - r->header_in->pos; /* - * equality is prohibited since one more byte is needed - * for null-termination + * One more byte is needed for null-termination + * and another one for further progress. */ - if (rest >= cscf->large_client_header_buffers.size) { + if (rest > cscf->large_client_header_buffers.size - 2) { p = r->header_in->pos; if (rest > NGX_MAX_ERROR_STR - 300) { From ru at nginx.com Tue Nov 18 10:24:58 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 18 Nov 2014 13:24:58 +0300 Subject: [BUG] New memory invalid read regression in resolver since nginx 1.7.5 In-Reply-To: <20141001003209.GQ69200@mdounin.ru> References: <20141001003209.GQ69200@mdounin.ru> Message-ID: <20141118102458.GE40928@lo0.su> On Wed, Oct 01, 2014 at 04:32:09AM +0400, Maxim Dounin wrote: > Hello! > > On Tue, Sep 30, 2014 at 03:10:42PM -0700, Yichun Zhang (agentzh) wrote: > > > Hello! > > > > I've noticed that the code re-factoring in the resolver in nginx 1.7.5 > > introduces a new regression that can cause memory invalid reads when > > --with-debug is used to build the nginx. The issue still exists in > > nginx 1.7.6. > > [...] 
> > > ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0, > > "event timer del: %d: %M", > > ngx_event_ident(ev->data), ev->timer.key); > > > > while ev->data here is the resolver node that has already been freed > > up earlier in ngx_resolver_free_node. > > Yes, thanks, it's known (though mostly harmless) issue. > Ruslan is looking into it. Here's a series of two patches that fix the issue. The first one fixes the use-after-free memory access. The second one fixes debug event logging for resolver timer events. # HG changeset patch # User Ruslan Ermilov # Date 1416301613 -10800 # Tue Nov 18 12:06:53 2014 +0300 # Node ID e7406dc8f6e0e662d3c738ef193adf8da4b4dae0 # Parent f1d87edc493b80c743c9512ae815bb19c76faf65 Resolver: fixed use-after-free memory access. In 954867a2f0a6, we switched to using resolver node as the timer event data, so make sure we do not free resolver node memory until the corresponding timer is deleted. diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -1568,8 +1568,6 @@ ngx_resolver_process_a(ngx_resolver_t *r ngx_rbtree_delete(&r->name_rbtree, &rn->node); - ngx_resolver_free_node(r, rn); - /* unlock name mutex */ while (next) { @@ -1580,6 +1578,8 @@ ngx_resolver_process_a(ngx_resolver_t *r ctx->handler(ctx); } + ngx_resolver_free_node(r, rn); + return; } @@ -2143,8 +2143,6 @@ valid: ngx_rbtree_delete(tree, &rn->node); - ngx_resolver_free_node(r, rn); - /* unlock addr mutex */ while (next) { @@ -2155,6 +2153,8 @@ valid: ctx->handler(ctx); } + ngx_resolver_free_node(r, rn); + return; } # HG changeset patch # User Ruslan Ermilov # Date 1416302028 -10800 # Tue Nov 18 12:13:48 2014 +0300 # Node ID 8c5ec1928063baf7bf0dc52db680856858808c62 # Parent e7406dc8f6e0e662d3c738ef193adf8da4b4dae0 Resolver: fixed debug event logging. In 954867a2f0a6, we switched to using resolver node as the timer event data. This broke debug event logging. 
Emulate enough of ngx_connection_t in ngx_resolver_node_t for ngx_event_ident() to work. Redo it similarly in ngx_resolver_t. Removed now unused ngx_resolver_ctx_t.ident. diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -101,6 +101,10 @@ static ngx_resolver_node_t *ngx_resolver #endif +#define ngx_resolver_node(n) \ + (ngx_resolver_node_t *) ((u_char *) n - offsetof(ngx_resolver_node_t, node)) + + ngx_resolver_t * ngx_resolver_create(ngx_conf_t *cf, ngx_str_t *names, ngx_uint_t n) { @@ -156,7 +160,7 @@ ngx_resolver_create(ngx_conf_t *cf, ngx_ r->event->handler = ngx_resolver_resend_handler; r->event->data = r; r->event->log = &cf->cycle->new_log; - r->ident = -1; + r->fd = NGX_INVALID_FILE; r->resend_timeout = 5; r->expire = 30; @@ -288,7 +292,7 @@ ngx_resolver_cleanup_tree(ngx_resolver_t while (tree->root != tree->sentinel) { - rn = (ngx_resolver_node_t *) ngx_rbtree_min(tree->root, tree->sentinel); + rn = ngx_resolver_node(ngx_rbtree_min(tree->root, tree->sentinel)); ngx_queue_remove(&rn->queue); @@ -666,7 +670,7 @@ ngx_resolve_name_locked(ngx_resolver_t * ctx->event->handler = ngx_resolver_timeout_handler; ctx->event->data = rn; ctx->event->log = r->log; - ctx->ident = -1; + rn->fd = NGX_INVALID_FILE; ngx_add_timer(ctx->event, ctx->timeout); } @@ -859,7 +863,7 @@ ngx_resolve_addr(ngx_resolver_ctx_t *ctx ctx->event->handler = ngx_resolver_timeout_handler; ctx->event->data = rn; ctx->event->log = r->log; - ctx->ident = -1; + rn->fd = NGX_INVALID_FILE; ngx_add_timer(ctx->event, ctx->timeout); @@ -2290,7 +2294,7 @@ ngx_resolver_lookup_name(ngx_resolver_t /* hash == node->key */ - rn = (ngx_resolver_node_t *) node; + rn = ngx_resolver_node(node); rc = ngx_memn2cmp(name->data, rn->name, name->len, rn->nlen); @@ -2329,7 +2333,7 @@ ngx_resolver_lookup_addr(ngx_resolver_t /* addr == node->key */ - return (ngx_resolver_node_t *) node; + return ngx_resolver_node(node); } /* not found */ @@ -2365,7 
+2369,7 @@ ngx_resolver_lookup_addr6(ngx_resolver_t /* hash == node->key */ - rn = (ngx_resolver_node_t *) node; + rn = ngx_resolver_node(node); rc = ngx_memcmp(addr, &rn->addr6, 16); @@ -2403,8 +2407,8 @@ ngx_resolver_rbtree_insert_value(ngx_rbt } else { /* node->key == temp->key */ - rn = (ngx_resolver_node_t *) node; - rn_temp = (ngx_resolver_node_t *) temp; + rn = ngx_resolver_node(node); + rn_temp = ngx_resolver_node(temp); p = (ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, rn_temp->nlen) < 0) ? &temp->left : &temp->right; @@ -2446,8 +2450,8 @@ ngx_resolver_rbtree_insert_addr6_value(n } else { /* node->key == temp->key */ - rn = (ngx_resolver_node_t *) node; - rn_temp = (ngx_resolver_node_t *) temp; + rn = ngx_resolver_node(node); + rn_temp = ngx_resolver_node(temp); p = (ngx_memcmp(&rn->addr6, &rn_temp->addr6, 16) < 0) ? &temp->left : &temp->right; diff --git a/src/core/ngx_resolver.h b/src/core/ngx_resolver.h --- a/src/core/ngx_resolver.h +++ b/src/core/ngx_resolver.h @@ -51,11 +51,15 @@ typedef void (*ngx_resolver_handler_pt)( typedef struct { - ngx_rbtree_node_t node; + /* PTR: resolved name, A: name to resolve */ + u_char *name; + ngx_queue_t queue; - /* PTR: resolved name, A: name to resolve */ - u_char *name; + /* ident must be after 3 pointers */ + ngx_fd_t fd; + + ngx_rbtree_node_t node; #if (NGX_HAVE_INET6) /* PTR: IPv6 address to resolve (IPv4 address is in rbtree node key) */ @@ -104,7 +108,7 @@ typedef struct { ngx_log_t *log; /* ident must be after 3 pointers */ - ngx_int_t ident; + ngx_fd_t fd; /* simple round robin DNS peers balancer */ ngx_array_t udp_connections; @@ -143,9 +147,6 @@ struct ngx_resolver_ctx_s { ngx_resolver_t *resolver; ngx_udp_connection_t *udp_connection; - /* ident must be after 3 pointers */ - ngx_int_t ident; - ngx_int_t state; ngx_str_t name; From arut at nginx.com Tue Nov 18 17:42:36 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 18 Nov 2014 17:42:36 +0000 Subject: [nginx] Cache: proxy_cache_lock_age and 
friends. Message-ID: details: http://hg.nginx.org/nginx/rev/2f7e557eab5b branches: changeset: 5905:2f7e557eab5b user: Roman Arutyunyan date: Tue Nov 18 20:41:12 2014 +0300 description: Cache: proxy_cache_lock_age and friends. Once this age is reached, the cache lock is discarded and another request can acquire the lock. Requests which failed to acquire the lock are not allowed to cache the response. diffstat: src/http/modules/ngx_http_fastcgi_module.c | 11 ++++++++++ src/http/modules/ngx_http_proxy_module.c | 11 ++++++++++ src/http/modules/ngx_http_scgi_module.c | 11 ++++++++++ src/http/modules/ngx_http_uwsgi_module.c | 11 ++++++++++ src/http/ngx_http_cache.h | 3 ++ src/http/ngx_http_file_cache.c | 33 ++++++++++++++++++++--------- src/http/ngx_http_upstream.c | 1 + src/http/ngx_http_upstream.h | 1 + 8 files changed, 72 insertions(+), 10 deletions(-) diffs (280 lines): diff -r abb466a57a22 -r 2f7e557eab5b src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Tue Nov 18 20:41:12 2014 +0300 @@ -419,6 +419,13 @@ static ngx_command_t ngx_http_fastcgi_c offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("fastcgi_cache_lock_age"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_fastcgi_loc_conf_t, upstream.cache_lock_age), + NULL }, + { ngx_string("fastcgi_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -2374,6 +2381,7 @@ ngx_http_fastcgi_create_loc_conf(ngx_con conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif @@ -2638,6 +2646,9 @@ 
ngx_http_fastcgi_merge_loc_conf(ngx_conf ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, + prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff -r abb466a57a22 -r 2f7e557eab5b src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Tue Nov 18 20:41:12 2014 +0300 @@ -489,6 +489,13 @@ static ngx_command_t ngx_http_proxy_com offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("proxy_cache_lock_age"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_proxy_loc_conf_t, upstream.cache_lock_age), + NULL }, + { ngx_string("proxy_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -2544,6 +2551,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif @@ -2818,6 +2826,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, + prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff -r abb466a57a22 -r 2f7e557eab5b src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/modules/ngx_http_scgi_module.c Tue Nov 18 20:41:12 2014 +0300 @@ 
-276,6 +276,13 @@ static ngx_command_t ngx_http_scgi_comma offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("scgi_cache_lock_age"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_scgi_loc_conf_t, upstream.cache_lock_age), + NULL }, + { ngx_string("scgi_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -1133,6 +1140,7 @@ ngx_http_scgi_create_loc_conf(ngx_conf_t conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif @@ -1392,6 +1400,9 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, + prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff -r abb466a57a22 -r 2f7e557eab5b src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/modules/ngx_http_uwsgi_module.c Tue Nov 18 20:41:12 2014 +0300 @@ -336,6 +336,13 @@ static ngx_command_t ngx_http_uwsgi_comm offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_lock_timeout), NULL }, + { ngx_string("uwsgi_cache_lock_age"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_msec_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_uwsgi_loc_conf_t, upstream.cache_lock_age), + NULL }, + { ngx_string("uwsgi_cache_revalidate"), NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG, ngx_conf_set_flag_slot, @@ -1339,6 +1346,7 @@ ngx_http_uwsgi_create_loc_conf(ngx_conf_ 
conf->upstream.cache_valid = NGX_CONF_UNSET_PTR; conf->upstream.cache_lock = NGX_CONF_UNSET; conf->upstream.cache_lock_timeout = NGX_CONF_UNSET_MSEC; + conf->upstream.cache_lock_age = NGX_CONF_UNSET_MSEC; conf->upstream.cache_revalidate = NGX_CONF_UNSET; #endif @@ -1606,6 +1614,9 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_msec_value(conf->upstream.cache_lock_timeout, prev->upstream.cache_lock_timeout, 5000); + ngx_conf_merge_msec_value(conf->upstream.cache_lock_age, + prev->upstream.cache_lock_age, 5000); + ngx_conf_merge_value(conf->upstream.cache_revalidate, prev->upstream.cache_revalidate, 0); diff -r abb466a57a22 -r 2f7e557eab5b src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/ngx_http_cache.h Tue Nov 18 20:41:12 2014 +0300 @@ -57,6 +57,7 @@ typedef struct { time_t valid_sec; size_t body_start; off_t fs_size; + ngx_msec_t lock_time; } ngx_http_file_cache_node_t; @@ -91,6 +92,8 @@ struct ngx_http_cache_s { ngx_http_file_cache_node_t *node; ngx_msec_t lock_timeout; + ngx_msec_t lock_age; + ngx_msec_t lock_time; ngx_msec_t wait_time; ngx_event_t wait_event; diff -r abb466a57a22 -r 2f7e557eab5b src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Nov 18 20:41:12 2014 +0300 @@ -396,13 +396,19 @@ ngx_http_file_cache_lock(ngx_http_reques return NGX_DECLINED; } + now = ngx_current_msec; + cache = c->file_cache; ngx_shmtx_lock(&cache->shpool->mutex); - if (!c->node->updating) { + timer = c->node->lock_time - now; + + if (!c->node->updating || (ngx_msec_int_t) timer <= 0) { c->node->updating = 1; + c->node->lock_time = now + c->lock_age; c->updating = 1; + c->lock_time = c->node->lock_time; } ngx_shmtx_unlock(&cache->shpool->mutex); @@ -415,10 +421,12 @@ ngx_http_file_cache_lock(ngx_http_reques return NGX_DECLINED; } + if (c->lock_timeout == 0) { + return NGX_HTTP_CACHE_SCARCE; + } + c->waiting = 1; - now = 
ngx_current_msec; - if (c->wait_time == 0) { c->wait_time = now + c->lock_timeout; @@ -441,7 +449,7 @@ static void ngx_http_file_cache_lock_wait_handler(ngx_event_t *ev) { ngx_uint_t wait; - ngx_msec_t timer; + ngx_msec_t now, timer; ngx_http_cache_t *c; ngx_http_request_t *r; ngx_http_file_cache_t *cache; @@ -449,15 +457,17 @@ ngx_http_file_cache_lock_wait_handler(ng r = ev->data; c = r->cache; + now = ngx_current_msec; + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, ev->log, 0, "http file cache wait handler wt:%M cur:%M", - c->wait_time, ngx_current_msec); - - timer = c->wait_time - ngx_current_msec; + c->wait_time, now); + + timer = c->wait_time - now; if ((ngx_msec_int_t) timer <= 0) { ngx_log_error(NGX_LOG_INFO, ev->log, 0, "cache lock timeout"); - c->lock = 0; + c->lock_timeout = 0; goto wakeup; } @@ -466,7 +476,9 @@ ngx_http_file_cache_lock_wait_handler(ng ngx_shmtx_lock(&cache->shpool->mutex); - if (c->node->updating) { + timer = c->node->lock_time - now; + + if (c->node->updating && (ngx_msec_int_t) timer > 0) { wait = 1; } @@ -588,6 +600,7 @@ ngx_http_file_cache_read(ngx_http_reques } else { c->node->updating = 1; c->updating = 1; + c->lock_time = c->node->lock_time; rc = NGX_HTTP_CACHE_STALE; } @@ -1453,7 +1466,7 @@ ngx_http_file_cache_free(ngx_http_cache_ fcn = c->node; fcn->count--; - if (c->updating) { + if (c->updating && fcn->lock_time == c->lock_time) { fcn->updating = 0; } diff -r abb466a57a22 -r 2f7e557eab5b src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/ngx_http_upstream.c Tue Nov 18 20:41:12 2014 +0300 @@ -784,6 +784,7 @@ ngx_http_upstream_cache(ngx_http_request c->lock = u->conf->cache_lock; c->lock_timeout = u->conf->cache_lock_timeout; + c->lock_age = u->conf->cache_lock_age; u->cache_status = NGX_HTTP_CACHE_MISS; } diff -r abb466a57a22 -r 2f7e557eab5b src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h Fri Nov 07 17:22:19 2014 +0300 +++ b/src/http/ngx_http_upstream.h Tue Nov 18 
20:41:12 2014 +0300 @@ -183,6 +183,7 @@ typedef struct { ngx_flag_t cache_lock; ngx_msec_t cache_lock_timeout; + ngx_msec_t cache_lock_age; ngx_flag_t cache_revalidate; From piotr at cloudflare.com Wed Nov 19 01:15:46 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 18 Nov 2014 17:15:46 -0800 Subject: [PATCH] Not Modified: prefer entity tags over date validators Message-ID: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> # HG changeset patch # User Piotr Sikora # Date 1416359232 28800 # Tue Nov 18 17:07:12 2014 -0800 # Node ID 3efade6bb02f7962a5120e1a1f95a1dc8f0b6a4c # Parent 2f7e557eab5b501ba71418febd3de9ef1c0ab4f1 Not Modified: prefer entity tags over date validators. RFC7232 says: A recipient MUST ignore If-Modified-Since if the request contains an If-None-Match header field; the condition in If-None-Match is considered to be a more accurate replacement for the condition in If-Modified-Since, and the two are only combined for the sake of interoperating with older intermediaries that might not implement If-None-Match. and: A recipient MUST ignore If-Unmodified-Since if the request contains an If-Match header field; the condition in If-Match is considered to be a more accurate replacement for the condition in If-Unmodified-Since, and the two are only combined for the sake of interoperating with older intermediaries that might not implement If-Match. 
Signed-off-by: Piotr Sikora 

diff -r 2f7e557eab5b -r 3efade6bb02f src/http/modules/ngx_http_not_modified_filter_module.c
--- a/src/http/modules/ngx_http_not_modified_filter_module.c	Tue Nov 18 20:41:12 2014 +0300
+++ b/src/http/modules/ngx_http_not_modified_filter_module.c	Tue Nov 18 17:07:12 2014 -0800
@@ -61,48 +61,47 @@ ngx_http_not_modified_header_filter(ngx_
         return ngx_http_next_header_filter(r);
     }
 
-    if (r->headers_in.if_unmodified_since
-        && !ngx_http_test_if_unmodified(r))
-    {
-        return ngx_http_filter_finalize_request(r, NULL,
-                                                NGX_HTTP_PRECONDITION_FAILED);
+    if (r->headers_in.if_match) {
+
+        if (!ngx_http_test_if_match(r, r->headers_in.if_match, 0)) {
+            return ngx_http_filter_finalize_request(r, NULL,
+                                               NGX_HTTP_PRECONDITION_FAILED);
+        }
+
+    } else if (r->headers_in.if_unmodified_since) {
+
+        if (!ngx_http_test_if_unmodified(r)) {
+            return ngx_http_filter_finalize_request(r, NULL,
+                                               NGX_HTTP_PRECONDITION_FAILED);
+        }
     }
 
-    if (r->headers_in.if_match
-        && !ngx_http_test_if_match(r, r->headers_in.if_match, 0))
-    {
-        return ngx_http_filter_finalize_request(r, NULL,
-                                                NGX_HTTP_PRECONDITION_FAILED);
+    if (r->headers_in.if_none_match) {
+
+        if (ngx_http_test_if_match(r, r->headers_in.if_none_match, 1)) {
+            goto not_modified;
+        }
+
+    } else if (r->headers_in.if_modified_since) {
+
+        if (!ngx_http_test_if_modified(r)) {
+            goto not_modified;
+        }
     }
 
-    if (r->headers_in.if_modified_since || r->headers_in.if_none_match) {
+    return ngx_http_next_header_filter(r);
 
-        if (r->headers_in.if_modified_since
-            && ngx_http_test_if_modified(r))
-        {
-            return ngx_http_next_header_filter(r);
-        }
+not_modified:
 
-        if (r->headers_in.if_none_match
-            && !ngx_http_test_if_match(r, r->headers_in.if_none_match, 1))
-        {
-            return ngx_http_next_header_filter(r);
-        }
+    r->headers_out.status = NGX_HTTP_NOT_MODIFIED;
+    r->headers_out.status_line.len = 0;
+    r->headers_out.content_type.len = 0;
+    ngx_http_clear_content_length(r);
+    ngx_http_clear_accept_ranges(r);
 
-        /* not modified */
-
-        r->headers_out.status = NGX_HTTP_NOT_MODIFIED;
-        r->headers_out.status_line.len = 0;
-        r->headers_out.content_type.len = 0;
-        ngx_http_clear_content_length(r);
-        ngx_http_clear_accept_ranges(r);
-
-        if (r->headers_out.content_encoding) {
-            r->headers_out.content_encoding->hash = 0;
-            r->headers_out.content_encoding = NULL;
-        }
-
-        return ngx_http_next_header_filter(r);
+    if (r->headers_out.content_encoding) {
+        r->headers_out.content_encoding->hash = 0;
+        r->headers_out.content_encoding = NULL;
     }
 
     return ngx_http_next_header_filter(r);

From piotr at cloudflare.com  Wed Nov 19 01:15:48 2014
From: piotr at cloudflare.com (Piotr Sikora)
Date: Tue, 18 Nov 2014 17:15:48 -0800
Subject: [PATCH 1 of 2] Cache: remove unused valid_msec fields
Message-ID: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local>

# HG changeset patch
# User Piotr Sikora 
# Date 1416359233 28800
#      Tue Nov 18 17:07:13 2014 -0800
# Node ID 99e65578bc80960b2fdf494e048678dd97bba029
# Parent  2f7e557eab5b501ba71418febd3de9ef1c0ab4f1
Cache: remove unused valid_msec fields.
Signed-off-by: Piotr Sikora 

diff -r 2f7e557eab5b -r 99e65578bc80 src/http/ngx_http_cache.h
--- a/src/http/ngx_http_cache.h	Tue Nov 18 20:41:12 2014 +0300
+++ b/src/http/ngx_http_cache.h	Tue Nov 18 17:07:13 2014 -0800
@@ -27,7 +27,7 @@
 #define NGX_HTTP_CACHE_ETAG_LEN      42
 #define NGX_HTTP_CACHE_VARY_LEN      42
 
-#define NGX_HTTP_CACHE_VERSION       3
+#define NGX_HTTP_CACHE_VERSION       4
 
 
 typedef struct {
@@ -45,7 +45,6 @@ typedef struct {
 
     unsigned                         count:20;
     unsigned                         uses:10;
-    unsigned                         valid_msec:10;
     unsigned                         error:10;
     unsigned                         exists:1;
     unsigned                         updating:1;
@@ -84,7 +83,6 @@ struct ngx_http_cache_s {
 
     ngx_uint_t                       min_uses;
     ngx_uint_t                       error;
-    ngx_uint_t                       valid_msec;
 
     ngx_buf_t                       *buf;
 
@@ -116,7 +114,6 @@ typedef struct {
     time_t                           last_modified;
    time_t                           date;
    uint32_t                         crc32;
-    u_short                          valid_msec;
     u_short                          header_start;
     u_short                          body_start;
     u_char                           etag_len;
diff -r 2f7e557eab5b -r 99e65578bc80 src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c	Tue Nov 18 20:41:12 2014 +0300
+++ b/src/http/ngx_http_file_cache.c	Tue Nov 18 17:07:13 2014 -0800
@@ -561,7 +561,6 @@ ngx_http_file_cache_read(ngx_http_reques
     c->valid_sec = h->valid_sec;
     c->last_modified = h->last_modified;
     c->date = h->date;
-    c->valid_msec = h->valid_msec;
     c->header_start = h->header_start;
     c->body_start = h->body_start;
     c->etag.len = h->etag_len;
@@ -762,7 +761,6 @@ renew:
 
     rc = NGX_DECLINED;
 
-    fcn->valid_msec = 0;
     fcn->error = 0;
     fcn->exists = 0;
     fcn->valid_sec = 0;
@@ -1119,7 +1117,6 @@ ngx_http_file_cache_set_header(ngx_http_
     h->last_modified = c->last_modified;
     h->date = c->date;
     h->crc32 = c->crc32;
-    h->valid_msec = (u_short) c->valid_msec;
     h->header_start = (u_short) c->header_start;
     h->body_start = (u_short) c->body_start;
 
@@ -1359,7 +1356,6 @@ ngx_http_file_cache_update_header(ngx_ht
     h.last_modified = c->last_modified;
     h.date = c->date;
     h.crc32 = c->crc32;
-    h.valid_msec = (u_short) c->valid_msec;
     h.header_start = (u_short) c->header_start;
     h.body_start = (u_short) c->body_start;
 
@@ -1475,7 +1471,6 @@ ngx_http_file_cache_free(ngx_http_cache_
 
         if (c->valid_sec) {
             fcn->valid_sec = c->valid_sec;
-            fcn->valid_msec = c->valid_msec;
         }
 
     } else if (!fcn->exists && fcn->count == 0 && c->min_uses == 1) {

From piotr at cloudflare.com  Wed Nov 19 01:15:49 2014
From: piotr at cloudflare.com (Piotr Sikora)
Date: Tue, 18 Nov 2014 17:15:49 -0800
Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses
In-Reply-To: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local>
References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local>
Message-ID: <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local>

# HG changeset patch
# User Piotr Sikora 
# Date 1416359233 28800
#      Tue Nov 18 17:07:13 2014 -0800
# Node ID 16f4ca8391ddd98ba99b00a46c0b56390f38e0a2
# Parent  99e65578bc80960b2fdf494e048678dd97bba029
Cache: send conditional requests only for cached 200 OK responses.

RFC7232 says:

    The 304 (Not Modified) status code indicates that a conditional GET
    or HEAD request has been received and would have resulted in a 200
    (OK) response if it were not for the fact that the condition
    evaluated to false.

which means that there is no reason to send requests with "If-None-Match"
and/or "If-Modified-Since" headers for responses cached with other status
codes.

Also, sending conditional requests for responses cached with other status
codes could result in a strange behavior, e.g. upstream server returning
304 Not Modified for cached 404 Not Found responses, etc.
Signed-off-by: Piotr Sikora 

diff -r 99e65578bc80 -r 16f4ca8391dd src/http/ngx_http_cache.h
--- a/src/http/ngx_http_cache.h	Tue Nov 18 17:07:13 2014 -0800
+++ b/src/http/ngx_http_cache.h	Tue Nov 18 17:07:13 2014 -0800
@@ -27,7 +27,7 @@
 #define NGX_HTTP_CACHE_ETAG_LEN      42
 #define NGX_HTTP_CACHE_VARY_LEN      42
 
-#define NGX_HTTP_CACHE_VERSION       4
+#define NGX_HTTP_CACHE_VERSION       5
 
 
 typedef struct {
@@ -83,6 +83,7 @@ struct ngx_http_cache_s {
 
     ngx_uint_t                       min_uses;
     ngx_uint_t                       error;
+    ngx_uint_t                       status;
 
     ngx_buf_t                       *buf;
 
@@ -114,6 +115,7 @@ typedef struct {
     time_t                           last_modified;
     time_t                           date;
     uint32_t                         crc32;
+    u_short                          status;
     u_short                          header_start;
     u_short                          body_start;
     u_char                           etag_len;
diff -r 99e65578bc80 -r 16f4ca8391dd src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c	Tue Nov 18 17:07:13 2014 -0800
+++ b/src/http/ngx_http_file_cache.c	Tue Nov 18 17:07:13 2014 -0800
@@ -561,6 +561,7 @@ ngx_http_file_cache_read(ngx_http_reques
     c->valid_sec = h->valid_sec;
     c->last_modified = h->last_modified;
     c->date = h->date;
+    c->status = h->status;
     c->header_start = h->header_start;
     c->body_start = h->body_start;
     c->etag.len = h->etag_len;
@@ -1117,6 +1118,7 @@ ngx_http_file_cache_set_header(ngx_http_
     h->last_modified = c->last_modified;
     h->date = c->date;
     h->crc32 = c->crc32;
+    h->status = (u_short) c->status;
     h->header_start = (u_short) c->header_start;
     h->body_start = (u_short) c->body_start;
 
@@ -1335,6 +1337,7 @@ ngx_http_file_cache_update_header(ngx_ht
     if (h.version != NGX_HTTP_CACHE_VERSION
         || h.last_modified != c->last_modified
         || h.crc32 != c->crc32
+        || h.status != c->status
         || h.header_start != c->header_start
         || h.body_start != c->body_start)
     {
@@ -1356,6 +1359,7 @@ ngx_http_file_cache_update_header(ngx_ht
     h.last_modified = c->last_modified;
     h.date = c->date;
     h.crc32 = c->crc32;
+    h.status = (u_short) c->status;
     h.header_start = (u_short) c->header_start;
     h.body_start = (u_short) c->body_start;
 
diff -r 99e65578bc80 -r 16f4ca8391dd src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Nov 18 17:07:13 2014 -0800
+++ b/src/http/ngx_http_upstream.c	Tue Nov 18 17:07:13 2014 -0800
@@ -2560,6 +2560,7 @@ ngx_http_upstream_send_response(ngx_http
     }
 
     if (valid) {
+        r->cache->status = u->headers_in.status_n;
         r->cache->last_modified = u->headers_in.last_modified_time;
         r->cache->date = now;
         r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start);
@@ -4924,6 +4925,7 @@ ngx_http_upstream_cache_last_modified(ng
     if (r->upstream == NULL
         || !r->upstream->conf->cache_revalidate
         || r->upstream->cache_status != NGX_HTTP_CACHE_EXPIRED
+        || r->cache->status != NGX_HTTP_OK
         || r->cache->last_modified == -1)
     {
         v->not_found = 1;
@@ -4952,6 +4954,7 @@ ngx_http_upstream_cache_etag(ngx_http_re
     if (r->upstream == NULL
         || !r->upstream->conf->cache_revalidate
         || r->upstream->cache_status != NGX_HTTP_CACHE_EXPIRED
+        || r->cache->status != NGX_HTTP_OK
         || r->cache->etag.len == 0)
     {
         v->not_found = 1;

From piotr at cloudflare.com  Wed Nov 19 01:15:51 2014
From: piotr at cloudflare.com (Piotr Sikora)
Date: Tue, 18 Nov 2014 17:15:51 -0800
Subject: [PATCH] Cache: add support for Cache-Control's s-maxage response directive
Message-ID: 

# HG changeset patch
# User Piotr Sikora 
# Date 1416359234 28800
#      Tue Nov 18 17:07:14 2014 -0800
# Node ID b7b345ad11d81cbf6c17e5933aad9ce3af4f16c8
# Parent  2f7e557eab5b501ba71418febd3de9ef1c0ab4f1
Cache: add support for Cache-Control's s-maxage response directive.
Signed-off-by: Piotr Sikora 

diff -r 2f7e557eab5b -r b7b345ad11d8 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c	Tue Nov 18 20:41:12 2014 +0300
+++ b/src/http/ngx_http_upstream.c	Tue Nov 18 17:07:14 2014 -0800
@@ -3934,7 +3934,8 @@ ngx_http_upstream_process_cache_control(
 
 #if (NGX_HTTP_CACHE)
     {
-    u_char     *p, *last;
+    u_char     *p, *start, *last;
+    ngx_uint_t  offset;
     ngx_int_t   n;
 
     if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_CACHE_CONTROL) {
@@ -3949,18 +3950,24 @@ ngx_http_upstream_process_cache_control(
         return NGX_OK;
     }
 
-    p = h->value.data;
-    last = p + h->value.len;
-
-    if (ngx_strlcasestrn(p, last, (u_char *) "no-cache", 8 - 1) != NULL
-        || ngx_strlcasestrn(p, last, (u_char *) "no-store", 8 - 1) != NULL
-        || ngx_strlcasestrn(p, last, (u_char *) "private", 7 - 1) != NULL)
+    start = h->value.data;
+    last = start + h->value.len;
+
+    if (ngx_strlcasestrn(start, last, (u_char *) "no-cache", 8 - 1) != NULL
+        || ngx_strlcasestrn(start, last, (u_char *) "no-store", 8 - 1) != NULL
+        || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != NULL)
     {
         u->cacheable = 0;
         return NGX_OK;
     }
 
-    p = ngx_strlcasestrn(p, last, (u_char *) "max-age=", 8 - 1);
+    p = ngx_strlcasestrn(start, last, (u_char *) "s-maxage=", 9 - 1);
+    offset = 9;
+
+    if (p == NULL) {
+        p = ngx_strlcasestrn(start, last, (u_char *) "max-age=", 8 - 1);
+        offset = 8;
+    }
 
     if (p == NULL) {
         return NGX_OK;
@@ -3968,7 +3975,7 @@ ngx_http_upstream_process_cache_control(
     n = 0;
 
-    for (p += 8; p < last; p++) {
+    for (p += offset; p < last; p++) {
         if (*p == ',' || *p == ';' || *p == ' ') {
             break;
         }
     }

From arut at nginx.com  Wed Nov 19 14:37:55 2014
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 19 Nov 2014 14:37:55 +0000
Subject: [nginx] Scgi: do not push redundant NULL element into conf->params.
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/548f704c1907
branches:  
changeset: 5906:548f704c1907
user:      Roman Arutyunyan 
date:      Wed Nov 19 17:33:21 2014 +0300
description:
Scgi: do not push redundant NULL element into conf->params.

diffstat:

 src/http/modules/ngx_http_scgi_module.c |  7 -------
 1 files changed, 0 insertions(+), 7 deletions(-)

diffs (17 lines):

diff -r 2f7e557eab5b -r 548f704c1907 src/http/modules/ngx_http_scgi_module.c
--- a/src/http/modules/ngx_http_scgi_module.c	Tue Nov 18 20:41:12 2014 +0300
+++ b/src/http/modules/ngx_http_scgi_module.c	Wed Nov 19 17:33:21 2014 +0300
@@ -1670,13 +1670,6 @@ ngx_http_scgi_merge_params(ngx_conf_t *c
 
     *code = (uintptr_t) NULL;
 
-    code = ngx_array_push_n(conf->params, sizeof(uintptr_t));
-    if (code == NULL) {
-        return NGX_ERROR;
-    }
-
-    *code = (uintptr_t) NULL;
-
     conf->header_params = headers_names.nelts;
 
     hash.hash = &conf->headers_hash;

From arut at nginx.com  Wed Nov 19 14:37:59 2014
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 19 Nov 2014 14:37:59 +0000
Subject: [nginx] Upstream: moved header initializations to separate funct...
Message-ID: 

details:   http://hg.nginx.org/nginx/rev/195561ef367f
branches:  
changeset: 5907:195561ef367f
user:      Roman Arutyunyan 
date:      Wed Nov 19 17:33:21 2014 +0300
description:
Upstream: moved header initializations to separate functions.

No functional changes.
diffstat: src/http/modules/ngx_http_fastcgi_module.c | 44 ++++++++++++++--------------- src/http/modules/ngx_http_proxy_module.c | 39 ++++++++++++------------- src/http/modules/ngx_http_scgi_module.c | 44 ++++++++++++++--------------- src/http/modules/ngx_http_uwsgi_module.c | 44 ++++++++++++++--------------- 4 files changed, 82 insertions(+), 89 deletions(-) diffs (298 lines): diff -r 548f704c1907 -r 195561ef367f src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Wed Nov 19 17:33:21 2014 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Wed Nov 19 17:33:21 2014 +0300 @@ -150,8 +150,8 @@ static ngx_int_t ngx_http_fastcgi_add_va static void *ngx_http_fastcgi_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_fastcgi_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); -static ngx_int_t ngx_http_fastcgi_merge_params(ngx_conf_t *cf, - ngx_http_fastcgi_loc_conf_t *conf, ngx_http_fastcgi_loc_conf_t *prev); +static ngx_int_t ngx_http_fastcgi_init_params(ngx_conf_t *cf, + ngx_http_fastcgi_loc_conf_t *conf); static ngx_int_t ngx_http_fastcgi_script_name_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); @@ -2703,7 +2703,22 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf } #endif - if (ngx_http_fastcgi_merge_params(cf, conf, prev) != NGX_OK) { + if (conf->params_source == NULL) { + conf->params_source = prev->params_source; + +#if (NGX_HTTP_CACHE) + if ((conf->upstream.cache == NULL) == (prev->upstream.cache == NULL)) +#endif + { + conf->flushes = prev->flushes; + conf->params_len = prev->params_len; + conf->params = prev->params; + conf->headers_hash = prev->headers_hash; + conf->header_params = prev->header_params; + } + } + + if (ngx_http_fastcgi_init_params(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -2712,8 +2727,7 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf static ngx_int_t -ngx_http_fastcgi_merge_params(ngx_conf_t *cf, - ngx_http_fastcgi_loc_conf_t *conf, 
ngx_http_fastcgi_loc_conf_t *prev) +ngx_http_fastcgi_init_params(ngx_conf_t *cf, ngx_http_fastcgi_loc_conf_t *conf) { u_char *p; size_t size; @@ -2729,24 +2743,8 @@ ngx_http_fastcgi_merge_params(ngx_conf_t ngx_http_script_compile_t sc; ngx_http_script_copy_code_t *copy; - if (conf->params_source == NULL) { - conf->params_source = prev->params_source; - - if (prev->headers_hash.buckets -#if (NGX_HTTP_CACHE) - && ((conf->upstream.cache == NULL) - == (prev->upstream.cache == NULL)) -#endif - ) - { - conf->flushes = prev->flushes; - conf->params_len = prev->params_len; - conf->params = prev->params; - conf->headers_hash = prev->headers_hash; - conf->header_params = prev->header_params; - - return NGX_OK; - } + if (conf->headers_hash.buckets) { + return NGX_OK; } if (conf->params_source == NULL diff -r 548f704c1907 -r 195561ef367f src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:21 2014 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:21 2014 +0300 @@ -146,8 +146,8 @@ static ngx_int_t ngx_http_proxy_add_vari static void *ngx_http_proxy_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); -static ngx_int_t ngx_http_proxy_merge_headers(ngx_conf_t *cf, - ngx_http_proxy_loc_conf_t *conf, ngx_http_proxy_loc_conf_t *prev); +static ngx_int_t ngx_http_proxy_init_headers(ngx_conf_t *cf, + ngx_http_proxy_loc_conf_t *conf); static char *ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); @@ -3015,7 +3015,21 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t } } - if (ngx_http_proxy_merge_headers(cf, conf, prev) != NGX_OK) { + if (conf->headers_source == NULL) { + conf->flushes = prev->flushes; + conf->headers_set_len = prev->headers_set_len; + conf->headers_set = prev->headers_set; + conf->headers_set_hash = prev->headers_set_hash; + conf->headers_source = prev->headers_source; + } + +#if (NGX_HTTP_CACHE) + if 
((conf->upstream.cache == NULL) != (prev->upstream.cache == NULL)) { + conf->headers_set_hash.buckets = NULL; + } +#endif + + if (ngx_http_proxy_init_headers(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -3024,8 +3038,7 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t static ngx_int_t -ngx_http_proxy_merge_headers(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *conf, - ngx_http_proxy_loc_conf_t *prev) +ngx_http_proxy_init_headers(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *conf) { u_char *p; size_t size; @@ -3038,24 +3051,10 @@ ngx_http_proxy_merge_headers(ngx_conf_t ngx_http_script_compile_t sc; ngx_http_script_copy_code_t *copy; - if (conf->headers_source == NULL) { - conf->flushes = prev->flushes; - conf->headers_set_len = prev->headers_set_len; - conf->headers_set = prev->headers_set; - conf->headers_set_hash = prev->headers_set_hash; - conf->headers_source = prev->headers_source; - } - - if (conf->headers_set_hash.buckets -#if (NGX_HTTP_CACHE) - && ((conf->upstream.cache == NULL) == (prev->upstream.cache == NULL)) -#endif - ) - { + if (conf->headers_set_hash.buckets) { return NGX_OK; } - if (ngx_array_init(&headers_names, cf->temp_pool, 4, sizeof(ngx_hash_key_t)) != NGX_OK) { diff -r 548f704c1907 -r 195561ef367f src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c Wed Nov 19 17:33:21 2014 +0300 +++ b/src/http/modules/ngx_http_scgi_module.c Wed Nov 19 17:33:21 2014 +0300 @@ -43,8 +43,8 @@ static void ngx_http_scgi_finalize_reque static void *ngx_http_scgi_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_scgi_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); -static ngx_int_t ngx_http_scgi_merge_params(ngx_conf_t *cf, - ngx_http_scgi_loc_conf_t *conf, ngx_http_scgi_loc_conf_t *prev); +static ngx_int_t ngx_http_scgi_init_params(ngx_conf_t *cf, + ngx_http_scgi_loc_conf_t *conf); static char *ngx_http_scgi_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_scgi_store(ngx_conf_t *cf, ngx_command_t 
*cmd, @@ -1443,7 +1443,22 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t } } - if (ngx_http_scgi_merge_params(cf, conf, prev) != NGX_OK) { + if (conf->params_source == NULL) { + conf->params_source = prev->params_source; + +#if (NGX_HTTP_CACHE) + if ((conf->upstream.cache == NULL) == (prev->upstream.cache == NULL)) +#endif + { + conf->flushes = prev->flushes; + conf->params_len = prev->params_len; + conf->params = prev->params; + conf->headers_hash = prev->headers_hash; + conf->header_params = prev->header_params; + } + } + + if (ngx_http_scgi_init_params(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -1452,8 +1467,7 @@ ngx_http_scgi_merge_loc_conf(ngx_conf_t static ngx_int_t -ngx_http_scgi_merge_params(ngx_conf_t *cf, ngx_http_scgi_loc_conf_t *conf, - ngx_http_scgi_loc_conf_t *prev) +ngx_http_scgi_init_params(ngx_conf_t *cf, ngx_http_scgi_loc_conf_t *conf) { u_char *p; size_t size; @@ -1469,24 +1483,8 @@ ngx_http_scgi_merge_params(ngx_conf_t *c ngx_http_script_compile_t sc; ngx_http_script_copy_code_t *copy; - if (conf->params_source == NULL) { - conf->params_source = prev->params_source; - - if (prev->headers_hash.buckets -#if (NGX_HTTP_CACHE) - && ((conf->upstream.cache == NULL) - == (prev->upstream.cache == NULL)) -#endif - ) - { - conf->flushes = prev->flushes; - conf->params_len = prev->params_len; - conf->params = prev->params; - conf->headers_hash = prev->headers_hash; - conf->header_params = prev->header_params; - - return NGX_OK; - } + if (conf->headers_hash.buckets) { + return NGX_OK; } if (conf->params_source == NULL diff -r 548f704c1907 -r 195561ef367f src/http/modules/ngx_http_uwsgi_module.c --- a/src/http/modules/ngx_http_uwsgi_module.c Wed Nov 19 17:33:21 2014 +0300 +++ b/src/http/modules/ngx_http_uwsgi_module.c Wed Nov 19 17:33:21 2014 +0300 @@ -62,8 +62,8 @@ static void ngx_http_uwsgi_finalize_requ static void *ngx_http_uwsgi_create_loc_conf(ngx_conf_t *cf); static char *ngx_http_uwsgi_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); 
-static ngx_int_t ngx_http_uwsgi_merge_params(ngx_conf_t *cf, - ngx_http_uwsgi_loc_conf_t *conf, ngx_http_uwsgi_loc_conf_t *prev); +static ngx_int_t ngx_http_uwsgi_init_params(ngx_conf_t *cf, + ngx_http_uwsgi_loc_conf_t *conf); static char *ngx_http_uwsgi_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); @@ -1705,7 +1705,22 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t ngx_conf_merge_uint_value(conf->modifier1, prev->modifier1, 0); ngx_conf_merge_uint_value(conf->modifier2, prev->modifier2, 0); - if (ngx_http_uwsgi_merge_params(cf, conf, prev) != NGX_OK) { + if (conf->params_source == NULL) { + conf->params_source = prev->params_source; + +#if (NGX_HTTP_CACHE) + if ((conf->upstream.cache == NULL) == (prev->upstream.cache == NULL)) +#endif + { + conf->flushes = prev->flushes; + conf->params_len = prev->params_len; + conf->params = prev->params; + conf->headers_hash = prev->headers_hash; + conf->header_params = prev->header_params; + } + } + + if (ngx_http_uwsgi_init_params(cf, conf) != NGX_OK) { return NGX_CONF_ERROR; } @@ -1714,8 +1729,7 @@ ngx_http_uwsgi_merge_loc_conf(ngx_conf_t static ngx_int_t -ngx_http_uwsgi_merge_params(ngx_conf_t *cf, ngx_http_uwsgi_loc_conf_t *conf, - ngx_http_uwsgi_loc_conf_t *prev) +ngx_http_uwsgi_init_params(ngx_conf_t *cf, ngx_http_uwsgi_loc_conf_t *conf) { u_char *p; size_t size; @@ -1731,24 +1745,8 @@ ngx_http_uwsgi_merge_params(ngx_conf_t * ngx_http_script_compile_t sc; ngx_http_script_copy_code_t *copy; - if (conf->params_source == NULL) { - conf->params_source = prev->params_source; - - if (prev->headers_hash.buckets -#if (NGX_HTTP_CACHE) - && ((conf->upstream.cache == NULL) - == (prev->upstream.cache == NULL)) -#endif - ) - { - conf->flushes = prev->flushes; - conf->params_len = prev->params_len; - conf->params = prev->params; - conf->headers_hash = prev->headers_hash; - conf->header_params = prev->header_params; - - return NGX_OK; - } + if (conf->headers_hash.buckets) { + return NGX_OK; } if (conf->params_source == NULL From 
arut at nginx.com Wed Nov 19 14:38:01 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 19 Nov 2014 14:38:01 +0000 Subject: [nginx] Upstream: moved header lists to separate structures. Message-ID: details: http://hg.nginx.org/nginx/rev/f8e80f8c7fc7 branches: changeset: 5908:f8e80f8c7fc7 user: Roman Arutyunyan date: Wed Nov 19 17:33:22 2014 +0300 description: Upstream: moved header lists to separate structures. No functional changes. diffstat: src/http/modules/ngx_http_fastcgi_module.c | 85 ++++++++++++++------------- src/http/modules/ngx_http_proxy_module.c | 91 ++++++++++++++++------------- src/http/modules/ngx_http_scgi_module.c | 82 ++++++++++++++------------ src/http/modules/ngx_http_uwsgi_module.c | 84 ++++++++++++++------------- 4 files changed, 182 insertions(+), 160 deletions(-) diffs (truncated from 1050 to 300 lines): diff -r 195561ef367f -r f8e80f8c7fc7 src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Wed Nov 19 17:33:21 2014 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Wed Nov 19 17:33:22 2014 +0300 @@ -11,22 +11,27 @@ typedef struct { + ngx_array_t *flushes; + ngx_array_t *lengths; + ngx_array_t *values; + ngx_uint_t number; + ngx_hash_t hash; +} ngx_http_fastcgi_params_t; + + +typedef struct { ngx_http_upstream_conf_t upstream; ngx_str_t index; - ngx_array_t *flushes; - ngx_array_t *params_len; - ngx_array_t *params; + ngx_http_fastcgi_params_t params; + ngx_array_t *params_source; ngx_array_t *catch_stderr; ngx_array_t *fastcgi_lengths; ngx_array_t *fastcgi_values; - ngx_hash_t headers_hash; - ngx_uint_t header_params; - ngx_flag_t keep_conn; #if (NGX_HTTP_CACHE) @@ -151,7 +156,7 @@ static void *ngx_http_fastcgi_create_loc static char *ngx_http_fastcgi_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); static ngx_int_t ngx_http_fastcgi_init_params(ngx_conf_t *cf, - ngx_http_fastcgi_loc_conf_t *conf); + ngx_http_fastcgi_loc_conf_t *conf, ngx_http_fastcgi_params_t *params); static 
ngx_int_t ngx_http_fastcgi_script_name_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); @@ -780,6 +785,7 @@ ngx_http_fastcgi_create_request(ngx_http ngx_http_script_code_pt code; ngx_http_script_engine_t e, le; ngx_http_fastcgi_header_t *h; + ngx_http_fastcgi_params_t *params; ngx_http_fastcgi_loc_conf_t *flcf; ngx_http_script_len_code_pt lcode; @@ -789,13 +795,15 @@ ngx_http_fastcgi_create_request(ngx_http flcf = ngx_http_get_module_loc_conf(r, ngx_http_fastcgi_module); - if (flcf->params_len) { + params = &flcf->params; + + if (params->lengths) { ngx_memzero(&le, sizeof(ngx_http_script_engine_t)); - ngx_http_script_flush_no_cacheable_variables(r, flcf->flushes); + ngx_http_script_flush_no_cacheable_variables(r, params->flushes); le.flushed = 1; - le.ip = flcf->params_len->elts; + le.ip = params->lengths->elts; le.request = r; while (*(uintptr_t *) le.ip) { @@ -824,7 +832,7 @@ ngx_http_fastcgi_create_request(ngx_http allocated = 0; lowcase_key = NULL; - if (flcf->header_params) { + if (params->number) { n = 0; part = &r->headers_in.headers.part; @@ -854,7 +862,7 @@ ngx_http_fastcgi_create_request(ngx_http i = 0; } - if (flcf->header_params) { + if (params->number) { if (allocated < header[i].key.len) { allocated = header[i].key.len + 16; lowcase_key = ngx_pnalloc(r->pool, allocated); @@ -879,7 +887,7 @@ ngx_http_fastcgi_create_request(ngx_http lowcase_key[n] = ch; } - if (ngx_hash_find(&flcf->headers_hash, hash, lowcase_key, n)) { + if (ngx_hash_find(¶ms->hash, hash, lowcase_key, n)) { ignored[header_params++] = &header[i]; continue; } @@ -949,15 +957,15 @@ ngx_http_fastcgi_create_request(ngx_http + sizeof(ngx_http_fastcgi_header_t); - if (flcf->params_len) { + if (params->lengths) { ngx_memzero(&e, sizeof(ngx_http_script_engine_t)); - e.ip = flcf->params->elts; + e.ip = params->values->elts; e.pos = b->last; e.request = r; e.flushed = 1; - le.ip = flcf->params_len->elts; + le.ip = params->lengths->elts; while (*(uintptr_t *) le.ip) { 
@@ -2710,15 +2718,11 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf if ((conf->upstream.cache == NULL) == (prev->upstream.cache == NULL)) #endif { - conf->flushes = prev->flushes; - conf->params_len = prev->params_len; conf->params = prev->params; - conf->headers_hash = prev->headers_hash; - conf->header_params = prev->header_params; } } - if (ngx_http_fastcgi_init_params(cf, conf) != NGX_OK) { + if (ngx_http_fastcgi_init_params(cf, conf, &conf->params) != NGX_OK) { return NGX_CONF_ERROR; } @@ -2727,7 +2731,8 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf static ngx_int_t -ngx_http_fastcgi_init_params(ngx_conf_t *cf, ngx_http_fastcgi_loc_conf_t *conf) +ngx_http_fastcgi_init_params(ngx_conf_t *cf, ngx_http_fastcgi_loc_conf_t *conf, + ngx_http_fastcgi_params_t *params) { u_char *p; size_t size; @@ -2743,7 +2748,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t ngx_http_script_compile_t sc; ngx_http_script_copy_code_t *copy; - if (conf->headers_hash.buckets) { + if (params->hash.buckets) { return NGX_OK; } @@ -2753,17 +2758,17 @@ ngx_http_fastcgi_init_params(ngx_conf_t #endif ) { - conf->headers_hash.buckets = (void *) 1; + params->hash.buckets = (void *) 1; return NGX_OK; } - conf->params_len = ngx_array_create(cf->pool, 64, 1); - if (conf->params_len == NULL) { + params->lengths = ngx_array_create(cf->pool, 64, 1); + if (params->lengths == NULL) { return NGX_ERROR; } - conf->params = ngx_array_create(cf->pool, 512, 1); - if (conf->params == NULL) { + params->values = ngx_array_create(cf->pool, 512, 1); + if (params->values == NULL) { return NGX_ERROR; } @@ -2858,7 +2863,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t } } - copy = ngx_array_push_n(conf->params_len, + copy = ngx_array_push_n(params->lengths, sizeof(ngx_http_script_copy_code_t)); if (copy == NULL) { return NGX_ERROR; @@ -2867,7 +2872,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t copy->code = (ngx_http_script_code_pt) ngx_http_script_copy_len_code; copy->len = src[i].key.len; - copy = 
ngx_array_push_n(conf->params_len, + copy = ngx_array_push_n(params->lengths, sizeof(ngx_http_script_copy_code_t)); if (copy == NULL) { return NGX_ERROR; @@ -2881,7 +2886,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t + src[i].key.len + sizeof(uintptr_t) - 1) & ~(sizeof(uintptr_t) - 1); - copy = ngx_array_push_n(conf->params, size); + copy = ngx_array_push_n(params->values, size); if (copy == NULL) { return NGX_ERROR; } @@ -2897,15 +2902,15 @@ ngx_http_fastcgi_init_params(ngx_conf_t sc.cf = cf; sc.source = &src[i].value; - sc.flushes = &conf->flushes; - sc.lengths = &conf->params_len; - sc.values = &conf->params; + sc.flushes = ¶ms->flushes; + sc.lengths = ¶ms->lengths; + sc.values = ¶ms->values; if (ngx_http_script_compile(&sc) != NGX_OK) { return NGX_ERROR; } - code = ngx_array_push_n(conf->params_len, sizeof(uintptr_t)); + code = ngx_array_push_n(params->lengths, sizeof(uintptr_t)); if (code == NULL) { return NGX_ERROR; } @@ -2913,7 +2918,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t *code = (uintptr_t) NULL; - code = ngx_array_push_n(conf->params, sizeof(uintptr_t)); + code = ngx_array_push_n(params->values, sizeof(uintptr_t)); if (code == NULL) { return NGX_ERROR; } @@ -2921,16 +2926,16 @@ ngx_http_fastcgi_init_params(ngx_conf_t *code = (uintptr_t) NULL; } - code = ngx_array_push_n(conf->params_len, sizeof(uintptr_t)); + code = ngx_array_push_n(params->lengths, sizeof(uintptr_t)); if (code == NULL) { return NGX_ERROR; } *code = (uintptr_t) NULL; - conf->header_params = headers_names.nelts; - - hash.hash = &conf->headers_hash; + params->number = headers_names.nelts; + + hash.hash = ¶ms->hash; hash.key = ngx_hash_key_lc; hash.max_size = 512; hash.bucket_size = 64; diff -r 195561ef367f -r f8e80f8c7fc7 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:21 2014 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:22 2014 +0300 @@ -40,15 +40,21 @@ typedef struct { typedef struct { + ngx_array_t 
*flushes; + ngx_array_t *lengths; + ngx_array_t *values; + ngx_hash_t hash; +} ngx_http_proxy_headers_t; + + +typedef struct { ngx_http_upstream_conf_t upstream; - ngx_array_t *flushes; + ngx_array_t *body_flushes; ngx_array_t *body_set_len; ngx_array_t *body_set; - ngx_array_t *headers_set_len; - ngx_array_t *headers_set; - ngx_hash_t headers_set_hash; - + + ngx_http_proxy_headers_t headers; ngx_array_t *headers_source; ngx_array_t *proxy_lengths; @@ -147,7 +153,7 @@ static void *ngx_http_proxy_create_loc_c static char *ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); static ngx_int_t ngx_http_proxy_init_headers(ngx_conf_t *cf, - ngx_http_proxy_loc_conf_t *conf); + ngx_http_proxy_loc_conf_t *conf, ngx_http_proxy_headers_t *headers); static char *ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); @@ -1080,6 +1086,7 @@ ngx_http_proxy_create_request(ngx_http_r ngx_http_upstream_t *u; ngx_http_proxy_ctx_t *ctx; ngx_http_script_code_pt code; + ngx_http_proxy_headers_t *headers; ngx_http_script_engine_t e, le; From arut at nginx.com Wed Nov 19 14:38:04 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 19 Nov 2014 14:38:04 +0000 Subject: [nginx] Upstream: different header lists for cached and uncached... Message-ID: details: http://hg.nginx.org/nginx/rev/8d0cf26ce071 branches: changeset: 5909:8d0cf26ce071 user: Roman Arutyunyan date: Wed Nov 19 17:33:23 2014 +0300 description: Upstream: different header lists for cached and uncached requests. The upstream modules remove and alter a number of client headers before sending the request to upstream. This set of headers is smaller or even empty when cache is disabled. It's still possible that a request in a cache-enabled location is uncached, for example, if cache entry counter is below min_uses. In this case it's better to alter a smaller set of headers and pass more client headers to backend unchanged. 
One of the benefits is enabling server-side byte ranges in such requests. diffstat: src/http/modules/ngx_http_fastcgi_module.c | 63 ++++++++++++++++------------- src/http/modules/ngx_http_proxy_module.c | 54 ++++++++++++++++--------- src/http/modules/ngx_http_scgi_module.c | 63 ++++++++++++++++------------- src/http/modules/ngx_http_uwsgi_module.c | 63 ++++++++++++++++------------- 4 files changed, 136 insertions(+), 107 deletions(-) diffs (truncated from 576 to 300 lines): diff -r f8e80f8c7fc7 -r 8d0cf26ce071 src/http/modules/ngx_http_fastcgi_module.c --- a/src/http/modules/ngx_http_fastcgi_module.c Wed Nov 19 17:33:22 2014 +0300 +++ b/src/http/modules/ngx_http_fastcgi_module.c Wed Nov 19 17:33:23 2014 +0300 @@ -25,6 +25,9 @@ typedef struct { ngx_str_t index; ngx_http_fastcgi_params_t params; +#if (NGX_HTTP_CACHE) + ngx_http_fastcgi_params_t params_cache; +#endif ngx_array_t *params_source; ngx_array_t *catch_stderr; @@ -156,7 +159,8 @@ static void *ngx_http_fastcgi_create_loc static char *ngx_http_fastcgi_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); static ngx_int_t ngx_http_fastcgi_init_params(ngx_conf_t *cf, - ngx_http_fastcgi_loc_conf_t *conf, ngx_http_fastcgi_params_t *params); + ngx_http_fastcgi_loc_conf_t *conf, ngx_http_fastcgi_params_t *params, + ngx_keyval_t *default_params); static ngx_int_t ngx_http_fastcgi_script_name_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); @@ -795,7 +799,11 @@ ngx_http_fastcgi_create_request(ngx_http flcf = ngx_http_get_module_loc_conf(r, ngx_http_fastcgi_module); +#if (NGX_HTTP_CACHE) + params = r->upstream->cacheable ? 
&flcf->params_cache : &flcf->params; +#else params = &flcf->params; +#endif if (params->lengths) { ngx_memzero(&le, sizeof(ngx_http_script_engine_t)); @@ -2420,6 +2428,7 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf ngx_http_fastcgi_loc_conf_t *conf = child; size_t size; + ngx_int_t rc; ngx_hash_init_t hash; ngx_http_core_loc_conf_t *clcf; @@ -2712,19 +2721,29 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf #endif if (conf->params_source == NULL) { + conf->params = prev->params; +#if (NGX_HTTP_CACHE) + conf->params_cache = prev->params_cache; +#endif conf->params_source = prev->params_source; + } + + rc = ngx_http_fastcgi_init_params(cf, conf, &conf->params, NULL); + if (rc != NGX_OK) { + return NGX_CONF_ERROR; + } #if (NGX_HTTP_CACHE) - if ((conf->upstream.cache == NULL) == (prev->upstream.cache == NULL)) -#endif - { - conf->params = prev->params; + + if (conf->upstream.cache) { + rc = ngx_http_fastcgi_init_params(cf, conf, &conf->params_cache, + ngx_http_fastcgi_cache_headers); + if (rc != NGX_OK) { + return NGX_CONF_ERROR; } } - if (ngx_http_fastcgi_init_params(cf, conf, &conf->params) != NGX_OK) { - return NGX_CONF_ERROR; - } +#endif return NGX_CONF_OK; } @@ -2732,19 +2751,17 @@ ngx_http_fastcgi_merge_loc_conf(ngx_conf static ngx_int_t ngx_http_fastcgi_init_params(ngx_conf_t *cf, ngx_http_fastcgi_loc_conf_t *conf, - ngx_http_fastcgi_params_t *params) + ngx_http_fastcgi_params_t *params, ngx_keyval_t *default_params) { u_char *p; size_t size; uintptr_t *code; ngx_uint_t i, nsrc; - ngx_array_t headers_names; -#if (NGX_HTTP_CACHE) - ngx_array_t params_merged; -#endif + ngx_array_t headers_names, params_merged; + ngx_keyval_t *h; ngx_hash_key_t *hk; ngx_hash_init_t hash; - ngx_http_upstream_param_t *src; + ngx_http_upstream_param_t *src, *s; ngx_http_script_compile_t sc; ngx_http_script_copy_code_t *copy; @@ -2752,12 +2769,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t return NGX_OK; } - if (conf->params_source == NULL -#if (NGX_HTTP_CACHE) - && (conf->upstream.cache == 
NULL) -#endif - ) - { + if (conf->params_source == NULL && default_params == NULL) { params->hash.buckets = (void *) 1; return NGX_OK; } @@ -2787,12 +2799,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t nsrc = 0; } -#if (NGX_HTTP_CACHE) - - if (conf->upstream.cache) { - ngx_keyval_t *h; - ngx_http_upstream_param_t *s; - + if (default_params) { if (ngx_array_init(¶ms_merged, cf->temp_pool, 4, sizeof(ngx_http_upstream_param_t)) != NGX_OK) @@ -2810,7 +2817,7 @@ ngx_http_fastcgi_init_params(ngx_conf_t *s = src[i]; } - h = ngx_http_fastcgi_cache_headers; + h = default_params; while (h->key.len) { @@ -2841,8 +2848,6 @@ ngx_http_fastcgi_init_params(ngx_conf_t nsrc = params_merged.nelts; } -#endif - for (i = 0; i < nsrc; i++) { if (src[i].key.len > sizeof("HTTP_") - 1 diff -r f8e80f8c7fc7 -r 8d0cf26ce071 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:22 2014 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:23 2014 +0300 @@ -55,6 +55,9 @@ typedef struct { ngx_array_t *body_set; ngx_http_proxy_headers_t headers; +#if (NGX_HTTP_CACHE) + ngx_http_proxy_headers_t headers_cache; +#endif ngx_array_t *headers_source; ngx_array_t *proxy_lengths; @@ -153,7 +156,8 @@ static void *ngx_http_proxy_create_loc_c static char *ngx_http_proxy_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); static ngx_int_t ngx_http_proxy_init_headers(ngx_conf_t *cf, - ngx_http_proxy_loc_conf_t *conf, ngx_http_proxy_headers_t *headers); + ngx_http_proxy_loc_conf_t *conf, ngx_http_proxy_headers_t *headers, + ngx_keyval_t *default_headers); static char *ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); @@ -1095,7 +1099,11 @@ ngx_http_proxy_create_request(ngx_http_r plcf = ngx_http_get_module_loc_conf(r, ngx_http_proxy_module); +#if (NGX_HTTP_CACHE) + headers = u->cacheable ? 
&plcf->headers_cache : &plcf->headers; +#else headers = &plcf->headers; +#endif if (u->method.len) { /* HEAD was changed to GET to cache response */ @@ -2515,6 +2523,9 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->headers.lengths = NULL; * conf->headers.values = NULL; * conf->headers.hash = { NULL, 0 }; + * conf->headers_cache.lengths = NULL; + * conf->headers_cache.values = NULL; + * conf->headers_cache.hash = { NULL, 0 }; * conf->body_set_len = NULL; * conf->body_set = NULL; * conf->body_source = { 0, NULL }; @@ -2606,6 +2617,7 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t u_char *p; size_t size; + ngx_int_t rc; ngx_hash_init_t hash; ngx_http_core_loc_conf_t *clcf; ngx_http_proxy_rewrite_t *pr; @@ -3028,26 +3040,37 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t if (conf->headers_source == NULL) { conf->headers = prev->headers; +#if (NGX_HTTP_CACHE) + conf->headers_cache = prev->headers_cache; +#endif conf->headers_source = prev->headers_source; } -#if (NGX_HTTP_CACHE) - if ((conf->upstream.cache == NULL) != (prev->upstream.cache == NULL)) { - conf->headers.hash.buckets = NULL; - } -#endif - - if (ngx_http_proxy_init_headers(cf, conf, &conf->headers) != NGX_OK) { + rc = ngx_http_proxy_init_headers(cf, conf, &conf->headers, + ngx_http_proxy_headers); + if (rc != NGX_OK) { return NGX_CONF_ERROR; } +#if (NGX_HTTP_CACHE) + + if (conf->upstream.cache) { + rc = ngx_http_proxy_init_headers(cf, conf, &conf->headers_cache, + ngx_http_proxy_cache_headers); + if (rc != NGX_OK) { + return NGX_CONF_ERROR; + } + } + +#endif + return NGX_CONF_OK; } static ngx_int_t ngx_http_proxy_init_headers(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *conf, - ngx_http_proxy_headers_t *headers) + ngx_http_proxy_headers_t *headers, ngx_keyval_t *default_headers) { u_char *p; size_t size; @@ -3094,17 +3117,6 @@ ngx_http_proxy_init_headers(ngx_conf_t * return NGX_ERROR; } - -#if (NGX_HTTP_CACHE) - - h = conf->upstream.cache ? 
ngx_http_proxy_cache_headers: - ngx_http_proxy_headers; -#else - - h = ngx_http_proxy_headers; - -#endif - src = conf->headers_source->elts; for (i = 0; i < conf->headers_source->nelts; i++) { @@ -3116,6 +3128,8 @@ ngx_http_proxy_init_headers(ngx_conf_t * *s = src[i]; } + h = default_headers; + while (h->key.len) { src = headers_merged.elts; diff -r f8e80f8c7fc7 -r 8d0cf26ce071 src/http/modules/ngx_http_scgi_module.c --- a/src/http/modules/ngx_http_scgi_module.c Wed Nov 19 17:33:22 2014 +0300 +++ b/src/http/modules/ngx_http_scgi_module.c Wed Nov 19 17:33:23 2014 +0300 @@ -24,6 +24,9 @@ typedef struct { ngx_http_upstream_conf_t upstream; ngx_http_scgi_params_t params; +#if (NGX_HTTP_CACHE) + ngx_http_scgi_params_t params_cache; +#endif ngx_array_t *params_source; ngx_array_t *scgi_lengths; @@ -48,7 +51,8 @@ static void *ngx_http_scgi_create_loc_co static char *ngx_http_scgi_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child); static ngx_int_t ngx_http_scgi_init_params(ngx_conf_t *cf, - ngx_http_scgi_loc_conf_t *conf, ngx_http_scgi_params_t *params); + ngx_http_scgi_loc_conf_t *conf, ngx_http_scgi_params_t *params, + ngx_keyval_t *default_params); static char *ngx_http_scgi_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static char *ngx_http_scgi_store(ngx_conf_t *cf, ngx_command_t *cmd, @@ -608,7 +612,11 @@ ngx_http_scgi_create_request(ngx_http_re From arut at nginx.com Wed Nov 19 14:38:06 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 19 Nov 2014 14:38:06 +0000 Subject: [nginx] Proxy: renamed and rearranged fields in proxy configurat... Message-ID: details: http://hg.nginx.org/nginx/rev/29fa5023bd6f branches: changeset: 5910:29fa5023bd6f user: Roman Arutyunyan date: Wed Nov 19 17:33:24 2014 +0300 description: Proxy: renamed and rearranged fields in proxy configuration. No functional changes. 
diffstat: src/http/modules/ngx_http_proxy_module.c | 31 +++++++++++++++---------------- 1 files changed, 15 insertions(+), 16 deletions(-) diffs (91 lines): diff -r 8d0cf26ce071 -r 29fa5023bd6f src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:23 2014 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Wed Nov 19 17:33:24 2014 +0300 @@ -51,8 +51,9 @@ typedef struct { ngx_http_upstream_conf_t upstream; ngx_array_t *body_flushes; - ngx_array_t *body_set_len; - ngx_array_t *body_set; + ngx_array_t *body_lengths; + ngx_array_t *body_values; + ngx_str_t body_source; ngx_http_proxy_headers_t headers; #if (NGX_HTTP_CACHE) @@ -67,8 +68,6 @@ typedef struct { ngx_array_t *cookie_domains; ngx_array_t *cookie_paths; - ngx_str_t body_source; - ngx_str_t method; ngx_str_t location; ngx_str_t url; @@ -1166,8 +1165,8 @@ ngx_http_proxy_create_request(ngx_http_r ngx_http_script_flush_no_cacheable_variables(r, plcf->body_flushes); ngx_http_script_flush_no_cacheable_variables(r, headers->flushes); - if (plcf->body_set_len) { - le.ip = plcf->body_set_len->elts; + if (plcf->body_lengths) { + le.ip = plcf->body_lengths->elts; le.request = r; le.flushed = 1; body_len = 0; @@ -1362,8 +1361,8 @@ ngx_http_proxy_create_request(ngx_http_r /* add "\r\n" at the header end */ *b->last++ = CR; *b->last++ = LF; - if (plcf->body_set) { - e.ip = plcf->body_set->elts; + if (plcf->body_values) { + e.ip = plcf->body_values->elts; e.pos = b->last; while (*(uintptr_t *) e.ip) { @@ -1378,7 +1377,7 @@ ngx_http_proxy_create_request(ngx_http_r "http proxy header:%N\"%*s\"", (size_t) (b->last - b->pos), b->pos); - if (plcf->body_set == NULL && plcf->upstream.pass_request_body) { + if (plcf->body_values == NULL && plcf->upstream.pass_request_body) { body = u->request_bufs; u->request_bufs = cl; @@ -2526,8 +2525,8 @@ ngx_http_proxy_create_loc_conf(ngx_conf_ * conf->headers_cache.lengths = NULL; * conf->headers_cache.values = NULL; * conf->headers_cache.hash 
= { NULL, 0 }; - * conf->body_set_len = NULL; - * conf->body_set = NULL; + * conf->body_lengths = NULL; + * conf->body_values = NULL; * conf->body_source = { 0, NULL }; * conf->redirects = NULL; * conf->ssl = 0; @@ -3017,19 +3016,19 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t if (conf->body_source.data == NULL) { conf->body_flushes = prev->body_flushes; conf->body_source = prev->body_source; - conf->body_set_len = prev->body_set_len; - conf->body_set = prev->body_set; + conf->body_lengths = prev->body_lengths; + conf->body_values = prev->body_values; } - if (conf->body_source.data && conf->body_set_len == NULL) { + if (conf->body_source.data && conf->body_lengths == NULL) { ngx_memzero(&sc, sizeof(ngx_http_script_compile_t)); sc.cf = cf; sc.source = &conf->body_source; sc.flushes = &conf->body_flushes; - sc.lengths = &conf->body_set_len; - sc.values = &conf->body_set; + sc.lengths = &conf->body_lengths; + sc.values = &conf->body_values; sc.complete_lengths = 1; sc.complete_values = 1; From mdounin at mdounin.ru Wed Nov 19 16:35:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Nov 2014 19:35:27 +0300 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> Message-ID: <20141119163527.GX26593@mdounin.ru> Hello! On Tue, Nov 18, 2014 at 05:15:46PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1416359232 28800 > # Tue Nov 18 17:07:12 2014 -0800 > # Node ID 3efade6bb02f7962a5120e1a1f95a1dc8f0b6a4c > # Parent 2f7e557eab5b501ba71418febd3de9ef1c0ab4f1 > Not Modified: prefer entity tags over date validators. 
> > RFC7232 says: > > A recipient MUST ignore If-Modified-Since if the request contains an > If-None-Match header field; the condition in If-None-Match is > considered to be a more accurate replacement for the condition in > If-Modified-Since, and the two are only combined for the sake of > interoperating with older intermediaries that might not implement > If-None-Match. > > and: > > A recipient MUST ignore If-Unmodified-Since if the request contains > an If-Match header field; the condition in If-Match is considered to > be a more accurate replacement for the condition in > If-Unmodified-Since, and the two are only combined for the sake of > interoperating with older intermediaries that might not implement > If-Match. Current nginx behaviour is to respect both, and I don't see real reasons to change the behaviour. A while ago I've tried to dig into HTTPbis VCS and tracker to find out why this part of the specification was changed from RFC2616, but failed. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Nov 19 16:38:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Nov 2014 19:38:14 +0300 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> Message-ID: <20141119163814.GY26593@mdounin.ru> Hello! On Tue, Nov 18, 2014 at 05:15:49PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1416359233 28800 > # Tue Nov 18 17:07:13 2014 -0800 > # Node ID 16f4ca8391ddd98ba99b00a46c0b56390f38e0a2 > # Parent 99e65578bc80960b2fdf494e048678dd97bba029 > Cache: send conditional requests only for cached 200 OK responses. 
> > RFC7232 says: > > The 304 (Not Modified) status code indicates that a conditional GET > or HEAD request has been received and would have resulted in a 200 > (OK) response if it were not for the fact that the condition > evaluated to false. > > which means that there is no reason to send requests with "If-None-Match" > and/or "If-Modified-Since" headers for responses cached with other status > codes. > > Also, sending conditional requests for responses cached with other status > codes could result in a strange behavior, e.g. upstream server returning > 304 Not Modified for cached 404 Not Found responses, etc. And what's wrong with it? I don't see why this should be status-specific. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Nov 19 16:43:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Nov 2014 19:43:53 +0300 Subject: [PATCH 1 of 2] Cache: remove unused valid_msec fields In-Reply-To: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> Message-ID: <20141119164353.GZ26593@mdounin.ru> Hello! On Tue, Nov 18, 2014 at 05:15:48PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1416359233 28800 > # Tue Nov 18 17:07:13 2014 -0800 > # Node ID 99e65578bc80960b2fdf494e048678dd97bba029 > # Parent 2f7e557eab5b501ba71418febd3de9ef1c0ab4f1 > Cache: remove unused valid_msec fields. I would rather not, for two reasons: - I would actually like to see support for subsecond cache validity times added eventually (unlikely to happen in the near future, but still). - This implies cache file on-disk format change and invalidation of all previously cached files, so shouldn't be done without some serious enough reason. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Nov 19 18:02:29 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Nov 2014 18:02:29 +0000 Subject: [nginx] Cache: add support for Cache-Control's s-maxage response... Message-ID: details: http://hg.nginx.org/nginx/rev/88d55e5934f7 branches: changeset: 5911:88d55e5934f7 user: Piotr Sikora date: Tue Nov 18 17:07:14 2014 -0800 description: Cache: add support for Cache-Control's s-maxage response directive. Signed-off-by: Piotr Sikora diffstat: src/http/ngx_http_upstream.c | 24 +++++++++++++++--------- 1 files changed, 15 insertions(+), 9 deletions(-) diffs (53 lines): diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -3934,7 +3934,7 @@ ngx_http_upstream_process_cache_control( #if (NGX_HTTP_CACHE) { - u_char *p, *last; + u_char *p, *start, *last; ngx_int_t n; if (u->conf->ignore_headers & NGX_HTTP_UPSTREAM_IGN_CACHE_CONTROL) { @@ -3949,18 +3949,24 @@ ngx_http_upstream_process_cache_control( return NGX_OK; } - p = h->value.data; - last = p + h->value.len; - - if (ngx_strlcasestrn(p, last, (u_char *) "no-cache", 8 - 1) != NULL - || ngx_strlcasestrn(p, last, (u_char *) "no-store", 8 - 1) != NULL - || ngx_strlcasestrn(p, last, (u_char *) "private", 7 - 1) != NULL) + start = h->value.data; + last = start + h->value.len; + + if (ngx_strlcasestrn(start, last, (u_char *) "no-cache", 8 - 1) != NULL + || ngx_strlcasestrn(start, last, (u_char *) "no-store", 8 - 1) != NULL + || ngx_strlcasestrn(start, last, (u_char *) "private", 7 - 1) != NULL) { u->cacheable = 0; return NGX_OK; } - p = ngx_strlcasestrn(p, last, (u_char *) "max-age=", 8 - 1); + p = ngx_strlcasestrn(start, last, (u_char *) "s-maxage=", 9 - 1); + offset = 9; + + if (p == NULL) { + p = ngx_strlcasestrn(start, last, (u_char *) "max-age=", 8 - 1); + offset = 8; + } if (p == NULL) { return NGX_OK; @@ -3968,7 +3974,7 @@ 
ngx_http_upstream_process_cache_control( n = 0; - for (p += 8; p < last; p++) { + for (p += offset; p < last; p++) { if (*p == ',' || *p == ';' || *p == ' ') { break; } From mdounin at mdounin.ru Wed Nov 19 18:02:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Nov 2014 21:02:45 +0300 Subject: [PATCH] Cache: add support for Cache-Control's s-maxage response directive In-Reply-To: References: Message-ID: <20141119180245.GA53423@mdounin.ru> Hello! On Tue, Nov 18, 2014 at 05:15:51PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1416359234 28800 > # Tue Nov 18 17:07:14 2014 -0800 > # Node ID b7b345ad11d81cbf6c17e5933aad9ce3af4f16c8 > # Parent 2f7e557eab5b501ba71418febd3de9ef1c0ab4f1 > Cache: add support for Cache-Control's s-maxage response directive. Committed, thanks. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Nov 19 18:18:58 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:18:58 +0000 Subject: [nginx] Renamed ngx_handle_sent_chain() to ngx_chain_update_sent(). Message-ID: details: http://hg.nginx.org/nginx/rev/de68ed551bfb branches: changeset: 5912:de68ed551bfb user: Valentin Bartenev date: Wed Nov 19 21:16:19 2014 +0300 description: Renamed ngx_handle_sent_chain() to ngx_chain_update_sent(). No functional changes. 
diffstat: src/core/ngx_buf.c | 2 +- src/core/ngx_buf.h | 2 +- src/os/unix/ngx_darwin_sendfile_chain.c | 2 +- src/os/unix/ngx_freebsd_sendfile_chain.c | 2 +- src/os/unix/ngx_linux_sendfile_chain.c | 2 +- src/os/unix/ngx_solaris_sendfilev_chain.c | 2 +- src/os/unix/ngx_writev_chain.c | 2 +- src/os/win32/ngx_wsasend_chain.c | 4 ++-- 8 files changed, 9 insertions(+), 9 deletions(-) diffs (104 lines): diff -r 88d55e5934f7 -r de68ed551bfb src/core/ngx_buf.c --- a/src/core/ngx_buf.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/core/ngx_buf.c Wed Nov 19 21:16:19 2014 +0300 @@ -221,7 +221,7 @@ ngx_chain_update_chains(ngx_pool_t *p, n ngx_chain_t * -ngx_handle_sent_chain(ngx_chain_t *in, off_t sent) +ngx_chain_update_sent(ngx_chain_t *in, off_t sent) { off_t size; diff -r 88d55e5934f7 -r de68ed551bfb src/core/ngx_buf.h --- a/src/core/ngx_buf.h Tue Nov 18 17:07:14 2014 -0800 +++ b/src/core/ngx_buf.h Wed Nov 19 21:16:19 2014 +0300 @@ -158,6 +158,6 @@ ngx_chain_t *ngx_chain_get_free_buf(ngx_ void ngx_chain_update_chains(ngx_pool_t *p, ngx_chain_t **free, ngx_chain_t **busy, ngx_chain_t **out, ngx_buf_tag_t tag); -ngx_chain_t *ngx_handle_sent_chain(ngx_chain_t *in, off_t sent); +ngx_chain_t *ngx_chain_update_sent(ngx_chain_t *in, off_t sent); #endif /* _NGX_BUF_H_INCLUDED_ */ diff -r 88d55e5934f7 -r de68ed551bfb src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Wed Nov 19 21:16:19 2014 +0300 @@ -305,7 +305,7 @@ ngx_darwin_sendfile_chain(ngx_connection c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, sent); if (eintr) { send = prev_send + sent; diff -r 88d55e5934f7 -r de68ed551bfb src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Nov 19 21:16:19 2014 +0300 @@ -356,7 +356,7 @@ 
ngx_freebsd_sendfile_chain(ngx_connectio c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, sent); #if (NGX_HAVE_AIO_SENDFILE) if (c->busy_sendfile) { diff -r 88d55e5934f7 -r de68ed551bfb src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Wed Nov 19 21:16:19 2014 +0300 @@ -313,7 +313,7 @@ ngx_linux_sendfile_chain(ngx_connection_ c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, sent); if (eintr) { send = prev_send; diff -r 88d55e5934f7 -r de68ed551bfb src/os/unix/ngx_solaris_sendfilev_chain.c --- a/src/os/unix/ngx_solaris_sendfilev_chain.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/os/unix/ngx_solaris_sendfilev_chain.c Wed Nov 19 21:16:19 2014 +0300 @@ -197,7 +197,7 @@ ngx_solaris_sendfilev_chain(ngx_connecti c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, sent); if (eintr) { send = prev_send + sent; diff -r 88d55e5934f7 -r de68ed551bfb src/os/unix/ngx_writev_chain.c --- a/src/os/unix/ngx_writev_chain.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/os/unix/ngx_writev_chain.c Wed Nov 19 21:16:19 2014 +0300 @@ -131,7 +131,7 @@ ngx_writev_chain(ngx_connection_t *c, ng c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, sent); if (eintr) { send = prev_send; diff -r 88d55e5934f7 -r de68ed551bfb src/os/win32/ngx_wsasend_chain.c --- a/src/os/win32/ngx_wsasend_chain.c Tue Nov 18 17:07:14 2014 -0800 +++ b/src/os/win32/ngx_wsasend_chain.c Wed Nov 19 21:16:19 2014 +0300 @@ -113,7 +113,7 @@ ngx_wsasend_chain(ngx_connection_t *c, n c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, sent); if (send - prev_send != sent) { wev->ready = 0; @@ -279,7 +279,7 @@ ngx_overlapped_wsasend_chain(ngx_connect c->sent += sent; - in = ngx_handle_sent_chain(in, sent); + in = ngx_chain_update_sent(in, 
sent); if (in) { wev->ready = 0; From vbart at nginx.com Wed Nov 19 18:19:01 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:19:01 +0000 Subject: [nginx] Introduced the ngx_output_chain_to_iovec() function. Message-ID: details: http://hg.nginx.org/nginx/rev/8e903522c17a branches: changeset: 5913:8e903522c17a user: Valentin Bartenev date: Tue Oct 07 11:38:57 2014 +0400 description: Introduced the ngx_output_chain_to_iovec() function. It deduplicates code of the send chain functions and uses only preallocated memory, which completely solves the problem mentioned in d1bde5c3c5d2. diffstat: src/os/unix/ngx_darwin_sendfile_chain.c | 126 ++++-------------------- src/os/unix/ngx_freebsd_sendfile_chain.c | 122 ++++-------------------- src/os/unix/ngx_linux_sendfile_chain.c | 80 ++-------------- src/os/unix/ngx_os.h | 11 ++ src/os/unix/ngx_writev_chain.c | 151 +++++++++++++++++++++--------- 5 files changed, 171 insertions(+), 319 deletions(-) diffs (truncated from 742 to 300 lines): diff -r de68ed551bfb -r 8e903522c17a src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Nov 19 21:16:19 2014 +0300 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Tue Oct 07 11:38:57 2014 +0400 @@ -31,17 +31,15 @@ ngx_chain_t * ngx_darwin_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { int rc; - u_char *prev; off_t size, send, prev_send, aligned, sent, fprev; - off_t header_size, file_size; + off_t file_size; ngx_uint_t eintr; ngx_err_t err; ngx_buf_t *file; - ngx_array_t header, trailer; ngx_event_t *wev; ngx_chain_t *cl; + ngx_iovec_t header, trailer; struct sf_hdtr hdtr; - struct iovec *iov; struct iovec headers[NGX_IOVS_PREALLOCATE]; struct iovec trailers[NGX_IOVS_PREALLOCATE]; @@ -70,69 +68,27 @@ ngx_darwin_sendfile_chain(ngx_connection send = 0; - header.elts = headers; - header.size = sizeof(struct iovec); + header.iovs = headers; header.nalloc = NGX_IOVS_PREALLOCATE; - header.pool = c->pool; - 
trailer.elts = trailers; - trailer.size = sizeof(struct iovec); + trailer.iovs = trailers; trailer.nalloc = NGX_IOVS_PREALLOCATE; - trailer.pool = c->pool; for ( ;; ) { file = NULL; file_size = 0; - header_size = 0; eintr = 0; prev_send = send; - header.nelts = 0; - trailer.nelts = 0; - /* create the header iovec and coalesce the neighbouring bufs */ - prev = NULL; - iov = NULL; + cl = ngx_output_chain_to_iovec(&header, in, limit - send, c->log); - for (cl = in; cl && send < limit; cl = cl->next) { - - if (ngx_buf_special(cl->buf)) { - continue; - } - - if (!ngx_buf_in_memory_only(cl->buf)) { - break; - } - - size = cl->buf->last - cl->buf->pos; - - if (send + size > limit) { - size = limit - send; - } - - if (prev == cl->buf->pos) { - iov->iov_len += (size_t) size; - - } else { - if (header.nelts >= IOV_MAX) { - break; - } - - iov = ngx_array_push(&header); - if (iov == NULL) { - return NGX_CHAIN_ERROR; - } - - iov->iov_base = (void *) cl->buf->pos; - iov->iov_len = (size_t) size; - } - - prev = cl->buf->pos + (size_t) size; - header_size += size; - send += size; + if (cl == NGX_CHAIN_ERROR) { + return NGX_CHAIN_ERROR; } + send += header.size; if (cl && cl->buf->in_file && send < limit) { file = cl->buf; @@ -165,51 +121,17 @@ ngx_darwin_sendfile_chain(ngx_connection && fprev == cl->buf->file_pos); } - if (file && header.nelts == 0) { + if (file && header.count == 0) { /* create the trailer iovec and coalesce the neighbouring bufs */ - prev = NULL; - iov = NULL; + cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send, c->log); - while (cl && send < limit) { + if (cl == NGX_CHAIN_ERROR) { + return NGX_CHAIN_ERROR; + } - if (ngx_buf_special(cl->buf)) { - cl = cl->next; - continue; - } - - if (!ngx_buf_in_memory_only(cl->buf)) { - break; - } - - size = cl->buf->last - cl->buf->pos; - - if (send + size > limit) { - size = limit - send; - } - - if (prev == cl->buf->pos) { - iov->iov_len += (size_t) size; - - } else { - if (trailer.nelts >= IOV_MAX) { - break; - } - 
- iov = ngx_array_push(&trailer); - if (iov == NULL) { - return NGX_CHAIN_ERROR; - } - - iov->iov_base = (void *) cl->buf->pos; - iov->iov_len = (size_t) size; - } - - prev = cl->buf->pos + (size_t) size; - send += size; - cl = cl->next; - } + send += trailer.size; } if (file) { @@ -219,16 +141,16 @@ ngx_darwin_sendfile_chain(ngx_connection * but corresponding pointer is not NULL */ - hdtr.headers = header.nelts ? (struct iovec *) header.elts: NULL; - hdtr.hdr_cnt = header.nelts; - hdtr.trailers = trailer.nelts ? (struct iovec *) trailer.elts: NULL; - hdtr.trl_cnt = trailer.nelts; + hdtr.headers = header.count ? header.iovs : NULL; + hdtr.hdr_cnt = header.count; + hdtr.trailers = trailer.count ? trailer.iovs : NULL; + hdtr.trl_cnt = trailer.count; - sent = header_size + file_size; + sent = header.size + file_size; ngx_log_debug3(NGX_LOG_DEBUG_EVENT, c->log, 0, - "sendfile: @%O %O h:%O", - file->file_pos, sent, header_size); + "sendfile: @%O %O h:%uz", + file->file_pos, sent, header.size); rc = sendfile(file->file->fd, c->fd, file->file_pos, &sent, &hdtr, 0); @@ -271,13 +193,13 @@ ngx_darwin_sendfile_chain(ngx_connection ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, "sendfile: %d, @%O %O:%O", - rc, file->file_pos, sent, file_size + header_size); + rc, file->file_pos, sent, file_size + header.size); } else { - rc = writev(c->fd, header.elts, header.nelts); + rc = writev(c->fd, header.iovs, header.count); ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "writev: %d of %O", rc, header_size); + "writev: %d of %uz", rc, header.size); if (rc == -1) { err = ngx_errno; diff -r de68ed551bfb -r 8e903522c17a src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Nov 19 21:16:19 2014 +0300 +++ b/src/os/unix/ngx_freebsd_sendfile_chain.c Tue Oct 07 11:38:57 2014 +0400 @@ -33,17 +33,15 @@ ngx_chain_t * ngx_freebsd_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { int rc, flags; - u_char *prev; off_t size, send, prev_send, 
aligned, sent, fprev; - size_t header_size, file_size; + size_t file_size; ngx_uint_t eintr, eagain; ngx_err_t err; ngx_buf_t *file; - ngx_array_t header, trailer; ngx_event_t *wev; ngx_chain_t *cl; + ngx_iovec_t header, trailer; struct sf_hdtr hdtr; - struct iovec *iov; struct iovec headers[NGX_IOVS_PREALLOCATE]; struct iovec trailers[NGX_IOVS_PREALLOCATE]; @@ -74,69 +72,27 @@ ngx_freebsd_sendfile_chain(ngx_connectio eagain = 0; flags = 0; - header.elts = headers; - header.size = sizeof(struct iovec); + header.iovs = headers; header.nalloc = NGX_IOVS_PREALLOCATE; - header.pool = c->pool; - trailer.elts = trailers; - trailer.size = sizeof(struct iovec); + trailer.iovs = trailers; trailer.nalloc = NGX_IOVS_PREALLOCATE; - trailer.pool = c->pool; for ( ;; ) { file = NULL; file_size = 0; - header_size = 0; eintr = 0; prev_send = send; - header.nelts = 0; - trailer.nelts = 0; - /* create the header iovec and coalesce the neighbouring bufs */ - prev = NULL; - iov = NULL; + cl = ngx_output_chain_to_iovec(&header, in, limit - send, c->log); - for (cl = in; cl && send < limit; cl = cl->next) { - - if (ngx_buf_special(cl->buf)) { - continue; - } - - if (!ngx_buf_in_memory_only(cl->buf)) { - break; - } - - size = cl->buf->last - cl->buf->pos; - - if (send + size > limit) { - size = limit - send; - } - - if (prev == cl->buf->pos) { - iov->iov_len += (size_t) size; - - } else { - if (header.nelts >= IOV_MAX){ - break; - } - - iov = ngx_array_push(&header); - if (iov == NULL) { - return NGX_CHAIN_ERROR; - } - - iov->iov_base = (void *) cl->buf->pos; - iov->iov_len = (size_t) size; - } - - prev = cl->buf->pos + (size_t) size; - header_size += (size_t) size; - send += size; + if (cl == NGX_CHAIN_ERROR) { + return NGX_CHAIN_ERROR; } + send += header.size; if (cl && cl->buf->in_file && send < limit) { file = cl->buf; @@ -174,47 +130,13 @@ ngx_freebsd_sendfile_chain(ngx_connectio /* create the trailer iovec and coalesce the neighbouring bufs */ From vbart at nginx.com Wed Nov 19 
18:19:04 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:19:04 +0000 Subject: [nginx] Refactored ngx_solaris_sendfilev_chain(). Message-ID: details: http://hg.nginx.org/nginx/rev/4dd67e5d958e branches: changeset: 5914:4dd67e5d958e user: Valentin Bartenev date: Wed Nov 19 21:17:11 2014 +0300 description: Refactored ngx_solaris_sendfilev_chain(). Though ngx_solaris_sendfilev_chain() shouldn't suffer from the problem mentioned in d1bde5c3c5d2 since currently IOV_MAX on Solaris is 16, but this follows the change from 3d5717550371 in order to make the code look similar to other systems and potentially eliminates the problem in the future. diffstat: src/os/unix/ngx_solaris_sendfilev_chain.c | 25 +++++++------------------ 1 files changed, 7 insertions(+), 18 deletions(-) diffs (77 lines): diff -r 8e903522c17a -r 4dd67e5d958e src/os/unix/ngx_solaris_sendfilev_chain.c --- a/src/os/unix/ngx_solaris_sendfilev_chain.c Tue Oct 07 11:38:57 2014 +0400 +++ b/src/os/unix/ngx_solaris_sendfilev_chain.c Wed Nov 19 21:17:11 2014 +0300 @@ -48,8 +48,8 @@ ngx_solaris_sendfilev_chain(ngx_connecti ssize_t n; ngx_int_t eintr; ngx_err_t err; + ngx_uint_t nsfv; sendfilevec_t *sfv, sfvs[NGX_SENDFILEVECS]; - ngx_array_t vec; ngx_event_t *wev; ngx_chain_t *cl; @@ -73,11 +73,6 @@ ngx_solaris_sendfilev_chain(ngx_connecti send = 0; - vec.elts = sfvs; - vec.size = sizeof(sendfilevec_t); - vec.nalloc = NGX_SENDFILEVECS; - vec.pool = c->pool; - for ( ;; ) { fd = SFV_FD_SELF; prev = NULL; @@ -87,7 +82,7 @@ ngx_solaris_sendfilev_chain(ngx_connecti sent = 0; prev_send = send; - vec.nelts = 0; + nsfv = 0; /* create the sendfilevec and coalesce the neighbouring bufs */ @@ -110,14 +105,11 @@ ngx_solaris_sendfilev_chain(ngx_connecti sfv->sfv_len += (size_t) size; } else { - if (vec.nelts >= IOV_MAX) { + if (nsfv == NGX_SENDFILEVECS) { break; } - sfv = ngx_array_push(&vec); - if (sfv == NULL) { - return NGX_CHAIN_ERROR; - } + sfv = &sfvs[nsfv++]; sfv->sfv_fd = SFV_FD_SELF; 
sfv->sfv_flag = 0; @@ -148,14 +140,11 @@ ngx_solaris_sendfilev_chain(ngx_connecti sfv->sfv_len += (size_t) size; } else { - if (vec.nelts >= IOV_MAX) { + if (nsfv == NGX_SENDFILEVECS) { break; } - sfv = ngx_array_push(&vec); - if (sfv == NULL) { - return NGX_CHAIN_ERROR; - } + sfv = &sfvs[nsfv++]; fd = cl->buf->file->fd; sfv->sfv_fd = fd; @@ -169,7 +158,7 @@ ngx_solaris_sendfilev_chain(ngx_connecti } } - n = sendfilev(c->fd, vec.elts, vec.nelts, &sent); + n = sendfilev(c->fd, sfvs, nsfv, &sent); if (n == -1) { err = ngx_errno; From vbart at nginx.com Wed Nov 19 18:19:06 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:19:06 +0000 Subject: [nginx] Moved the code for coalescing file buffers to a separate... Message-ID: details: http://hg.nginx.org/nginx/rev/ac3f78219f85 branches: changeset: 5915:ac3f78219f85 user: Valentin Bartenev date: Wed Aug 13 15:11:45 2014 +0400 description: Moved the code for coalescing file buffers to a separate function. diffstat: src/core/ngx_buf.c | 42 ++++++++++++++++++++++++++++++++ src/core/ngx_buf.h | 2 + src/os/unix/ngx_darwin_sendfile_chain.c | 27 ++------------------ src/os/unix/ngx_freebsd_sendfile_chain.c | 27 ++------------------ src/os/unix/ngx_linux_sendfile_chain.c | 27 ++------------------ 5 files changed, 53 insertions(+), 72 deletions(-) diffs (199 lines): diff -r 4dd67e5d958e -r ac3f78219f85 src/core/ngx_buf.c --- a/src/core/ngx_buf.c Wed Nov 19 21:17:11 2014 +0300 +++ b/src/core/ngx_buf.c Wed Aug 13 15:11:45 2014 +0400 @@ -220,6 +220,48 @@ ngx_chain_update_chains(ngx_pool_t *p, n } +off_t +ngx_chain_coalesce_file(ngx_chain_t **in, off_t limit) +{ + off_t total, size, aligned, fprev; + ngx_fd_t fd; + ngx_chain_t *cl; + + total = 0; + + cl = *in; + fd = cl->buf->file->fd; + + do { + size = cl->buf->file_last - cl->buf->file_pos; + + if (size > limit - total) { + size = limit - total; + + aligned = (cl->buf->file_pos + size + ngx_pagesize - 1) + & ~((off_t) ngx_pagesize - 1); + + if (aligned <= 
cl->buf->file_last) { + size = aligned - cl->buf->file_pos; + } + } + + total += size; + fprev = cl->buf->file_pos + size; + cl = cl->next; + + } while (cl + && cl->buf->in_file + && total < limit + && fd == cl->buf->file->fd + && fprev == cl->buf->file_pos); + + *in = cl; + + return total; +} + + ngx_chain_t * ngx_chain_update_sent(ngx_chain_t *in, off_t sent) { diff -r 4dd67e5d958e -r ac3f78219f85 src/core/ngx_buf.h --- a/src/core/ngx_buf.h Wed Nov 19 21:17:11 2014 +0300 +++ b/src/core/ngx_buf.h Wed Aug 13 15:11:45 2014 +0400 @@ -158,6 +158,8 @@ ngx_chain_t *ngx_chain_get_free_buf(ngx_ void ngx_chain_update_chains(ngx_pool_t *p, ngx_chain_t **free, ngx_chain_t **busy, ngx_chain_t **out, ngx_buf_tag_t tag); +off_t ngx_chain_coalesce_file(ngx_chain_t **in, off_t limit); + ngx_chain_t *ngx_chain_update_sent(ngx_chain_t *in, off_t sent); #endif /* _NGX_BUF_H_INCLUDED_ */ diff -r 4dd67e5d958e -r ac3f78219f85 src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Nov 19 21:17:11 2014 +0300 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -31,7 +31,7 @@ ngx_chain_t * ngx_darwin_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { int rc; - off_t size, send, prev_send, aligned, sent, fprev; + off_t send, prev_send, sent; off_t file_size; ngx_uint_t eintr; ngx_err_t err; @@ -95,30 +95,9 @@ ngx_darwin_sendfile_chain(ngx_connection /* coalesce the neighbouring file bufs */ - do { - size = cl->buf->file_last - cl->buf->file_pos; + file_size = ngx_chain_coalesce_file(&cl, limit - send); - if (send + size > limit) { - size = limit - send; - - aligned = (cl->buf->file_pos + size + ngx_pagesize - 1) - & ~((off_t) ngx_pagesize - 1); - - if (aligned <= cl->buf->file_last) { - size = aligned - cl->buf->file_pos; - } - } - - file_size += size; - send += size; - fprev = cl->buf->file_pos + size; - cl = cl->next; - - } while (cl - && cl->buf->in_file - && send < limit - && file->file->fd == 
cl->buf->file->fd - && fprev == cl->buf->file_pos); + send += file_size; } if (file && header.count == 0) { diff -r 4dd67e5d958e -r ac3f78219f85 src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Nov 19 21:17:11 2014 +0300 +++ b/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -33,7 +33,7 @@ ngx_chain_t * ngx_freebsd_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { int rc, flags; - off_t size, send, prev_send, aligned, sent, fprev; + off_t send, prev_send, sent; size_t file_size; ngx_uint_t eintr, eagain; ngx_err_t err; @@ -99,30 +99,9 @@ ngx_freebsd_sendfile_chain(ngx_connectio /* coalesce the neighbouring file bufs */ - do { - size = cl->buf->file_last - cl->buf->file_pos; + file_size = (size_t) ngx_chain_coalesce_file(&cl, limit - send); - if (send + size > limit) { - size = limit - send; - - aligned = (cl->buf->file_pos + size + ngx_pagesize - 1) - & ~((off_t) ngx_pagesize - 1); - - if (aligned <= cl->buf->file_last) { - size = aligned - cl->buf->file_pos; - } - } - - file_size += (size_t) size; - send += size; - fprev = cl->buf->file_pos + size; - cl = cl->next; - - } while (cl - && cl->buf->in_file - && send < limit - && file->file->fd == cl->buf->file->fd - && fprev == cl->buf->file_pos); + send += file_size; } diff -r 4dd67e5d958e -r ac3f78219f85 src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Wed Nov 19 21:17:11 2014 +0300 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -31,7 +31,7 @@ ngx_chain_t * ngx_linux_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { int rc, tcp_nodelay; - off_t size, send, prev_send, aligned, sent, fprev; + off_t send, prev_send, sent; size_t file_size; ngx_err_t err; ngx_buf_t *file; @@ -153,30 +153,9 @@ ngx_linux_sendfile_chain(ngx_connection_ /* coalesce the neighbouring file bufs */ - do { - size = cl->buf->file_last - cl->buf->file_pos; + file_size = (size_t) 
ngx_chain_coalesce_file(&cl, limit - send); - if (send + size > limit) { - size = limit - send; - - aligned = (cl->buf->file_pos + size + ngx_pagesize - 1) - & ~((off_t) ngx_pagesize - 1); - - if (aligned <= cl->buf->file_last) { - size = aligned - cl->buf->file_pos; - } - } - - file_size += (size_t) size; - send += size; - fprev = cl->buf->file_pos + size; - cl = cl->next; - - } while (cl - && cl->buf->in_file - && send < limit - && file->file->fd == cl->buf->file->fd - && fprev == cl->buf->file_pos); + send += file_size; } if (file) { From vbart at nginx.com Wed Nov 19 18:19:09 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:19:09 +0000 Subject: [nginx] Merged conditions in the ngx_*_sendfile_chain() functions. Message-ID: details: http://hg.nginx.org/nginx/rev/e044893b4587 branches: changeset: 5916:e044893b4587 user: Valentin Bartenev date: Wed Aug 13 15:11:45 2014 +0400 description: Merged conditions in the ngx_*_sendfile_chain() functions. No functional changes. 
diffstat: src/os/unix/ngx_darwin_sendfile_chain.c | 23 ++++++++++------------- src/os/unix/ngx_freebsd_sendfile_chain.c | 9 --------- src/os/unix/ngx_linux_sendfile_chain.c | 5 ----- 3 files changed, 10 insertions(+), 27 deletions(-) diffs (101 lines): diff -r ac3f78219f85 -r e044893b4587 src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -75,8 +75,6 @@ ngx_darwin_sendfile_chain(ngx_connection trailer.nalloc = NGX_IOVS_PREALLOCATE; for ( ;; ) { - file = NULL; - file_size = 0; eintr = 0; prev_send = send; @@ -98,23 +96,22 @@ ngx_darwin_sendfile_chain(ngx_connection file_size = ngx_chain_coalesce_file(&cl, limit - send); send += file_size; - } - if (file && header.count == 0) { + if (header.count == 0) { - /* create the trailer iovec and coalesce the neighbouring bufs */ + /* + * create the trailer iovec and coalesce the neighbouring bufs + */ - cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send, c->log); + cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send, c->log); - if (cl == NGX_CHAIN_ERROR) { - return NGX_CHAIN_ERROR; + if (cl == NGX_CHAIN_ERROR) { + return NGX_CHAIN_ERROR; + } + + send += trailer.size; } - send += trailer.size; - } - - if (file) { - /* * sendfile() returns EINVAL if sf_hdtr's count is 0, * but corresponding pointer is not NULL diff -r ac3f78219f85 -r e044893b4587 src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -79,8 +79,6 @@ ngx_freebsd_sendfile_chain(ngx_connectio trailer.nalloc = NGX_IOVS_PREALLOCATE; for ( ;; ) { - file = NULL; - file_size = 0; eintr = 0; prev_send = send; @@ -102,10 +100,6 @@ ngx_freebsd_sendfile_chain(ngx_connectio file_size = (size_t) ngx_chain_coalesce_file(&cl, limit - send); send += file_size; - } - - - if (file) 
{ /* create the trailer iovec and coalesce the neighbouring bufs */ @@ -116,9 +110,6 @@ ngx_freebsd_sendfile_chain(ngx_connectio } send += trailer.size; - } - - if (file) { if (ngx_freebsd_use_tcp_nopush && c->tcp_nopush == NGX_TCP_NOPUSH_UNSET) diff -r ac3f78219f85 -r e044893b4587 src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -66,8 +66,6 @@ ngx_linux_sendfile_chain(ngx_connection_ header.nalloc = NGX_IOVS_PREALLOCATE; for ( ;; ) { - file = NULL; - file_size = 0; eintr = 0; prev_send = send; @@ -156,9 +154,6 @@ ngx_linux_sendfile_chain(ngx_connection_ file_size = (size_t) ngx_chain_coalesce_file(&cl, limit - send); send += file_size; - } - - if (file) { #if 1 if (file_size == 0) { ngx_debug_point(); From vbart at nginx.com Wed Nov 19 18:19:12 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:19:12 +0000 Subject: [nginx] Moved writev() handling code to a separate function. Message-ID: details: http://hg.nginx.org/nginx/rev/2c64b69daec5 branches: changeset: 5917:2c64b69daec5 user: Valentin Bartenev date: Wed Aug 13 15:11:45 2014 +0400 description: Moved writev() handling code to a separate function. This reduces code duplication and unifies debug logging of the writev() syscall among various send chain functions. 
diffstat: src/os/unix/ngx_darwin_sendfile_chain.c | 29 ++---------- src/os/unix/ngx_freebsd_sendfile_chain.c | 29 ++---------- src/os/unix/ngx_linux_sendfile_chain.c | 28 ++---------- src/os/unix/ngx_os.h | 3 + src/os/unix/ngx_writev_chain.c | 73 ++++++++++++++++++------------- 5 files changed, 60 insertions(+), 102 deletions(-) diffs (268 lines): diff -r e044893b4587 -r 2c64b69daec5 src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -33,6 +33,7 @@ ngx_darwin_sendfile_chain(ngx_connection int rc; off_t send, prev_send, sent; off_t file_size; + ssize_t n; ngx_uint_t eintr; ngx_err_t err; ngx_buf_t *file; @@ -172,33 +173,13 @@ ngx_darwin_sendfile_chain(ngx_connection rc, file->file_pos, sent, file_size + header.size); } else { - rc = writev(c->fd, header.iovs, header.count); + n = ngx_writev(c, &header); - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "writev: %d of %uz", rc, header.size); - - if (rc == -1) { - err = ngx_errno; - - switch (err) { - case NGX_EAGAIN: - break; - - case NGX_EINTR: - eintr = 1; - break; - - default: - wev->error = 1; - ngx_connection_error(c, err, "writev() failed"); - return NGX_CHAIN_ERROR; - } - - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, - "writev() not ready"); + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; } - sent = rc > 0 ? rc : 0; + sent = (n == NGX_AGAIN) ? 
0 : n; } c->sent += sent; diff -r e044893b4587 -r 2c64b69daec5 src/os/unix/ngx_freebsd_sendfile_chain.c --- a/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_freebsd_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -35,6 +35,7 @@ ngx_freebsd_sendfile_chain(ngx_connectio int rc, flags; off_t send, prev_send, sent; size_t file_size; + ssize_t n; ngx_uint_t eintr, eagain; ngx_err_t err; ngx_buf_t *file; @@ -217,33 +218,13 @@ ngx_freebsd_sendfile_chain(ngx_connectio rc, file->file_pos, sent, file_size + header.size); } else { - rc = writev(c->fd, header.iovs, header.count); + n = ngx_writev(c, &header); - ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, - "writev: %d of %uz", rc, header.size); - - if (rc == -1) { - err = ngx_errno; - - switch (err) { - case NGX_EAGAIN: - break; - - case NGX_EINTR: - eintr = 1; - break; - - default: - wev->error = 1; - ngx_connection_error(c, err, "writev() failed"); - return NGX_CHAIN_ERROR; - } - - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, - "writev() not ready"); + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; } - sent = rc > 0 ? rc : 0; + sent = (n == NGX_AGAIN) ? 
0 : n; } c->sent += sent; diff -r e044893b4587 -r 2c64b69daec5 src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -33,6 +33,7 @@ ngx_linux_sendfile_chain(ngx_connection_ int rc, tcp_nodelay; off_t send, prev_send, sent; size_t file_size; + ssize_t n; ngx_err_t err; ngx_buf_t *file; ngx_uint_t eintr; @@ -199,32 +200,13 @@ ngx_linux_sendfile_chain(ngx_connection_ rc, file->file_pos, sent, file_size); } else { - rc = writev(c->fd, header.iovs, header.count); + n = ngx_writev(c, &header); - if (rc == -1) { - err = ngx_errno; - - switch (err) { - case NGX_EAGAIN: - break; - - case NGX_EINTR: - eintr = 1; - break; - - default: - wev->error = 1; - ngx_connection_error(c, err, "writev() failed"); - return NGX_CHAIN_ERROR; - } - - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, - "writev() not ready"); + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; } - sent = rc > 0 ? rc : 0; - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "writev: %O", sent); + sent = (n == NGX_AGAIN) ? 
0 : n; } c->sent += sent; diff -r e044893b4587 -r 2c64b69daec5 src/os/unix/ngx_os.h --- a/src/os/unix/ngx_os.h Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_os.h Wed Aug 13 15:11:45 2014 +0400 @@ -75,6 +75,9 @@ ngx_chain_t *ngx_output_chain_to_iovec(n size_t limit, ngx_log_t *log); +ssize_t ngx_writev(ngx_connection_t *c, ngx_iovec_t *vec); + + extern ngx_os_io_t ngx_os_io; extern ngx_int_t ngx_ncpu; extern ngx_int_t ngx_max_sockets; diff -r e044893b4587 -r 2c64b69daec5 src/os/unix/ngx_writev_chain.c --- a/src/os/unix/ngx_writev_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_writev_chain.c Wed Aug 13 15:11:45 2014 +0400 @@ -15,8 +15,6 @@ ngx_writev_chain(ngx_connection_t *c, ng { ssize_t n, sent; off_t send, prev_send; - ngx_uint_t eintr; - ngx_err_t err; ngx_chain_t *cl; ngx_event_t *wev; ngx_iovec_t vec; @@ -51,7 +49,6 @@ ngx_writev_chain(ngx_connection_t *c, ng vec.nalloc = NGX_IOVS_PREALLOCATE; for ( ;; ) { - eintr = 0; prev_send = send; /* create the iovec and coalesce the neighbouring bufs */ @@ -83,42 +80,18 @@ ngx_writev_chain(ngx_connection_t *c, ng send += vec.size; - n = writev(c->fd, vec.iovs, vec.count); + n = ngx_writev(c, &vec); - if (n == -1) { - err = ngx_errno; - - switch (err) { - case NGX_EAGAIN: - break; - - case NGX_EINTR: - eintr = 1; - break; - - default: - wev->error = 1; - (void) ngx_connection_error(c, err, "writev() failed"); - return NGX_CHAIN_ERROR; - } - - ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, - "writev() not ready"); + if (n == NGX_ERROR) { + return NGX_CHAIN_ERROR; } - sent = n > 0 ? n : 0; - - ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0, "writev: %z", sent); + sent = (n == NGX_AGAIN) ? 
0 : n; c->sent += sent; in = ngx_chain_update_sent(in, sent); - if (eintr) { - send = prev_send; - continue; - } - if (send - prev_send != sent) { wev->ready = 0; return in; @@ -203,3 +176,41 @@ ngx_output_chain_to_iovec(ngx_iovec_t *v return in; } + + +ssize_t +ngx_writev(ngx_connection_t *c, ngx_iovec_t *vec) +{ + ssize_t n; + ngx_err_t err; + +eintr: + + n = writev(c->fd, vec->iovs, vec->count); + + ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, + "writev: %z of %uz", n, vec->size); + + if (n == -1) { + err = ngx_errno; + + switch (err) { + case NGX_EAGAIN: + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, + "writev() not ready"); + return NGX_AGAIN; + + case NGX_EINTR: + ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, err, + "writev() was interrupted"); + goto eintr; + + default: + c->write->error = 1; + ngx_connection_error(c, err, "writev() failed"); + return NGX_ERROR; + } + } + + return n; +} From vbart at nginx.com Wed Nov 19 18:19:14 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:19:14 +0000 Subject: [nginx] Fixed type of sendfile() return value on Linux. Message-ID: details: http://hg.nginx.org/nginx/rev/c50b5ed3cd4b branches: changeset: 5918:c50b5ed3cd4b user: Valentin Bartenev date: Wed Nov 19 21:18:13 2014 +0300 description: Fixed type of sendfile() return value on Linux. There was no real problem since the amount of bytes can be sent is limited by NGX_SENDFILE_MAXSIZE to less than 2G. 
But that can be changed in the future diffstat: src/os/unix/ngx_linux_sendfile_chain.c | 12 ++++++------ 1 files changed, 6 insertions(+), 6 deletions(-) diffs (39 lines): diff -r 2c64b69daec5 -r c50b5ed3cd4b src/os/unix/ngx_linux_sendfile_chain.c --- a/src/os/unix/ngx_linux_sendfile_chain.c Wed Aug 13 15:11:45 2014 +0400 +++ b/src/os/unix/ngx_linux_sendfile_chain.c Wed Nov 19 21:18:13 2014 +0300 @@ -30,7 +30,7 @@ ngx_chain_t * ngx_linux_sendfile_chain(ngx_connection_t *c, ngx_chain_t *in, off_t limit) { - int rc, tcp_nodelay; + int tcp_nodelay; off_t send, prev_send, sent; size_t file_size; ssize_t n; @@ -170,9 +170,9 @@ ngx_linux_sendfile_chain(ngx_connection_ ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0, "sendfile: @%O %uz", file->file_pos, file_size); - rc = sendfile(c->fd, file->file->fd, &offset, file_size); + n = sendfile(c->fd, file->file->fd, &offset, file_size); - if (rc == -1) { + if (n == -1) { err = ngx_errno; switch (err) { @@ -193,11 +193,11 @@ ngx_linux_sendfile_chain(ngx_connection_ "sendfile() is not ready"); } - sent = rc > 0 ? rc : 0; + sent = n > 0 ? n : 0; ngx_log_debug4(NGX_LOG_DEBUG_EVENT, c->log, 0, - "sendfile: %d, @%O %O:%uz", - rc, file->file_pos, sent, file_size); + "sendfile: %z, @%O %O:%uz", + n, file->file_pos, sent, file_size); } else { n = ngx_writev(c, &header); From vbart at nginx.com Wed Nov 19 18:46:21 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Wed, 19 Nov 2014 18:46:21 +0000 Subject: [nginx] Style. Message-ID: details: http://hg.nginx.org/nginx/rev/fddc6bed1e6e branches: changeset: 5919:fddc6bed1e6e user: Valentin Bartenev date: Wed Nov 19 21:46:01 2014 +0300 description: Style. 
diffstat: src/os/unix/ngx_darwin_sendfile_chain.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r c50b5ed3cd4b -r fddc6bed1e6e src/os/unix/ngx_darwin_sendfile_chain.c --- a/src/os/unix/ngx_darwin_sendfile_chain.c Wed Nov 19 21:18:13 2014 +0300 +++ b/src/os/unix/ngx_darwin_sendfile_chain.c Wed Nov 19 21:46:01 2014 +0300 @@ -104,8 +104,8 @@ ngx_darwin_sendfile_chain(ngx_connection * create the trailer iovec and coalesce the neighbouring bufs */ - cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send, c->log); - + cl = ngx_output_chain_to_iovec(&trailer, cl, limit - send, + c->log); if (cl == NGX_CHAIN_ERROR) { return NGX_CHAIN_ERROR; } From piotr at cloudflare.com Wed Nov 19 20:50:20 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 19 Nov 2014 12:50:20 -0800 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: <20141119163527.GX26593@mdounin.ru> References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> Message-ID: Hey Maxim, > Current nginx behaviour is to respect both, and I don't see real > reasons to change the behaviour. How about adhering to RFC standards? RFC7232 clearly describes the precedence for evaluation of conditional requests (section 6) and puts even more emphasis on it by saying that If-(Un)modified-Since headers MUST (not SHOULD, MUST) be ignored when If-(None-)Match headers are present (section 3). > A while ago I've tried to dig > into HTTPbis VCS and tracker to find out why this part of the > specification was changed from RFC2616, but failed. ETag is a stronger validator than a date, that's it.
Best regards, Piotr Sikora From piotr at cloudflare.com Wed Nov 19 21:06:22 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 19 Nov 2014 13:06:22 -0800 Subject: [PATCH 1 of 2] Cache: remove unused valid_msec fields In-Reply-To: <20141119164353.GZ26593@mdounin.ru> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <20141119164353.GZ26593@mdounin.ru> Message-ID: Hey Maxim, > I would rather not, for two reasons: > > - I would actually like to see support for subsecond cache > validity times added eventually (unlikely to happen in the near > future, but still). That's a valid reason, but it's going to be of rather limited use, because neither "Expires" nor "Cache-Control" can request such validity. Also, this field has been there since the introduction of the cache over 5 years ago, just wasting I/O, so I'd argue that there is no reason to keep it unless it's being used (which it is not). It can always be added later on, when it's needed. > - This implies cache file on-disk format change and invalidation > of all previously cached files, so shouldn't be done without > some serious enough reason. Trust me, I know that ;) That's why it's part of a patchset that adds a status field to the on-disk format (in exactly the same spot, no less). Also, I'm going to do another on-disk format change during the next release cycle (for the Age and Expires headers that we've talked about), so you might postpone committing this if you're worried about too many bumps, but we're going to invalidate all previously cached files one way or another.
Best regards, Piotr Sikora From piotr at cloudflare.com Wed Nov 19 21:52:34 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 19 Nov 2014 13:52:34 -0800 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: <20141119163814.GY26593@mdounin.ru> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> Message-ID: Hey Maxim, > And what's wrong with it? > I don't see why this should be status-specific. Because, according to the RFC, 304 Not Modified can only be emitted for 200 OK responses, so we know that revalidation of other status codes cannot happen, at least not when talking with a well-behaving server, so sending conditional requests for such cached responses is pointless. Maybe I should elaborate a bit more on the problem I'm trying to work around, which is also the underlying reason for the fix I made in fd283aa92e04. Prior to that commit, when nginx cached a 404 Not Found response without a Last-Modified header, i.e. with such minimal headers: HTTP/1.1 404 Not Found Date: Wed, 19 Nov 2014 21:12:47 GMT it would always try to revalidate it with a bogus If-Modified-Since header: GET /notfound HTTP/1.1 Host: www.example.com If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT for which the upstream server would usually respond with a full 404 Not Found response (not 304 Not Modified), unless the file appeared on the upstream server (and would have resulted in 200 OK for unconditional requests), in which case it would respond with 304 Not Modified (thanks to the broken If-Modified-Since test on their side), revalidating the 404 Not Found based on the 200 OK response. The problem mostly disappeared after fd283aa92e04, but it can still happen, e.g.
when a cached 404 Not Found response with a Last-Modified header is revalidated (and hence cached forever) by a 304 Not Modified sent instead of a 200 OK response whose Last-Modified header is older than the one in the cached response. Hopefully that explains the problem enough... Let me know if you still have any doubt as to why it is important to change the current behavior to send conditional requests only for cached 200 OK responses. Best regards, Piotr Sikora From ru at nginx.com Thu Nov 20 12:38:15 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 20 Nov 2014 12:38:15 +0000 Subject: [nginx] Resolver: fixed use-after-free memory access. Message-ID: details: http://hg.nginx.org/nginx/rev/7420068c4d4b branches: changeset: 5920:7420068c4d4b user: Ruslan Ermilov date: Thu Nov 20 15:24:40 2014 +0300 description: Resolver: fixed use-after-free memory access. In 954867a2f0a6, we switched to using resolver node as the timer event data, so make sure we do not free resolver node memory until the corresponding timer is deleted.
diffstat: src/core/ngx_resolver.c | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) diffs (39 lines): diff -r fddc6bed1e6e -r 7420068c4d4b src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Wed Nov 19 21:46:01 2014 +0300 +++ b/src/core/ngx_resolver.c Thu Nov 20 15:24:40 2014 +0300 @@ -1568,8 +1568,6 @@ ngx_resolver_process_a(ngx_resolver_t *r ngx_rbtree_delete(&r->name_rbtree, &rn->node); - ngx_resolver_free_node(r, rn); - /* unlock name mutex */ while (next) { @@ -1580,6 +1578,8 @@ ngx_resolver_process_a(ngx_resolver_t *r ctx->handler(ctx); } + ngx_resolver_free_node(r, rn); + return; } @@ -2143,8 +2143,6 @@ valid: ngx_rbtree_delete(tree, &rn->node); - ngx_resolver_free_node(r, rn); - /* unlock addr mutex */ while (next) { @@ -2155,6 +2153,8 @@ valid: ctx->handler(ctx); } + ngx_resolver_free_node(r, rn); + return; } From ru at nginx.com Thu Nov 20 12:38:18 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 20 Nov 2014 12:38:18 +0000 Subject: [nginx] Resolver: fixed debug event logging. Message-ID: details: http://hg.nginx.org/nginx/rev/5004210e8c78 branches: changeset: 5921:5004210e8c78 user: Ruslan Ermilov date: Thu Nov 20 15:24:42 2014 +0300 description: Resolver: fixed debug event logging. In 954867a2f0a6, we switched to using resolver node as the timer event data. This broke debug event logging. Replaced now unused ngx_resolver_ctx_t.ident with ngx_resolver_node_t.ident so that ngx_event_ident() extracts something sensible when accessing ngx_resolver_node_t as ngx_connection_t. 
diffstat: src/core/ngx_resolver.c | 25 +++++++++++++++---------- src/core/ngx_resolver.h | 15 ++++++++------- 2 files changed, 23 insertions(+), 17 deletions(-) diffs (132 lines): diff -r 7420068c4d4b -r 5004210e8c78 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Thu Nov 20 15:24:40 2014 +0300 +++ b/src/core/ngx_resolver.c Thu Nov 20 15:24:42 2014 +0300 @@ -48,6 +48,11 @@ typedef struct { } ngx_resolver_an_t; +#define ngx_resolver_node(n) \ + (ngx_resolver_node_t *) \ + ((u_char *) (n) - offsetof(ngx_resolver_node_t, node)) + + ngx_int_t ngx_udp_connect(ngx_udp_connection_t *uc); @@ -288,7 +293,7 @@ ngx_resolver_cleanup_tree(ngx_resolver_t while (tree->root != tree->sentinel) { - rn = (ngx_resolver_node_t *) ngx_rbtree_min(tree->root, tree->sentinel); + rn = ngx_resolver_node(ngx_rbtree_min(tree->root, tree->sentinel)); ngx_queue_remove(&rn->queue); @@ -666,7 +671,7 @@ ngx_resolve_name_locked(ngx_resolver_t * ctx->event->handler = ngx_resolver_timeout_handler; ctx->event->data = rn; ctx->event->log = r->log; - ctx->ident = -1; + rn->ident = -1; ngx_add_timer(ctx->event, ctx->timeout); } @@ -859,7 +864,7 @@ ngx_resolve_addr(ngx_resolver_ctx_t *ctx ctx->event->handler = ngx_resolver_timeout_handler; ctx->event->data = rn; ctx->event->log = r->log; - ctx->ident = -1; + rn->ident = -1; ngx_add_timer(ctx->event, ctx->timeout); @@ -2290,7 +2295,7 @@ ngx_resolver_lookup_name(ngx_resolver_t /* hash == node->key */ - rn = (ngx_resolver_node_t *) node; + rn = ngx_resolver_node(node); rc = ngx_memn2cmp(name->data, rn->name, name->len, rn->nlen); @@ -2329,7 +2334,7 @@ ngx_resolver_lookup_addr(ngx_resolver_t /* addr == node->key */ - return (ngx_resolver_node_t *) node; + return ngx_resolver_node(node); } /* not found */ @@ -2365,7 +2370,7 @@ ngx_resolver_lookup_addr6(ngx_resolver_t /* hash == node->key */ - rn = (ngx_resolver_node_t *) node; + rn = ngx_resolver_node(node); rc = ngx_memcmp(addr, &rn->addr6, 16); @@ -2403,8 +2408,8 @@ 
ngx_resolver_rbtree_insert_value(ngx_rbt } else { /* node->key == temp->key */ - rn = (ngx_resolver_node_t *) node; - rn_temp = (ngx_resolver_node_t *) temp; + rn = ngx_resolver_node(node); + rn_temp = ngx_resolver_node(temp); p = (ngx_memn2cmp(rn->name, rn_temp->name, rn->nlen, rn_temp->nlen) < 0) ? &temp->left : &temp->right; @@ -2446,8 +2451,8 @@ ngx_resolver_rbtree_insert_addr6_value(n } else { /* node->key == temp->key */ - rn = (ngx_resolver_node_t *) node; - rn_temp = (ngx_resolver_node_t *) temp; + rn = ngx_resolver_node(node); + rn_temp = ngx_resolver_node(temp); p = (ngx_memcmp(&rn->addr6, &rn_temp->addr6, 16) < 0) ? &temp->left : &temp->right; diff -r 7420068c4d4b -r 5004210e8c78 src/core/ngx_resolver.h --- a/src/core/ngx_resolver.h Thu Nov 20 15:24:40 2014 +0300 +++ b/src/core/ngx_resolver.h Thu Nov 20 15:24:42 2014 +0300 @@ -51,11 +51,15 @@ typedef void (*ngx_resolver_handler_pt)( typedef struct { - ngx_rbtree_node_t node; + /* PTR: resolved name, A: name to resolve */ + u_char *name; + ngx_queue_t queue; - /* PTR: resolved name, A: name to resolve */ - u_char *name; + /* event ident must be after 3 pointers as in ngx_connection_t */ + ngx_int_t ident; + + ngx_rbtree_node_t node; #if (NGX_HAVE_INET6) /* PTR: IPv6 address to resolve (IPv4 address is in rbtree node key) */ @@ -103,7 +107,7 @@ typedef struct { void *dummy; ngx_log_t *log; - /* ident must be after 3 pointers */ + /* event ident must be after 3 pointers as in ngx_connection_t */ ngx_int_t ident; /* simple round robin DNS peers balancer */ @@ -143,9 +147,6 @@ struct ngx_resolver_ctx_s { ngx_resolver_t *resolver; ngx_udp_connection_t *udp_connection; - /* ident must be after 3 pointers */ - ngx_int_t ident; - ngx_int_t state; ngx_str_t name; From mdounin at mdounin.ru Thu Nov 20 16:02:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Nov 2014 19:02:59 +0300 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: References: 
<3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> Message-ID: <20141120160259.GG53423@mdounin.ru> Hello! On Wed, Nov 19, 2014 at 12:50:20PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Current nginx behaviour is to respect both, and I don't see real > > reasons to change the behaviour. > > How about adhering to RFC standards? RFC7232 clearly describes the > precedence for evaluation of conditional requests (section 6) and puts > even more emphasis on it by saying that If-(Un)modified-Since headers > MUST (not SHOULD, MUST) be ignored when If-(None-)Match headers are > present (section 3). Sure. We do adhere RFC2616 here. The problem is that RFC7232 is different, and there are no known reasons why it should. After HTTPbis working group switched to working on HTTP/2 and introduced GTFO frame, I tend to be very sceptical about their work. > > A while ago I've tried to dig > > into HTTPbis VCS and tracker to find out why this part of the > > specification was changed from RFC2616, but failed. > > ETag is stronger validator than date, that's it. The same applies to RFC2616, but it mandates different behaviour. So what's the problem with checking both date and ETag? -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Nov 20 16:14:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Nov 2014 19:14:31 +0300 Subject: [PATCH 1 of 2] Cache: remove unused valid_msec fields In-Reply-To: References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <20141119164353.GZ26593@mdounin.ru> Message-ID: <20141120161431.GH53423@mdounin.ru> Hello! On Wed, Nov 19, 2014 at 01:06:22PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > I would rather not, for two reasons: > > > > - I would actually like to see support for subsecond cache > > validity times added eventually (unlikely to happen in the near > > future, but still). 
> > That's a valid reason, but it's going to be of rather limited use, > because neither "Expires" nor "Cache-Control" can request such > validity. I've seen more than one attempt to use nginx for microcaching, and this looks like a perfectly valid use case which will benefit from subsecond times a lot. > Also, this field has been there since the introduction of cache over 5 > years ago, just wasting I/O, so I'd argue that there is no reason to > keep it, unless it's being used (which it is not). It can always be added > later on, when it's needed. From an I/O point of view, there is no difference, as there is padding anyway. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Nov 20 16:39:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Nov 2014 19:39:12 +0300 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> Message-ID: <20141120163912.GI53423@mdounin.ru> Hello! On Wed, Nov 19, 2014 at 01:52:34PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > And what's wrong with it? > > I don't see why this should be status-specific. > > Because, according to the RFC, 304 Not Modified can only be emitted > for 200 OK responses, so we know that revalidation of other status > codes cannot happen, at least not when talking with a well-behaving > server, so sending conditional requests for such cached responses is > pointless. For example, it can be usable with 206 responses as well (and this is perfectly allowed by the RFC). > Maybe I should elaborate a bit more on the problem I'm trying to > work around, which is also the underlying reason for the fix I made in > fd283aa92e04. > > Prior to that commit, when nginx cached a 404 Not Found response without > a Last-Modified header, i.e.
with such minimal headers: > > HTTP/1.1 404 Not Found > Date: Wed, 19 Nov 2014 21:12:47 GMT > > it would always try to revalidate it with bogus If-Modified-Since header: > > GET /notfound HTTP/1.1 > Host: www.example.com > If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT > > for which upstream server would usually respond with full 404 Not > Found response (not 304 Not Modified), unless the file appeared on the > upstream server (and would have resulted in 200 OK for unconditional > requests), then it would respond with 304 Not Modified (thanks to the > broken If-Modified-Since test on their side), revalidating 404 Not > Found based on the 200 OK response. > > The problem mostly disappeared after fd283aa92e04, but it can still > happen, e.g. when cached 404 Not Found response with Last-Modified > header will be revalidated (and hence cached forever) by 304 Not > Modified sent instead of 200 OK response with Last-Modified header > older than the one in cached response. > > Hopefully that explains the problem enough... Let me know if you still > have any doubt as to why it is important to change the current > behavior to send conditional requests only for cached 200 OK > responses. I still think this approach is wrong, the behaviour shouldn't depend on the status code. -- Maxim Dounin http://nginx.org/ From vl at nginx.com Thu Nov 20 17:36:01 2014 From: vl at nginx.com (Homutov Vladimir) Date: Thu, 20 Nov 2014 17:36:01 +0000 Subject: [nginx] Syslog: allowed underscore symbol in tag (ticket #667). Message-ID: details: http://hg.nginx.org/nginx/rev/68f64bc17fa4 branches: changeset: 5922:68f64bc17fa4 user: Vladimir Homutov date: Thu Nov 20 20:02:21 2014 +0300 description: Syslog: allowed underscore symbol in tag (ticket #667). 
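The check committed here (diff below) admits [0-9a-z_] after lowercasing, and gets away with a single extra comparison because '_' (0x5F) is the only character to be admitted from the ASCII gap between '9' (0x39) and 'a' (0x61). A stand-alone mirror of the condition, wrapped in a hypothetical helper:

```c
#include <assert.h>
#include <ctype.h>

/* Same predicate as the patched loop in ngx_syslog_parse_args():
 * after lowercasing, a tag character must be a digit, a lowercase
 * letter, or an underscore. */
static int syslog_tag_char_ok(unsigned char ch)
{
    unsigned char c = (unsigned char) tolower(ch);

    if (c < '0' || (c > '9' && c < 'a' && c != '_') || c > 'z') {
        return 0;
    }

    return 1;
}
```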
diffstat: src/core/ngx_syslog.c | 5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diffs (17 lines): diff -r 5004210e8c78 -r 68f64bc17fa4 src/core/ngx_syslog.c --- a/src/core/ngx_syslog.c Thu Nov 20 15:24:42 2014 +0300 +++ b/src/core/ngx_syslog.c Thu Nov 20 20:02:21 2014 +0300 @@ -182,10 +182,11 @@ ngx_syslog_parse_args(ngx_conf_t *cf, ng for (i = 4; i < len; i++) { c = ngx_tolower(p[i]); - if (c < '0' || (c > '9' && c < 'a') || c > 'z') { + if (c < '0' || (c > '9' && c < 'a' && c != '_') || c > 'z') { ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "syslog \"tag\" only allows " - "alphanumeric characters"); + "alphanumeric characters " + "and underscore"); return NGX_CONF_ERROR; } } From pdn at cryptopro.ru Fri Nov 21 05:22:13 2014 From: pdn at cryptopro.ru (Dmitrii Pichulin) Date: Fri, 21 Nov 2014 08:22:13 +0300 Subject: [PATCH] allow to use engine keyform for server private key In-Reply-To: <5450FE27.2070702@cryptopro.ru> References: <20140801165839.GT1849@mdounin.ru> <53E8483C.8020507@cryptopro.ru> <20140812004334.GN1849@mdounin.ru> <5450FE27.2070702@cryptopro.ru> Message-ID: <546ECC05.5020009@cryptopro.ru> Ping. Patch: http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html Example: http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006151.html From mdounin at mdounin.ru Fri Nov 21 13:09:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 21 Nov 2014 16:09:01 +0300 Subject: [PATCH] allow to use engine keyform for server private key In-Reply-To: <546ECC05.5020009@cryptopro.ru> References: <20140801165839.GT1849@mdounin.ru> <53E8483C.8020507@cryptopro.ru> <20140812004334.GN1849@mdounin.ru> <5450FE27.2070702@cryptopro.ru> <546ECC05.5020009@cryptopro.ru> Message-ID: <20141121130901.GE73155@mdounin.ru> Hello! On Fri, Nov 21, 2014 at 08:22:13AM +0300, Dmitrii Pichulin wrote: > Ping. 
> > Patch: > http://mailman.nginx.org/pipermail/nginx-devel/2014-August/005740.html > > Example: > http://mailman.nginx.org/pipermail/nginx-devel/2014-October/006151.html Thanks again and sorry, still no time. I hope I'll be able to look into it in the next week or so. -- Maxim Dounin http://nginx.org/ From zellster at gmail.com Fri Nov 21 19:27:54 2014 From: zellster at gmail.com (Adam Zell) Date: Fri, 21 Nov 2014 11:27:54 -0800 Subject: Raptor HTTP server micro-optimizations Message-ID: Interesting collection of C/C++ performance tweaks: http://www.rubyraptor.org/how-we-made-raptor-up-to-4x-faster-than-unicorn-and-up-to-2x-faster-than-puma-torquebox/ http://www.rubyraptor.org/pointer-tagging-linked-string-hash-tables-turbocaching-and-other-raptor-optimizations/ -- Adam zellster at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Nov 21 19:52:09 2014 From: vbart at nginx.com (Valentin Bartenev) Date: Fri, 21 Nov 2014 19:52:09 +0000 Subject: [nginx] SPDY: push pending data while closing a stream as with k... Message-ID: details: http://hg.nginx.org/nginx/rev/2c10db908b8c branches: changeset: 5923:2c10db908b8c user: Valentin Bartenev date: Fri Nov 21 22:51:49 2014 +0300 description: SPDY: push pending data while closing a stream as with keepalive. This helps to avoid delays in sending the last chunk of data because of bad interaction between Nagle's algorithm on nginx side and delayed ACK on the client side. Delays could also be caused by TCP_CORK/TCP_NOPUSH if SPDY was working without SSL and sendfile() was used. 
diffstat: src/http/ngx_http_spdy.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 51 insertions(+), 1 deletions(-) diffs (70 lines): diff -r 68f64bc17fa4 -r 2c10db908b8c src/http/ngx_http_spdy.c --- a/src/http/ngx_http_spdy.c Thu Nov 20 20:02:21 2014 +0300 +++ b/src/http/ngx_http_spdy.c Fri Nov 21 22:51:49 2014 +0300 @@ -3317,8 +3317,10 @@ ngx_http_spdy_close_stream_handler(ngx_e void ngx_http_spdy_close_stream(ngx_http_spdy_stream_t *stream, ngx_int_t rc) { + int tcp_nodelay; ngx_event_t *ev; - ngx_connection_t *fc; + ngx_connection_t *c, *fc; + ngx_http_core_loc_conf_t *clcf; ngx_http_spdy_stream_t **index, *s; ngx_http_spdy_srv_conf_t *sscf; ngx_http_spdy_connection_t *sc; @@ -3344,6 +3346,54 @@ ngx_http_spdy_close_stream(ngx_http_spdy { sc->connection->error = 1; } + + } else { + c = sc->connection; + + if (c->tcp_nopush == NGX_TCP_NOPUSH_SET) { + if (ngx_tcp_push(c->fd) == -1) { + ngx_connection_error(c, ngx_socket_errno, + ngx_tcp_push_n " failed"); + c->error = 1; + tcp_nodelay = 0; + + } else { + c->tcp_nopush = NGX_TCP_NOPUSH_UNSET; + tcp_nodelay = ngx_tcp_nodelay_and_tcp_nopush ? 
1 : 0; + } + + } else { + tcp_nodelay = 1; + } + + clcf = ngx_http_get_module_loc_conf(stream->request, + ngx_http_core_module); + + if (tcp_nodelay + && clcf->tcp_nodelay + && c->tcp_nodelay == NGX_TCP_NODELAY_UNSET) + { + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "tcp_nodelay"); + + if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY, + (const void *) &tcp_nodelay, sizeof(int)) + == -1) + { +#if (NGX_SOLARIS) + /* Solaris returns EINVAL if a socket has been shut down */ + c->log_error = NGX_ERROR_IGNORE_EINVAL; +#endif + + ngx_connection_error(c, ngx_socket_errno, + "setsockopt(TCP_NODELAY) failed"); + + c->log_error = NGX_ERROR_INFO; + c->error = 1; + + } else { + c->tcp_nodelay = NGX_TCP_NODELAY_SET; + } + } } if (sc->stream == stream) { From pkopensrc at gmail.com Sat Nov 22 08:14:06 2014 From: pkopensrc at gmail.com (punit kandoi) Date: Sat, 22 Nov 2014 13:44:06 +0530 Subject: Proxy : Why two places request structure is stored in upstream Message-ID: Hi, I was reading the well-known tutorial by Emiller, the Nginx guide to module development, and was going through ngx_http_proxy_handler() as described there. I saw that the request structure is stored in two places in the upstream: u->pipe->input_filter = ngx_http_proxy_copy_filter; u->pipe->input_ctx = r; <<------ u->input_filter_init = ngx_http_proxy_input_filter_init; u->input_filter = ngx_http_proxy_non_buffered_copy_filter; u->input_filter_ctx = r; <<------ What's the difference between the two variables? If we want to take the request structure from the upstream at a later stage, before selecting a server, which value should be used? Awaiting your reply. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From piotr at cloudflare.com Sun Nov 23 11:40:43 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Sun, 23 Nov 2014 03:40:43 -0800 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: <20141120160259.GG53423@mdounin.ru> References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> <20141120160259.GG53423@mdounin.ru> Message-ID: Hey Maxim, > Sure. We do adhere RFC2616 here. The problem is that RFC7232 is > different, and there are no known reasons why it should. RFC7232 obsoletes RFC2616, so it should be pretty clear which one to respect in places they differ. > The same applies to RFC2616, but it mandates different behaviour. > So what's the problem with checking both date and ETag? Checking both validators can easily result in false-negatives, i.e. it is possible to send legitimate conditional request that would pass strong entity tags validation, but fail weak date validation, because clients can send requests with "If-Modified-Since: date of the response" that fails on the web servers using stricter than required "exact" logic (i.e. nginx). Checking only the strongest validator prevents this from happening. Best regards, Piotr Sikora From piotr at cloudflare.com Sun Nov 23 11:45:42 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Sun, 23 Nov 2014 03:45:42 -0800 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: <20141120163912.GI53423@mdounin.ru> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> Message-ID: Hey Maxim, > For example, it can be usable with 206 responses as well (and this > is perfectly allowed by the RFC). 206 responses won't be cached by nginx, so that's kind of a moot point. 
> I still think this approach is wrong, the behaviour shouldn't > depend on the status code. Why not? How else do you want to fix this problem? I really don't understand why you disagree, because it's pretty clear to me that it MUST depend on the status code... What more convincing do you need? Best regards, Piotr Sikora From steven.hartland at multiplay.co.uk Sun Nov 23 12:34:07 2014 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Sun, 23 Nov 2014 12:34:07 +0000 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> Message-ID: <5471D43F.1050702@multiplay.co.uk> We use 206 cached responses. On 23/11/2014 11:45, Piotr Sikora wrote: > Hey Maxim, > >> For example, it can be usable with 206 responses as well (and this >> is perfectly allowed by the RFC). > 206 responses won't be cached by nginx, so that's kind of a moot point. > >> I still think this approach is wrong, the behaviour shouldn't >> depend on the status code. > Why not? How else do you want to fix this problem? > > I really don't understand why you disagree, because it's pretty > clear to me that it MUST depend on the status code... What more > convincing do you need?
> > Best regards, > Piotr Sikora > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From s.martin49 at gmail.com Sun Nov 23 17:07:00 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Sun, 23 Nov 2014 18:07:00 +0100 Subject: [PATCH] linux: add check for sysctl syscall presence Message-ID: <3a010d195a5a4249da55.1416762420@bobook> # HG changeset patch # User "Samuel Martin" # Date 1416759425 -3600 # Sun Nov 23 17:17:05 2014 +0100 # Node ID 3a010d195a5a4249da5552c5de7643f898c9e507 # Parent 1be88123e98c8b0e78602eeb3a8c3eb3444c15f3 linux: add check for sysctl syscall presence And also disable the call to the sysctl function when the sysctl syscall is disabled in the kernel configuration. This fixes: http://autobuild.buildroot.org/results/730/730105fc0a89b381b3b29192d07f28ef1f13cbb3/build-end.log Signed-off-by: Samuel Martin diff -r 1be88123e98c -r 3a010d195a5a auto/os/linux --- a/auto/os/linux Sat Aug 02 00:30:55 2014 +0200 +++ b/auto/os/linux Sun Nov 23 17:17:05 2014 +0100 @@ -44,6 +44,20 @@ have=NGX_HAVE_POSIX_FADVISE . auto/nohave fi +# sysctl syscall + +ngx_feature="sysctl_syscall" +ngx_feature_name="NGX_HAVE_SYSCTL_SYSCALL" +ngx_feature_run=no +ngx_feature_run_force_result="$ngx_force_have_sysctl_syscall" +ngx_feature_incs="#include " +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="int name[2] = { CTL_KERN, KERN_RTSIGMAX }; + int old; + sysctl(name, 2, &old, sizeof(old), NULL, 0);" +.
auto/feature + # epoll, EPOLLET version ngx_feature="epoll" diff -r 1be88123e98c -r 3a010d195a5a src/event/modules/ngx_rtsig_module.c --- a/src/event/modules/ngx_rtsig_module.c Sat Aug 02 00:30:55 2014 +0200 +++ b/src/event/modules/ngx_rtsig_module.c Sun Nov 23 17:17:05 2014 +0100 @@ -621,6 +621,7 @@ if (tested >= rtscf->overflow_test) { +#if (NGX_HAVE_SYSCTL_SYSCALL) if (ngx_linux_rtsig_max) { /* @@ -668,7 +669,9 @@ } } else { - +#else + { +#endif /* * Linux has not KERN_RTSIGMAX since 2.6.6-mm2 * so drain the rt signal queue unconditionally diff -r 1be88123e98c -r 3a010d195a5a src/os/unix/ngx_linux_config.h --- a/src/os/unix/ngx_linux_config.h Sat Aug 02 00:30:55 2014 +0200 +++ b/src/os/unix/ngx_linux_config.h Sun Nov 23 17:17:05 2014 +0100 @@ -84,8 +84,10 @@ #if (NGX_HAVE_RTSIG) #include +#if (NGX_HAVE_SYSCTL_SYSCALL) #include #endif +#endif #if (NGX_HAVE_EPOLL) diff -r 1be88123e98c -r 3a010d195a5a src/os/unix/ngx_linux_init.c --- a/src/os/unix/ngx_linux_init.c Sat Aug 02 00:30:55 2014 +0200 +++ b/src/os/unix/ngx_linux_init.c Sun Nov 23 17:17:05 2014 +0100 @@ -46,7 +46,7 @@ (void) ngx_cpystrn(ngx_linux_kern_osrelease, (u_char *) u.release, sizeof(ngx_linux_kern_osrelease)); -#if (NGX_HAVE_RTSIG) +#if (NGX_HAVE_RTSIG) && (NGX_HAVE_SYSCTL_SYSCALL) { int name[2]; size_t len; From mdounin at mdounin.ru Mon Nov 24 13:00:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Nov 2014 16:00:14 +0300 Subject: Proxy : Why two places request structure is stored in upstream In-Reply-To: References: Message-ID: <20141124130014.GC31620@mdounin.ru> Hello! On Sat, Nov 22, 2014 at 01:44:06PM +0530, punit kandoi wrote: > Hi, > > I was reading world famous tutorial by Emiller. Nginx guide to module > development. > > I was going to ngx_http_proxy_handler() as described in tutorial. > > I seen request structure is stored in two places in upstream. 
> > u->pipe->input_filter = ngx_http_proxy_copy_filter; > u->pipe->input_ctx = r; <<------ > > u->input_filter_init = ngx_http_proxy_input_filter_init; > u->input_filter = ngx_http_proxy_non_buffered_copy_filter; > u->input_filter_ctx = r; <<------ > > Whats the difference between two variables? The "u->pipe->input_ctx" context is for ngx_event_pipe() code, to be used by buffered input filters when needed. The "u->input_filter_ctx" context is for non-buffered filters. > If we want to take request structure from upstream in later stage before > selecting server which value to be used? Both of the above are wrong if you aren't writing an input filter. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 24 13:07:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Nov 2014 16:07:53 +0300 Subject: [PATCH] linux: add check for sysctl sycall presence In-Reply-To: <3a010d195a5a4249da55.1416762420@bobook> References: <3a010d195a5a4249da55.1416762420@bobook> Message-ID: <20141124130753.GD31620@mdounin.ru> Hello! On Sun, Nov 23, 2014 at 06:07:00PM +0100, Samuel Martin wrote: > # HG changeset patch > # User "Samuel Martin" > # Date 1416759425 -3600 > # Sun Nov 23 17:17:05 2014 +0100 > # Node ID 3a010d195a5a4249da5552c5de7643f898c9e507 > # Parent 1be88123e98c8b0e78602eeb3a8c3eb3444c15f3 > linux: add check for sysctl sycall presence > > And also disable call to the sysctl function when the sysctl syscall > is disabled in the kernel configuration. > > This fixes: > http://autobuild.buildroot.org/results/730/730105fc0a89b381b3b29192d07f28ef1f13cbb3/build-end.log Just avoid --with-rtsig-module on such hosts instead. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 24 15:03:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Nov 2014 18:03:59 +0300 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> Message-ID: <20141124150359.GG31620@mdounin.ru> Hello! On Sun, Nov 23, 2014 at 03:45:42AM -0800, Piotr Sikora wrote: > Hey Maxim, > > > For example, it can be usable with 206 responses as well (and this > > is perfectly allowed by the RFC). > > 206 responses won't be cached by nginx, so that's kind of a moot point. That's up to a configuration. > > I still think this approach is wrong, the behaviour shouldn't > > depend on the status code. > > Why not? How else do you want to fix this problem? Which problem? You are trying to convince me that that conditional requests shouldn't be used to revalidate responses with non-200 status code because it may not work with some upstream servers. Sure, this can happen. Moreover, it can happen with responses with 200 status code, too. Two trivial solutions include: - fix an upstream server; - disable use of conditional requests, it's off by default. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Nov 24 15:29:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Nov 2014 18:29:17 +0300 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> <20141120160259.GG53423@mdounin.ru> Message-ID: <20141124152917.GH31620@mdounin.ru> Hello! On Sun, Nov 23, 2014 at 03:40:43AM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Sure. We do adhere RFC2616 here. 
The problem is that RFC7232 is > > different, and there are no known reasons why it should. > > RFC7232 obsoletes RFC2616, so it should be pretty clear which one to > respect in places they differ. Or which one to ignore, if something in it looks wrong/suspicious. I'm just trying to say that blindly following an RFC isn't a good rationale for a commit. > > The same applies to RFC2616, but it mandates different behaviour. > > So what's the problem with checking both date and ETag? > > Checking both validators can easily result in false-negatives, i.e. it > is possible to send legitimate conditional request that would pass > strong entity tags validation, but fail weak date validation, because > clients can send requests with "If-Modified-Since: date of the > response" that fails on the web servers using stricter than required > "exact" logic (i.e. nginx). > > Checking only the strongest validator prevents this from happening. That's the only explanation I can think of, too, but it doesn't justify the "MUST" clause used in the RFC7232. Nothing really bad can happen if a server adheres to RFC2616 mandated behaviour and checks both validators. At most, the behaviour will be suboptimal. And, AFAIK, all clients do behave in a way compatible with RFC2616 and don't try to send fake dates in If-Modified-Since. So the question remains. Or, rather, two questions: - Why the change was done in RFC7232 compared to RFC2616. - Do we really need to change anything in our code. 
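Setting Maxim's two questions aside, the behavioural difference between the two specifications is small and easy to pin down. A schematic of the two policies (boolean inputs stand in for real header parsing; this is not nginx's implementation):

```c
#include <assert.h>

/* RFC 7232 policy: when If-None-Match is present, If-Modified-Since
 * MUST be ignored; the entity tag alone decides.
 * Returns 1 if a 304 may be sent. */
static int not_modified_rfc7232(int has_inm, int etag_match,
                                int has_ims, int date_match)
{
    if (has_inm) {
        return etag_match;
    }

    return has_ims && date_match;
}

/* The check-everything reading of RFC 2616 (current nginx behaviour):
 * every validator that is present has to pass. */
static int not_modified_rfc2616(int has_inm, int etag_match,
                                int has_ims, int date_match)
{
    if (has_inm && !etag_match) {
        return 0;
    }

    if (has_ims && !date_match) {
        return 0;
    }

    return has_inm || has_ims;
}
```

The divergent case is the one Piotr describes: a matching ETag combined with a non-matching date yields 304 under RFC 7232 but a full 200 response under the check-everything reading.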
-- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Mon Nov 24 21:57:32 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 24 Nov 2014 13:57:32 -0800 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: <20141124152917.GH31620@mdounin.ru> References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> <20141120160259.GG53423@mdounin.ru> <20141124152917.GH31620@mdounin.ru> Message-ID: Hey Maxim, > Or which one to ignore, if something in it looks wrong/suspicious. > I'm just trying to say that blindly following an RFC isn't a good > rationale for a commit. Yeah, but not following and/or doing exactly the opposite (like in this case) without good reason just leads to edge cases and interoperability issues. > That's the only explanation I can think of, too, but it doesn't > justify the "MUST" clause used in the RFC7232. Nothing really bad > can happen if a server adheres to RFC2616 mandated behaviour and > checks both validators. At most, the behaviour will be suboptimal. Agreed. > And, AFAIK, all clients do behave in a way compatible with RFC2616 > and don't try to send fake dates in If-Modified-Since. The thing is that neither RFC says what date should be used in "If-Modified-Since" header, both only advise to use the one from "Last-Modified" for best interoperability. However, the semantics of it suggest that any date is fine, really, and a lot of web servers implement this quite literally (i.e. using nginx's "before" logic). > So the question remains. Or, rather, two questions: > > - Why the change was done in RFC7232 compared to RFC2616. > > - Do we really need to change anything in our code. I don't think you have a good enough reason to justify not adhering to RFC7232. 
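The "exact" and "before" logic mentioned in this exchange corresponds to nginx's if_modified_since directive, which accepts exact (the default) or before. The two date-matching modes side by side, as a sketch over plain time_t values:

```c
#include <assert.h>
#include <time.h>

/* "exact": the If-Modified-Since value must equal the entity's
 * Last-Modified time. */
static int ims_match_exact(time_t ims, time_t last_modified)
{
    return ims == last_modified;
}

/* "before": any If-Modified-Since value not earlier than the entity's
 * Last-Modified time matches. */
static int ims_match_before(time_t ims, time_t last_modified)
{
    return last_modified <= ims;
}
```

Under "before", a client echoing the response's Date header instead of its Last-Modified header still gets a 304; under "exact" it gets a full response, which is the interoperability gap Piotr is pointing at.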
Best regards, Piotr Sikora From s.martin49 at gmail.com Mon Nov 24 22:20:24 2014 From: s.martin49 at gmail.com (Samuel Martin) Date: Mon, 24 Nov 2014 23:20:24 +0100 Subject: [PATCH] linux: add check for sysctl syscall presence In-Reply-To: <20141124130753.GD31620@mdounin.ru> References: <3a010d195a5a4249da55.1416762420@bobook> <20141124130753.GD31620@mdounin.ru> Message-ID: Hello, On Mon, Nov 24, 2014 at 2:07 PM, Maxim Dounin wrote: > Hello! > > On Sun, Nov 23, 2014 at 06:07:00PM +0100, Samuel Martin wrote: > >> # HG changeset patch >> # User "Samuel Martin" >> # Date 1416759425 -3600 >> # Sun Nov 23 17:17:05 2014 +0100 >> # Node ID 3a010d195a5a4249da5552c5de7643f898c9e507 >> # Parent 1be88123e98c8b0e78602eeb3a8c3eb3444c15f3 >> linux: add check for sysctl syscall presence >> >> And also disable call to the sysctl function when the sysctl syscall >> is disabled in the kernel configuration. >> Actually, the sysctl syscall is deprecated, and some new architectures choose not to support deprecated interfaces. So, this may be the first architecture not supporting this syscall, but not the last one... The tricky thing is that you cannot know whether the target provides this syscall before trying to link some code using the sysctl function. >> This fixes: >> http://autobuild.buildroot.org/results/730/730105fc0a89b381b3b29192d07f28ef1f13cbb3/build-end.log > > Just avoid --with-rtsig-module on such hosts instead. You mean not setting it on the configure command line, or conditionally setting NGX_EVENT_RTSIG based on the sysctl check result? It is not really convenient to have a build failing during the compile/link step, whereas the check can be done during configuration, which can then automatically disable the unsupported feature (silently or with some warning) or loudly bail out.
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel Regards, -- Samuel From piotr at cloudflare.com Mon Nov 24 22:40:41 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 24 Nov 2014 14:40:41 -0800 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: <20141124150359.GG31620@mdounin.ru> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> <20141124150359.GG31620@mdounin.ru> Message-ID: Hey Maxim, > That's up to a configuration. You're right... It seems like a waste of disk space to me, but nginx can be forced to do that. This still leaves us with only 200 and 206. > Which problem? You are trying to convince me that that > conditional requests shouldn't be used to revalidate responses > with non-200 status code because it may not work with some > upstream servers. Sure, this can happen. Moreover, it can happen > with responses with 200 status code, too. No, I'm saying that it cannot work. Maybe I shouldn't have tried to explain the reasoning behind fd283aa92e04, because it looks that it just added confusion, so let's start again. 1. client #1 requests "/logo.png" from website behind nginx, 2. nginx doesn't have "/logo.png" in cache, so it sends request upstream, 3. upstream server rate-limits nginx and replies with 503: HTTP/1.1 503 Service Temporarily Unavailable Last-Modified: Mon, 24 Nov 2014 22:00:00 GMT 4. 503 gets force-cached at nginx (due to configuration), 5. client #2 requests "/logo.png" from website behind nginx, 6. nginx has "/logo.png" in cache, but it's expired, so it tries to revalidate it: GET /logo.png HTTP/1.1 If-Modified-Since: Mon, 24 Nov 2014 22:00:00 GMT 7. 
upstream replies with 304, accidentally revalidating 503: HTTP/1.1 304 Not Modified even though, the 304 is generated from: HTTP/1.1 200 OK Last-Modified: Mon, 10 Nov 2014 00:00:00 GMT not from the rate-limiting. This scenario describes perfectly behaving upstream server, the only issue here is that nginx tries to revalidate response that cannot be revalidated and applies 304 Not Modified response to it. > Two trivial solutions include: > > - fix an upstream server; There is no issue with the upstream server, only with nginx revalidating non-200 status codes. > - disable use of conditional requests, it's off by default. So you're saying that just because it's off by default, it's fine for it to be broken? Best regards, Piotr Sikora From piotr at cloudflare.com Tue Nov 25 03:27:53 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 24 Nov 2014 19:27:53 -0800 Subject: [PATCH 1 of 2] Cache: don't update cache if revalidated response is not cacheable Message-ID: <01f07fc7932b64f261c9.1416886073@piotrs-macbook-pro.local> # HG changeset patch # User Piotr Sikora # Date 1416886025 28800 # Mon Nov 24 19:27:05 2014 -0800 # Node ID 01f07fc7932b64f261c9e6cb778c87279fabcde2 # Parent 2c10db908b8c4a9c0532c58830275d5ad84ae686 Cache: don't update cache if revalidated response is not cacheable. Signed-off-by: Piotr Sikora diff -r 2c10db908b8c -r 01f07fc7932b src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Nov 21 22:51:49 2014 +0300 +++ b/src/http/ngx_http_upstream.c Mon Nov 24 19:27:05 2014 -0800 @@ -2002,14 +2002,13 @@ ngx_http_upstream_test_next(ngx_http_req && u->cache_status == NGX_HTTP_CACHE_EXPIRED && u->conf->cache_revalidate) { - time_t now, valid; + time_t valid; ngx_int_t rc; ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream not modified"); - now = ngx_time(); - valid = r->cache->valid_sec; + valid = u->cacheable ? 
r->cache->valid_sec : 0; rc = u->reinit_request(r); @@ -2021,25 +2020,31 @@ ngx_http_upstream_test_next(ngx_http_req u->cache_status = NGX_HTTP_CACHE_REVALIDATED; rc = ngx_http_upstream_cache_send(r, u); - if (valid == 0) { - valid = r->cache->valid_sec; - } - - if (valid == 0) { - valid = ngx_http_file_cache_valid(u->conf->cache_valid, - u->headers_in.status_n); + if (u->cacheable || valid) { + time_t now; + + now = ngx_time(); + + if (valid == 0) { + valid = r->cache->valid_sec; + } + + if (valid == 0) { + valid = ngx_http_file_cache_valid(u->conf->cache_valid, + u->headers_in.status_n); + if (valid) { + valid = now + valid; + } + } + if (valid) { - valid = now + valid; + r->cache->valid_sec = valid; + r->cache->date = now; + + ngx_http_file_cache_update_header(r); } } - if (valid) { - r->cache->valid_sec = valid; - r->cache->date = now; - - ngx_http_file_cache_update_header(r); - } - ngx_http_upstream_finalize_request(r, u, rc); return NGX_OK; } From piotr at cloudflare.com Tue Nov 25 03:27:54 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Mon, 24 Nov 2014 19:27:54 -0800 Subject: [PATCH 2 of 2] Cache: test proxy_no_cache predicates before updating cache In-Reply-To: <01f07fc7932b64f261c9.1416886073@piotrs-macbook-pro.local> References: <01f07fc7932b64f261c9.1416886073@piotrs-macbook-pro.local> Message-ID: # HG changeset patch # User Piotr Sikora # Date 1416886025 28800 # Mon Nov 24 19:27:05 2014 -0800 # Node ID ab6cff701ca23bee8f24e9efcdbcef2ca938b68f # Parent 01f07fc7932b64f261c9e6cb778c87279fabcde2 Cache: test proxy_no_cache predicates before updating cache. 
Signed-off-by: Piotr Sikora diff -r 01f07fc7932b -r ab6cff701ca2 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Mon Nov 24 19:27:05 2014 -0800 +++ b/src/http/ngx_http_upstream.c Mon Nov 24 19:27:05 2014 -0800 @@ -2008,6 +2008,20 @@ ngx_http_upstream_test_next(ngx_http_req ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http upstream not modified"); + switch (ngx_http_test_predicates(r, u->conf->no_cache)) { + + case NGX_ERROR: + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return NGX_OK; + + case NGX_DECLINED: + u->cacheable = 0; + break; + + default: /* NGX_OK */ + break; + } + valid = u->cacheable ? r->cache->valid_sec : 0; rc = u->reinit_request(r); From mdounin at mdounin.ru Tue Nov 25 12:11:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Nov 2014 15:11:15 +0300 Subject: [PATCH] linux: add check for sysctl sycall presence In-Reply-To: References: <3a010d195a5a4249da55.1416762420@bobook> <20141124130753.GD31620@mdounin.ru> Message-ID: <20141125121115.GK31620@mdounin.ru> Hello! On Mon, Nov 24, 2014 at 11:20:24PM +0100, Samuel Martin wrote: > On Mon, Nov 24, 2014 at 2:07 PM, Maxim Dounin wrote: [...] > > Just avoid --with-rtsig-module on such hosts instead. > > You mean not setting it on the configure command line or conditionally > setting NGX_EVENT_RTSIG wrt the sysctl check result? Just not specifying it on the configure command line will be enough. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Nov 25 13:30:39 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 25 Nov 2014 16:30:39 +0300 Subject: [PATCH] linux: add check for sysctl sycall presence In-Reply-To: References: <3a010d195a5a4249da55.1416762420@bobook> <20141124130753.GD31620@mdounin.ru> Message-ID: <5716210.WfVkJygdyp@vbart-workstation> On Monday 24 November 2014 23:20:24 Samuel Martin wrote: > Hello, > > On Mon, Nov 24, 2014 at 2:07 PM, Maxim Dounin wrote: > > Hello! 
> > > > On Sun, Nov 23, 2014 at 06:07:00PM +0100, Samuel Martin wrote: > > > >> # HG changeset patch > >> # User "Samuel Martin" > >> # Date 1416759425 -3600 > >> # Sun Nov 23 17:17:05 2014 +0100 > >> # Node ID 3a010d195a5a4249da5552c5de7643f898c9e507 > >> # Parent 1be88123e98c8b0e78602eeb3a8c3eb3444c15f3 > >> linux: add check for sysctl sycall presence > >> > >> And also disable call to the sysctl function when the sysctl syscall > >> is disabled in the kernel configuration. > >> > > Actually sysctl syscall is deprecated and some new architectures > choose to not support deprecated stuff. So, it may be the first > architecture not supporting this syscall, but not the last one... > [..] I believe the rtsig module in nginx is deprecated as well. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Nov 25 13:43:44 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Nov 2014 16:43:44 +0300 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> <20141124150359.GG31620@mdounin.ru> Message-ID: <20141125134344.GO31620@mdounin.ru> Hello! On Mon, Nov 24, 2014 at 02:40:41PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > That's up to a configuration. > > You're right... It seems like a waste of disk space to me, but nginx > can be forced to do that. > > This still leaves us with only 200 and 206. The question is: how many other status codes should be considered? I _think_ that 200 and 206 is good enough list, but I'm not sure. > > Which problem? You are trying to convince me that that > > conditional requests shouldn't be used to revalidate responses > > with non-200 status code because it may not work with some > > upstream servers. Sure, this can happen. 
Moreover, it can happen > > with responses with 200 status code, too. > > No, I'm saying that it cannot work. > > Maybe I shouldn't have tried to explain the reasoning behind > fd283aa92e04, because it looks that it just added confusion, so let's > start again. > > 1. client #1 requests "/logo.png" from website behind nginx, > 2. nginx doesn't have "/logo.png" in cache, so it sends request upstream, > 3. upstream server rate-limits nginx and replies with 503: > > HTTP/1.1 503 Service Temporarily Unavailable > Last-Modified: Mon, 24 Nov 2014 22:00:00 GMT Here is the first problem in the upstream server: it returns an error with the Last-Modified. This is what nginx avoids for a reason. > 4. 503 gets force-cached at nginx (due to configuration), Here you intentionally cache a response which is not cacheable as per HTTP specification. > 5. client #2 requests "/logo.png" from website behind nginx, > 6. nginx has "/logo.png" in cache, but it's expired, so it tries to > revalidate it: > > GET /logo.png HTTP/1.1 > If-Modified-Since: Mon, 24 Nov 2014 22:00:00 GMT > > 7. upstream replies with 304, accidentally revalidating 503: > > HTTP/1.1 304 Not Modified > > even though, the 304 is generated from: > > HTTP/1.1 200 OK > Last-Modified: Mon, 10 Nov 2014 00:00:00 GMT > > not from the rate-limiting. Here is another problem in the upstream server: it returns 304 incorrectly. Strict Last-Modified matching is here for a reason and, among other benefits, allows to mitigate such problems. > This scenario describes perfectly behaving upstream server, the only > issue here is that nginx tries to revalidate response that cannot be > revalidated and applies 304 Not Modified response to it. This is not true. The upstream server intentionally does at least two perfectly stupid things, and additionally you force nginx to cache an uncacheable response. All of this combined causes the problem, not the fact that nginx uses conditional requests.
> > Two trivial solutions include: > > > > - fix an upstream server; > > There is no issue with the upstream server, only with nginx > revalidating non-200 status codes. This is not true, see above. > > - disable use of conditional requests, it's off by default. > > So you're saying that just because it's off by default, it's fine for > it to be broken? It's not broken. Trying to convince me that it's broken will not work, for sure. You may have better luck convincing me that the change will help to mitigate known/seen-in-the-wild problems without affecting other uses. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Nov 25 13:51:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 Nov 2014 16:51:53 +0300 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> <20141120160259.GG53423@mdounin.ru> <20141124152917.GH31620@mdounin.ru> Message-ID: <20141125135153.GP31620@mdounin.ru> Hello! On Mon, Nov 24, 2014 at 01:57:32PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Or which one to ignore, if something in it looks wrong/suspicious. > > I'm just trying to say that blindly following an RFC isn't a good > > rationale for a commit. > > Yeah, but not following and/or doing exactly the opposite (like in > this case) without good reason just leads to edge cases and > interoperability issues. That's exactly the reason why I don't want to change this without a good reason (and an explanation why the change was done in RFC7232). The current behaviour is the one required by RFC2616. The difference from the behaviour required by RFC7232 can at most result in suboptimal operation; nothing really bad can happen.
-- Maxim Dounin http://nginx.org/ From steven.hartland at multiplay.co.uk Tue Nov 25 14:22:13 2014 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 25 Nov 2014 14:22:13 +0000 Subject: [PATCH] Allow Partial Content responses to satisfy Range requests Message-ID: <0c3c06fabfc3b1c57710.1416925333@blade26.multiplay.co.uk> # HG changeset patch # User Steven Hartland # Date 1416925134 0 # Tue Nov 25 14:18:54 2014 +0000 # Node ID 0c3c06fabfc3b1c57710c0cced4837c10e3e9bbb # Parent 7d7eac6e31df1d962a644f8093c1fbb8f91620ce Allow Partial Content responses to satisfy Range requests. diff -r 7d7eac6e31df -r 0c3c06fabfc3 src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Tue Nov 04 19:56:23 2014 +0900 +++ b/src/http/modules/ngx_http_range_filter_module.c Tue Nov 25 14:18:54 2014 +0000 @@ -54,6 +54,7 @@ typedef struct { off_t offset; + off_t content_length; ngx_str_t boundary_header; ngx_array_t ranges; } ngx_http_range_filter_ctx_t; @@ -65,7 +66,8 @@ ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx); -static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); +static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx); static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, @@ -76,6 +78,9 @@ static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); +static ngx_int_t ngx_http_content_range_parse(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx); + static ngx_http_module_t ngx_http_range_header_filter_module_ctx = { NULL, /* preconfiguration */ @@ -153,8 +158,8 @@ ngx_http_range_filter_ctx_t *ctx; if (r->http_version < NGX_HTTP_VERSION_10 - || 
r->headers_out.status != NGX_HTTP_OK - || r != r->main + || (r->headers_out.status != NGX_HTTP_OK + && r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) || r->headers_out.content_length_n == -1 || !r->allow_ranges) { @@ -230,26 +235,31 @@ ranges = r->single_range ? 1 : clcf->max_ranges; - switch (ngx_http_range_parse(r, ctx, ranges)) { + switch (ngx_http_content_range_parse(r, ctx)) { + case NGX_OK: + switch (ngx_http_range_parse(r, ctx, ranges)) { + case NGX_OK: + ngx_http_set_ctx(r, ctx, ngx_http_range_body_filter_module); - case NGX_OK: - ngx_http_set_ctx(r, ctx, ngx_http_range_body_filter_module); + r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; + r->headers_out.status_line.len = 0; - r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; - r->headers_out.status_line.len = 0; + if (ctx->ranges.nelts == 1) { + return ngx_http_range_singlepart_header(r, ctx); + } - if (ctx->ranges.nelts == 1) { - return ngx_http_range_singlepart_header(r, ctx); + return ngx_http_range_multipart_header(r, ctx); + + case NGX_HTTP_RANGE_NOT_SATISFIABLE: + return ngx_http_range_not_satisfiable(r); + + case NGX_ERROR: + return NGX_ERROR; + + default: /* NGX_DECLINED */ + break; } - - return ngx_http_range_multipart_header(r, ctx); - - case NGX_HTTP_RANGE_NOT_SATISFIABLE: - return ngx_http_range_not_satisfiable(r); - - case NGX_ERROR: - return NGX_ERROR; - + break; default: /* NGX_DECLINED */ break; } @@ -274,13 +284,12 @@ ngx_uint_t ranges) { u_char *p; - off_t start, end, size, content_length; + off_t start, end, size; ngx_uint_t suffix; ngx_http_range_t *range; p = r->headers_in.range->value.data + 6; size = 0; - content_length = r->headers_out.content_length_n; for ( ;; ) { start = 0; @@ -298,6 +307,10 @@ start = start * 10 + *p++ - '0'; } + if (start < ctx->offset) { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + while (*p == ' ') { p++; } if (*p++ != '-') { @@ -307,7 +320,7 @@ while (*p == ' ') { p++; } if (*p == ',' || *p == '\0') { - end = content_length; + end = 
ctx->content_length; goto found; } @@ -331,12 +344,12 @@ } if (suffix) { - start = content_length - end; - end = content_length - 1; + start = ctx->content_length - end; + end = ctx->content_length - 1; } - if (end >= content_length) { - end = content_length; + if (end >= ctx->content_length) { + end = ctx->content_length; } else { end++; @@ -369,7 +382,7 @@ return NGX_HTTP_RANGE_NOT_SATISFIABLE; } - if (size > content_length) { + if (size > ctx->content_length) { return NGX_DECLINED; } @@ -384,16 +397,18 @@ ngx_table_elt_t *content_range; ngx_http_range_t *range; - content_range = ngx_list_push(&r->headers_out.headers); - if (content_range == NULL) { - return NGX_ERROR; + if (r->headers_out.content_range == NULL) { + content_range = ngx_list_push(&r->headers_out.headers); + if (content_range == NULL) { + return NGX_ERROR; + } + r->headers_out.content_range = content_range; + content_range->hash = 1; + ngx_str_set(&content_range->key, "Content-Range"); + } else { + content_range = r->headers_out.content_range; } - r->headers_out.content_range = content_range; - - content_range->hash = 1; - ngx_str_set(&content_range->key, "Content-Range"); - content_range->value.data = ngx_pnalloc(r->pool, sizeof("bytes -/") - 1 + 3 * NGX_OFF_T_LEN); if (content_range->value.data == NULL) { @@ -407,7 +422,7 @@ content_range->value.len = ngx_sprintf(content_range->value.data, "bytes %O-%O/%O", range->start, range->end - 1, - r->headers_out.content_length_n) + ctx->content_length) - content_range->value.data; r->headers_out.content_length_n = range->end - range->start; @@ -546,22 +561,25 @@ static ngx_int_t -ngx_http_range_not_satisfiable(ngx_http_request_t *r) +ngx_http_range_not_satisfiable(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx) { ngx_table_elt_t *content_range; r->headers_out.status = NGX_HTTP_RANGE_NOT_SATISFIABLE; - content_range = ngx_list_push(&r->headers_out.headers); - if (content_range == NULL) { - return NGX_ERROR; + if (r->headers_out.content_range == 
NULL) { + content_range = ngx_list_push(&r->headers_out.headers); + if (content_range == NULL) { + return NGX_ERROR; + } + r->headers_out.content_range = content_range; + content_range->hash = 1; + ngx_str_set(&content_range->key, "Content-Range"); + } else { + content_range = r->headers_out.content_range; } - r->headers_out.content_range = content_range; - - content_range->hash = 1; - ngx_str_set(&content_range->key, "Content-Range"); - content_range->value.data = ngx_pnalloc(r->pool, sizeof("bytes */") - 1 + NGX_OFF_T_LEN); if (content_range->value.data == NULL) { @@ -570,7 +588,7 @@ content_range->value.len = ngx_sprintf(content_range->value.data, "bytes */%O", - r->headers_out.content_length_n) + ctx->content_length) - content_range->value.data; ngx_http_clear_content_length(r); @@ -888,3 +906,76 @@ return NGX_OK; } + + +static ngx_int_t +ngx_http_content_range_parse(ngx_http_request_t *r, + ngx_http_range_filter_ctx_t *ctx) +{ + u_char *p; + off_t start, end, len; + + ctx->offset = 0; + ctx->content_length = r->headers_out.content_length_n; + + if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { + return NGX_OK; + } + + if (r->headers_out.content_range == NULL + || r->headers_out.content_range->value.len == 0) { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + if (r->headers_out.content_range->value.len < 7 + || ngx_strncasecmp(r->headers_out.content_range->value.data, + (u_char *) "bytes ", 6) != 0) { + return NGX_DECLINED; + } + + start = 0; + end = 0; + len = 0; + + p = r->headers_out.content_range->value.data + 6; + + while (*p == ' ') { p++; } + + if (*p < '0' || *p > '9') { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + while (*p >= '0' && *p <= '9') { + start = start * 10 + *p++ - '0'; + } + + if (*p++ != '-') { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + while (*p >= '0' && *p <= '9') { + end = end * 10 + *p++ - '0'; + } + + if (*p++ != '/') { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + if (*p < '0' || *p > '9') { + return 
NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + while (*p >= '0' && *p <= '9') { + len = len * 10 + *p++ - '0'; + } + + if (*p != '\0') { + return NGX_HTTP_RANGE_NOT_SATISFIABLE; + } + + ctx->offset = start; + ctx->content_length = len; + + return NGX_OK; +} + diff -r 7d7eac6e31df -r 0c3c06fabfc3 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Tue Nov 04 19:56:23 2014 +0900 +++ b/src/http/ngx_http_upstream.c Tue Nov 25 14:18:54 2014 +0000 @@ -292,6 +292,11 @@ ngx_http_upstream_copy_content_encoding, 0, 0 }, #endif + { ngx_string("Content-Range"), + ngx_http_upstream_ignore_header_line, 0, + ngx_http_upstream_copy_allow_ranges, + offsetof(ngx_http_headers_out_t, content_range), 1 }, + { ngx_null_string, NULL, 0, NULL, 0, 0 } }; @@ -4499,37 +4504,26 @@ ngx_http_upstream_copy_allow_ranges(ngx_http_request_t *r, ngx_table_elt_t *h, ngx_uint_t offset) { - ngx_table_elt_t *ho; - if (r->upstream->conf->force_ranges) { return NGX_OK; } - #if (NGX_HTTP_CACHE) - if (r->cached) { r->allow_ranges = 1; - return NGX_OK; + if (offsetof(ngx_http_headers_out_t, accept_ranges) == offset) { + return NGX_OK; + } } if (r->upstream->cacheable) { r->allow_ranges = 1; r->single_range = 1; - return NGX_OK; - } - + if (offsetof(ngx_http_headers_out_t, accept_ranges) == offset) { + return NGX_OK; + } + } #endif - - ho = ngx_list_push(&r->headers_out.headers); - if (ho == NULL) { - return NGX_ERROR; - } - - *ho = *h; - - r->headers_out.accept_ranges = ho; - - return NGX_OK; + return ngx_http_upstream_copy_header_line(r, h, offset); } From steven.hartland at multiplay.co.uk Tue Nov 25 14:30:10 2014 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Tue, 25 Nov 2014 14:30:10 +0000 Subject: [PATCH] Allow Partial Content responses to satisfy Range requests In-Reply-To: <0c3c06fabfc3b1c57710.1416925333@blade26.multiplay.co.uk> References: <0c3c06fabfc3b1c57710.1416925333@blade26.multiplay.co.uk> Message-ID: <54749272.4040909@multiplay.co.uk> Sent this one a while back but 
never had any feedback, so updating to the latest code base and resending. Previously, only 200 responses could satisfy Range requests; this adds support for 206 responses (Partial Content) to also satisfy Range requests, as long as the Range fits within the Partial Content. This can be used directly as well as by other modules, where it can be used to build more complex responses, such as the response to a Range request assembled from a predictable set of sub-Range requests. We have built a caching solution with one such custom module that allows effective caching of gaming content from all the major game distribution networks. More details on this can be found here: http://blog.multiplay.co.uk/2014/04/lancache-dynamically-caching-game-installs-at-lans-using-nginx/ We'd like to release this module, but it relies on having these enhancements in the nginx core, so we would love to get them integrated. Regards Steve On 25/11/2014 14:22, Steven Hartland wrote: > # HG changeset patch > # User Steven Hartland > # Date 1416925134 0 > # Tue Nov 25 14:18:54 2014 +0000 > # Node ID 0c3c06fabfc3b1c57710c0cced4837c10e3e9bbb > # Parent 7d7eac6e31df1d962a644f8093c1fbb8f91620ce > Allow Partial Content responses to satisfy Range requests.
> > diff -r 7d7eac6e31df -r 0c3c06fabfc3 src/http/modules/ngx_http_range_filter_module.c > --- a/src/http/modules/ngx_http_range_filter_module.c Tue Nov 04 19:56:23 2014 +0900 > +++ b/src/http/modules/ngx_http_range_filter_module.c Tue Nov 25 14:18:54 2014 +0000 > @@ -54,6 +54,7 @@ > > typedef struct { > off_t offset; > + off_t content_length; > ngx_str_t boundary_header; > ngx_array_t ranges; > } ngx_http_range_filter_ctx_t; > @@ -65,7 +66,8 @@ > ngx_http_range_filter_ctx_t *ctx); > static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx); > -static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r); > +static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx); > static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r, > ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in); > static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r, > @@ -76,6 +78,9 @@ > static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf); > static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf); > > +static ngx_int_t ngx_http_content_range_parse(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx); > + > > static ngx_http_module_t ngx_http_range_header_filter_module_ctx = { > NULL, /* preconfiguration */ > @@ -153,8 +158,8 @@ > ngx_http_range_filter_ctx_t *ctx; > > if (r->http_version < NGX_HTTP_VERSION_10 > - || r->headers_out.status != NGX_HTTP_OK > - || r != r->main > + || (r->headers_out.status != NGX_HTTP_OK > + && r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) > || r->headers_out.content_length_n == -1 > || !r->allow_ranges) > { > @@ -230,26 +235,31 @@ > > ranges = r->single_range ? 
1 : clcf->max_ranges; > > - switch (ngx_http_range_parse(r, ctx, ranges)) { > + switch (ngx_http_content_range_parse(r, ctx)) { > + case NGX_OK: > + switch (ngx_http_range_parse(r, ctx, ranges)) { > + case NGX_OK: > + ngx_http_set_ctx(r, ctx, ngx_http_range_body_filter_module); > > - case NGX_OK: > - ngx_http_set_ctx(r, ctx, ngx_http_range_body_filter_module); > + r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; > + r->headers_out.status_line.len = 0; > > - r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT; > - r->headers_out.status_line.len = 0; > + if (ctx->ranges.nelts == 1) { > + return ngx_http_range_singlepart_header(r, ctx); > + } > > - if (ctx->ranges.nelts == 1) { > - return ngx_http_range_singlepart_header(r, ctx); > + return ngx_http_range_multipart_header(r, ctx); > + > + case NGX_HTTP_RANGE_NOT_SATISFIABLE: > + return ngx_http_range_not_satisfiable(r); > + > + case NGX_ERROR: > + return NGX_ERROR; > + > + default: /* NGX_DECLINED */ > + break; > } > - > - return ngx_http_range_multipart_header(r, ctx); > - > - case NGX_HTTP_RANGE_NOT_SATISFIABLE: > - return ngx_http_range_not_satisfiable(r); > - > - case NGX_ERROR: > - return NGX_ERROR; > - > + break; > default: /* NGX_DECLINED */ > break; > } > @@ -274,13 +284,12 @@ > ngx_uint_t ranges) > { > u_char *p; > - off_t start, end, size, content_length; > + off_t start, end, size; > ngx_uint_t suffix; > ngx_http_range_t *range; > > p = r->headers_in.range->value.data + 6; > size = 0; > - content_length = r->headers_out.content_length_n; > > for ( ;; ) { > start = 0; > @@ -298,6 +307,10 @@ > start = start * 10 + *p++ - '0'; > } > > + if (start < ctx->offset) { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + > while (*p == ' ') { p++; } > > if (*p++ != '-') { > @@ -307,7 +320,7 @@ > while (*p == ' ') { p++; } > > if (*p == ',' || *p == '\0') { > - end = content_length; > + end = ctx->content_length; > goto found; > } > > @@ -331,12 +344,12 @@ > } > > if (suffix) { > - start = content_length - end; > - end 
= content_length - 1; > + start = ctx->content_length - end; > + end = ctx->content_length - 1; > } > > - if (end >= content_length) { > - end = content_length; > + if (end >= ctx->content_length) { > + end = ctx->content_length; > > } else { > end++; > @@ -369,7 +382,7 @@ > return NGX_HTTP_RANGE_NOT_SATISFIABLE; > } > > - if (size > content_length) { > + if (size > ctx->content_length) { > return NGX_DECLINED; > } > > @@ -384,16 +397,18 @@ > ngx_table_elt_t *content_range; > ngx_http_range_t *range; > > - content_range = ngx_list_push(&r->headers_out.headers); > - if (content_range == NULL) { > - return NGX_ERROR; > + if (r->headers_out.content_range == NULL) { > + content_range = ngx_list_push(&r->headers_out.headers); > + if (content_range == NULL) { > + return NGX_ERROR; > + } > + r->headers_out.content_range = content_range; > + content_range->hash = 1; > + ngx_str_set(&content_range->key, "Content-Range"); > + } else { > + content_range = r->headers_out.content_range; > } > > - r->headers_out.content_range = content_range; > - > - content_range->hash = 1; > - ngx_str_set(&content_range->key, "Content-Range"); > - > content_range->value.data = ngx_pnalloc(r->pool, > sizeof("bytes -/") - 1 + 3 * NGX_OFF_T_LEN); > if (content_range->value.data == NULL) { > @@ -407,7 +422,7 @@ > content_range->value.len = ngx_sprintf(content_range->value.data, > "bytes %O-%O/%O", > range->start, range->end - 1, > - r->headers_out.content_length_n) > + ctx->content_length) > - content_range->value.data; > > r->headers_out.content_length_n = range->end - range->start; > @@ -546,22 +561,25 @@ > > > static ngx_int_t > -ngx_http_range_not_satisfiable(ngx_http_request_t *r) > +ngx_http_range_not_satisfiable(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx) > { > ngx_table_elt_t *content_range; > > r->headers_out.status = NGX_HTTP_RANGE_NOT_SATISFIABLE; > > - content_range = ngx_list_push(&r->headers_out.headers); > - if (content_range == NULL) { > - return NGX_ERROR; > + if 
(r->headers_out.content_range == NULL) { > + content_range = ngx_list_push(&r->headers_out.headers); > + if (content_range == NULL) { > + return NGX_ERROR; > + } > + r->headers_out.content_range = content_range; > + content_range->hash = 1; > + ngx_str_set(&content_range->key, "Content-Range"); > + } else { > + content_range = r->headers_out.content_range; > } > > - r->headers_out.content_range = content_range; > - > - content_range->hash = 1; > - ngx_str_set(&content_range->key, "Content-Range"); > - > content_range->value.data = ngx_pnalloc(r->pool, > sizeof("bytes */") - 1 + NGX_OFF_T_LEN); > if (content_range->value.data == NULL) { > @@ -570,7 +588,7 @@ > > content_range->value.len = ngx_sprintf(content_range->value.data, > "bytes */%O", > - r->headers_out.content_length_n) > + ctx->content_length) > - content_range->value.data; > > ngx_http_clear_content_length(r); > @@ -888,3 +906,76 @@ > > return NGX_OK; > } > + > + > +static ngx_int_t > +ngx_http_content_range_parse(ngx_http_request_t *r, > + ngx_http_range_filter_ctx_t *ctx) > +{ > + u_char *p; > + off_t start, end, len; > + > + ctx->offset = 0; > + ctx->content_length = r->headers_out.content_length_n; > + > + if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { > + return NGX_OK; > + } > + > + if (r->headers_out.content_range == NULL > + || r->headers_out.content_range->value.len == 0) { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + > + if (r->headers_out.content_range->value.len < 7 > + || ngx_strncasecmp(r->headers_out.content_range->value.data, > + (u_char *) "bytes ", 6) != 0) { > + return NGX_DECLINED; > + } > + > + start = 0; > + end = 0; > + len = 0; > + > + p = r->headers_out.content_range->value.data + 6; > + > + while (*p == ' ') { p++; } > + > + if (*p < '0' || *p > '9') { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + > + while (*p >= '0' && *p <= '9') { > + start = start * 10 + *p++ - '0'; > + } > + > + if (*p++ != '-') { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + 
> + while (*p >= '0' && *p <= '9') { > + end = end * 10 + *p++ - '0'; > + } > + > + if (*p++ != '/') { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + > + if (*p < '0' || *p > '9') { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + > + while (*p >= '0' && *p <= '9') { > + len = len * 10 + *p++ - '0'; > + } > + > + if (*p != '\0') { > + return NGX_HTTP_RANGE_NOT_SATISFIABLE; > + } > + > + ctx->offset = start; > + ctx->content_length = len; > + > + return NGX_OK; > +} > + > diff -r 7d7eac6e31df -r 0c3c06fabfc3 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Tue Nov 04 19:56:23 2014 +0900 > +++ b/src/http/ngx_http_upstream.c Tue Nov 25 14:18:54 2014 +0000 > @@ -292,6 +292,11 @@ > ngx_http_upstream_copy_content_encoding, 0, 0 }, > #endif > > + { ngx_string("Content-Range"), > + ngx_http_upstream_ignore_header_line, 0, > + ngx_http_upstream_copy_allow_ranges, > + offsetof(ngx_http_headers_out_t, content_range), 1 }, > + > { ngx_null_string, NULL, 0, NULL, 0, 0 } > }; > > @@ -4499,37 +4504,26 @@ > ngx_http_upstream_copy_allow_ranges(ngx_http_request_t *r, > ngx_table_elt_t *h, ngx_uint_t offset) > { > - ngx_table_elt_t *ho; > - > if (r->upstream->conf->force_ranges) { > return NGX_OK; > } > - > #if (NGX_HTTP_CACHE) > - > if (r->cached) { > r->allow_ranges = 1; > - return NGX_OK; > + if (offsetof(ngx_http_headers_out_t, accept_ranges) == offset) { > + return NGX_OK; > + } > } > > if (r->upstream->cacheable) { > r->allow_ranges = 1; > r->single_range = 1; > - return NGX_OK; > - } > - > + if (offsetof(ngx_http_headers_out_t, accept_ranges) == offset) { > + return NGX_OK; > + } > + } > #endif > - > - ho = ngx_list_push(&r->headers_out.headers); > - if (ho == NULL) { > - return NGX_ERROR; > - } > - > - *ho = *h; > - > - r->headers_out.accept_ranges = ho; > - > - return NGX_OK; > + return ngx_http_upstream_copy_header_line(r, h, offset); > } > > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org 
> http://mailman.nginx.org/mailman/listinfo/nginx-devel From piotr at cloudflare.com Wed Nov 26 00:51:44 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 25 Nov 2014 16:51:44 -0800 Subject: [PATCH] Not Modified: prefer entity tags over date validators In-Reply-To: <20141125135153.GP31620@mdounin.ru> References: <3efade6bb02f7962a512.1416359746@piotrs-macbook-pro.local> <20141119163527.GX26593@mdounin.ru> <20141120160259.GG53423@mdounin.ru> <20141124152917.GH31620@mdounin.ru> <20141125135153.GP31620@mdounin.ru> Message-ID: Hey Maxim, > That's exactly the reason why I don't want to change this without > a good reason IMHO, I already gave you two good enough reasons: false-negatives and RFC7232-compliance. I'm afraid that I can't do much better than that. > (and an explanation why the change was done in RFC7232). That, I cannot provide, but those changes make sense to me. Best regards, Piotr Sikora From piotr at cloudflare.com Wed Nov 26 01:14:48 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 25 Nov 2014 17:14:48 -0800 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: <20141125134344.GO31620@mdounin.ru> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> <20141124150359.GG31620@mdounin.ru> <20141125134344.GO31620@mdounin.ru> Message-ID: Hey Maxim, > The question is: how many other status codes should be considered? > I _think_ that 200 and 206 is good enough list, but I'm not sure. Well, even if we miss a status code or two (and I don't think that we do), then the worst thing that's going to happen is, to quote your argument from the other thread, "suboptimal operation, nothing really bad can happen", which is much, much better than the current situation. 
> Here is the first problem in the upstream server: it returns an > error with the Last-Modified. This is what nginx avoids for a > reason. As far as I can tell, that's neither wrong nor forbidden by RFC. > Here you intentionally cache a response which is not > cacheable as per HTTP specification. ...and yet nginx tries to revalidate it? This is exactly what I'm trying to prevent with this patch. Also, quoting your previous response, "that's up to a configuration" to make it cacheable or not. And last, but not least, 503 was just an example, it could very well be 404 Not Found (cacheable by default per RFC), which doesn't change anything in the described scenario, so it's a moot point. > Here is another problem in the upstream server: it returns 304 > incorrectly. Strict Last-Modified matching is here for a reason > and, among other benefits, allows to mitigate such problems. Sorry, but you're wrong, there is nothing incorrect about it. Strict matching is a made up requirement and both RFCs agree on that. RFC2616 says: The If-Modified-Since request-header field is used with a method to make it conditional: if the requested variant has not been modified since the time specified in this field, an entity will not be returned from the server; instead, a 304 (not modified) response will be returned without any message-body. RFC7232 says: The "If-Modified-Since" header field makes a GET or HEAD request method conditional on the selected representation's modification date being more recent than the date provided in the field-value. It's worth noting that Apache, IIS, Varnish, ATS, Google's, Twitter's and Akamai's web servers all behave this way (i.e. using "before", not "exact" logic) and, as far as I can tell, nginx is the only exception to this rule. > This is not true. The upstream server intentionally does at least > two perfectly stupid things, and additionally you force nginx to > cache an uncacheable response. 
And all of this combined causes > the problem, not the fact that nginx uses conditional requests. I disagree. Responses with non-200/206 status codes cannot be revalidated with 304, so nginx expecting that to work and/or using 304 generated from 200/206 status code to revalidate responses cached with non-200/206 status is just wrong. > It's not broken. Trying to convince me that it's broken will not > work, for sure. You may have better luck convincing me that the > change will help to mitigate known/seen-in-the-wild problems > without affecting other uses. I might have better luck doing that, but it doesn't change the fact that the current behavior is broken. But yeah, if that's going to make you feel better, we had at least a few cases where 404 responses were being revalidated by 304 generated from 200. Best regards, Piotr Sikora From piotr at cloudflare.com Wed Nov 26 01:15:49 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 25 Nov 2014 17:15:49 -0800 Subject: [PATCH 1 of 2] Cache: remove unused valid_msec fields In-Reply-To: <20141120161431.GH53423@mdounin.ru> References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <20141119164353.GZ26593@mdounin.ru> <20141120161431.GH53423@mdounin.ru> Message-ID: Hey Maxim, fair enough, I'm going to rebase the other patch and leave valid_msec as-is. 
Best regards, Piotr Sikora From piotr at cloudflare.com Wed Nov 26 01:40:11 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 25 Nov 2014 17:40:11 -0800 Subject: [PATCH] Cache: send conditional requests only for cached 200/206 responses In-Reply-To: <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> References: <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> Message-ID: # HG changeset patch # User Piotr Sikora # Date 1416965642 28800 # Tue Nov 25 17:34:02 2014 -0800 # Node ID a39048b405998c977426613668861efae545f70f # Parent 2c10db908b8c4a9c0532c58830275d5ad84ae686 Cache: send conditional requests only for cached 200/206 responses. RFC7232 says: The 304 (Not Modified) status code indicates that a conditional GET or HEAD request has been received and would have resulted in a 200 (OK) response if it were not for the fact that the condition evaluated to false. which means that there is no reason to send requests with "If-None-Match" and/or "If-Modified-Since" headers for responses cached with other status codes. Also, sending conditional requests for responses cached with other status codes could result in a strange behavior, e.g. upstream server returning 304 Not Modified for cached 404 Not Found responses, etc. 
Signed-off-by: Piotr Sikora diff -r 2c10db908b8c -r a39048b40599 src/http/ngx_http_cache.h --- a/src/http/ngx_http_cache.h Fri Nov 21 22:51:49 2014 +0300 +++ b/src/http/ngx_http_cache.h Tue Nov 25 17:34:02 2014 -0800 @@ -27,7 +27,7 @@ #define NGX_HTTP_CACHE_ETAG_LEN 42 #define NGX_HTTP_CACHE_VARY_LEN 42 -#define NGX_HTTP_CACHE_VERSION 3 +#define NGX_HTTP_CACHE_VERSION 4 typedef struct { @@ -84,6 +84,7 @@ struct ngx_http_cache_s { ngx_uint_t min_uses; ngx_uint_t error; + ngx_uint_t status; ngx_uint_t valid_msec; ngx_buf_t *buf; @@ -116,6 +117,7 @@ typedef struct { time_t last_modified; time_t date; uint32_t crc32; + u_short status; u_short valid_msec; u_short header_start; u_short body_start; diff -r 2c10db908b8c -r a39048b40599 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Fri Nov 21 22:51:49 2014 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Nov 25 17:34:02 2014 -0800 @@ -561,6 +561,7 @@ ngx_http_file_cache_read(ngx_http_reques c->valid_sec = h->valid_sec; c->last_modified = h->last_modified; c->date = h->date; + c->status = h->status; c->valid_msec = h->valid_msec; c->header_start = h->header_start; c->body_start = h->body_start; @@ -1119,6 +1120,7 @@ ngx_http_file_cache_set_header(ngx_http_ h->last_modified = c->last_modified; h->date = c->date; h->crc32 = c->crc32; + h->status = (u_short) c->status; h->valid_msec = (u_short) c->valid_msec; h->header_start = (u_short) c->header_start; h->body_start = (u_short) c->body_start; @@ -1338,6 +1340,7 @@ ngx_http_file_cache_update_header(ngx_ht if (h.version != NGX_HTTP_CACHE_VERSION || h.last_modified != c->last_modified || h.crc32 != c->crc32 + || h.status != c->status || h.header_start != c->header_start || h.body_start != c->body_start) { @@ -1359,6 +1362,7 @@ ngx_http_file_cache_update_header(ngx_ht h.last_modified = c->last_modified; h.date = c->date; h.crc32 = c->crc32; + h.status = (u_short) c->status; h.valid_msec = (u_short) c->valid_msec; h.header_start = (u_short) c->header_start; 
h.body_start = (u_short) c->body_start; diff -r 2c10db908b8c -r a39048b40599 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Nov 21 22:51:49 2014 +0300 +++ b/src/http/ngx_http_upstream.c Tue Nov 25 17:34:02 2014 -0800 @@ -2560,6 +2560,7 @@ ngx_http_upstream_send_response(ngx_http } if (valid) { + r->cache->status = u->headers_in.status_n; r->cache->last_modified = u->headers_in.last_modified_time; r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4936,6 +4937,13 @@ ngx_http_upstream_cache_last_modified(ng return NGX_OK; } + if (r->cache->status != NGX_HTTP_OK + && r->cache->status != NGX_HTTP_PARTIAL_CONTENT) + { + v->not_found = 1; + return NGX_OK; + } + p = ngx_pnalloc(r->pool, sizeof("Mon, 28 Sep 1970 06:00:00 GMT") - 1); if (p == NULL) { return NGX_ERROR; @@ -4964,6 +4972,13 @@ ngx_http_upstream_cache_etag(ngx_http_re return NGX_OK; } + if (r->cache->status != NGX_HTTP_OK + && r->cache->status != NGX_HTTP_PARTIAL_CONTENT) + { + v->not_found = 1; + return NGX_OK; + } + v->valid = 1; v->no_cacheable = 0; v->not_found = 0; From mdounin at mdounin.ru Wed Nov 26 14:54:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Nov 2014 17:54:15 +0300 Subject: [PATCH 2 of 2] Cache: send conditional requests only for cached 200 OK responses In-Reply-To: References: <99e65578bc80960b2fdf.1416359748@piotrs-macbook-pro.local> <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> <20141119163814.GY26593@mdounin.ru> <20141120163912.GI53423@mdounin.ru> <20141124150359.GG31620@mdounin.ru> <20141125134344.GO31620@mdounin.ru> Message-ID: <20141126145415.GR31620@mdounin.ru> Hello! On Tue, Nov 25, 2014 at 05:14:48PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > The question is: how many other status codes should be considered? > > I _think_ that 200 and 206 is good enough list, but I'm not sure. 
> > Well, even if we miss a status code or two (and I don't think that we > do), then the worst thing that's going to happen is, to quote your > argument from the other thread, "suboptimal operation, nothing really > bad can happen", which is much, much better than the current > situation. Fair enough. > > Here is the first problem in the upstream server: it returns an > > error with the Last-Modified. This is what nginx avoids for a > > reason. > > As far as I can tell, that's neither wrong nor forbidden by RFC. Much like nginx behaviour to use it in the If-Modified-Since. So nobody does anything wrong/forbidden, but there is a problem as a result. And nginx already does at least two things to mitigate such problems on server-side, and it probably won't hurt much to add another client-side mitigation. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Nov 26 14:55:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Nov 2014 17:55:56 +0300 Subject: [PATCH] Cache: send conditional requests only for cached 200/206 responses In-Reply-To: References: <16f4ca8391ddd98ba99b.1416359749@piotrs-macbook-pro.local> Message-ID: <20141126145556.GS31620@mdounin.ru> Hello! On Tue, Nov 25, 2014 at 05:40:11PM -0800, Piotr Sikora wrote: [...] > if (valid) { > + r->cache->status = u->headers_in.status_n; > r->cache->last_modified = u->headers_in.last_modified_time; > r->cache->date = now; > r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); I think we should consider not saving last_modified and etag instead. 
-- Maxim Dounin http://nginx.org/ From piotr at cloudflare.com Thu Nov 27 02:38:31 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Wed, 26 Nov 2014 18:38:31 -0800 Subject: [PATCH] Cache: send conditional requests only for cached 200/206 responses In-Reply-To: References: Message-ID: # HG changeset patch # User Piotr Sikora # Date 1417055737 28800 # Wed Nov 26 18:35:37 2014 -0800 # Node ID ec4837c14647c6745b41f0a8c55fcc5fcb6f336b # Parent 2c10db908b8c4a9c0532c58830275d5ad84ae686 Cache: send conditional requests only for cached 200/206 responses. RFC7232 says: The 304 (Not Modified) status code indicates that a conditional GET or HEAD request has been received and would have resulted in a 200 (OK) response if it were not for the fact that the condition evaluated to false. which means that there is no reason to send requests with "If-None-Match" and/or "If-Modified-Since" headers for responses cached with other status codes. Also, sending conditional requests for responses cached with other status codes could result in a strange behavior, e.g. upstream server returning 304 Not Modified for cached 404 Not Found responses, etc. 
Signed-off-by: Piotr Sikora diff -r 2c10db908b8c -r ec4837c14647 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Fri Nov 21 22:51:49 2014 +0300 +++ b/src/http/ngx_http_file_cache.c Wed Nov 26 18:35:37 2014 -0800 @@ -175,6 +175,8 @@ ngx_http_file_cache_new(ngx_http_request c->file.log = r->connection->log; c->file.fd = NGX_INVALID_FILE; + c->last_modified = -1; + return NGX_OK; } diff -r 2c10db908b8c -r ec4837c14647 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Fri Nov 21 22:51:49 2014 +0300 +++ b/src/http/ngx_http_upstream.c Wed Nov 26 18:35:37 2014 -0800 @@ -2560,12 +2560,17 @@ ngx_http_upstream_send_response(ngx_http } if (valid) { - r->cache->last_modified = u->headers_in.last_modified_time; r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); - if (u->headers_in.etag) { - r->cache->etag = u->headers_in.etag->value; + if (u->headers_in.status_n == NGX_HTTP_OK + || u->headers_in.status_n == NGX_HTTP_PARTIAL_CONTENT) + { + r->cache->last_modified = u->headers_in.last_modified_time; + + if (u->headers_in.etag) { + r->cache->etag = u->headers_in.etag->value; + } } ngx_http_file_cache_set_header(r, u->buffer.start); From mdounin at mdounin.ru Thu Nov 27 15:31:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Nov 2014 18:31:31 +0300 Subject: [PATCH] Request hang when cache_lock is used in subrequests In-Reply-To: References: Message-ID: <20141127153131.GY31620@mdounin.ru> Hello! On Sat, Oct 26, 2013 at 03:38:35PM -0700, Yichun Zhang (agentzh) wrote: > Hello! > > Akos Gyimesi reported a request hang (downstream connections stuck in > the CLOSE_WAIT state forever) regarding use of proxy_cache_lock in > subrequests. 
> > The issue is that when proxy_cache_lock_timeout is reached, > ngx_http_file_cache_lock_wait_handler calls > r->connection->write->handler() directly, but > r->connection->write->handler is (usually) just > ngx_http_request_handler, which simply picks up r->connection->data, > which is *not* necessarily the current (sub)request, so the current > subrequest may never be continued nor finalized, leading to an > infinite request hang. > > The following patch fixes this issue for me. Comments welcome! Yichun, I've spent some time looking in this, and I don't see how it can cause infinite hang at least with stock nginx modules. It certainly can cause suboptimal behaviour though, both with proxy cache locks and with AIO. Here are two patches to address this (and also logging issues with subrequests): # HG changeset patch # User Maxim Dounin # Date 1417096347 -10800 # Thu Nov 27 16:52:27 2014 +0300 # Node ID 6182e4636b972aee8edfdfb70d8ccb45b5d9303a # Parent 005b56eca92995fed63d5a5526db895c2402b96f Upstream: improved subrequest logging. To ensure proper logging make sure to set current_request in all event handlers, including resolve, ssl handshake, cache lock wait timer and aio read handlers. A macro ngx_http_set_log_request() introduced to simplify this. 
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -14,6 +14,8 @@ static ngx_int_t ngx_http_file_cache_lock(ngx_http_request_t *r, ngx_http_cache_t *c); static void ngx_http_file_cache_lock_wait_handler(ngx_event_t *ev); +static void ngx_http_file_cache_lock_wait(ngx_http_request_t *r, + ngx_http_cache_t *c); static ngx_int_t ngx_http_file_cache_read(ngx_http_request_t *r, ngx_http_cache_t *c); static ssize_t ngx_http_file_cache_aio_read(ngx_http_request_t *r, @@ -448,25 +450,35 @@ ngx_http_file_cache_lock(ngx_http_reques static void ngx_http_file_cache_lock_wait_handler(ngx_event_t *ev) { - ngx_uint_t wait; - ngx_msec_t now, timer; - ngx_http_cache_t *c; - ngx_http_request_t *r; - ngx_http_file_cache_t *cache; + ngx_connection_t *c; + ngx_http_request_t *r; r = ev->data; - c = r->cache; + c = r->connection; + + ngx_http_set_log_request(c->log, r); + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "http file cache wait: \"%V?%V\"", &r->uri, &r->args); + + ngx_http_file_cache_lock_wait(r, r->cache); +} + + +static void +ngx_http_file_cache_lock_wait(ngx_http_request_t *r, ngx_http_cache_t *c) +{ + ngx_uint_t wait; + ngx_msec_t now, timer; + ngx_http_file_cache_t *cache; now = ngx_current_msec; - ngx_log_debug2(NGX_LOG_DEBUG_HTTP, ev->log, 0, - "http file cache wait handler wt:%M cur:%M", - c->wait_time, now); - timer = c->wait_time - now; if ((ngx_msec_int_t) timer <= 0) { - ngx_log_error(NGX_LOG_INFO, ev->log, 0, "cache lock timeout"); + ngx_log_error(NGX_LOG_INFO, r->connection->log, 0, + "cache lock timeout"); c->lock_timeout = 0; goto wakeup; } @@ -485,7 +497,7 @@ ngx_http_file_cache_lock_wait_handler(ng ngx_shmtx_unlock(&cache->shpool->mutex); if (wait) { - ngx_add_timer(ev, (timer > 500) ? 500 : timer); + ngx_add_timer(&c->wait_event, (timer > 500) ? 
500 : timer); return; } @@ -665,10 +677,17 @@ static void ngx_http_cache_aio_event_handler(ngx_event_t *ev) { ngx_event_aio_t *aio; + ngx_connection_t *c; ngx_http_request_t *r; aio = ev->data; r = aio->data; + c = r->connection; + + ngx_http_set_log_request(c->log, r); + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "http file cache aio: \"%V?%V\"", &r->uri, &r->args); r->main->blocked--; r->aio = 0; diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -2169,13 +2169,11 @@ ngx_http_request_handler(ngx_event_t *ev { ngx_connection_t *c; ngx_http_request_t *r; - ngx_http_log_ctx_t *ctx; c = ev->data; r = c->data; - ctx = c->log->data; - ctx->current_request = r; + ngx_http_set_log_request(c->log, r); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, "http run request: \"%V?%V\"", &r->uri, &r->args); @@ -2195,7 +2193,6 @@ void ngx_http_run_posted_requests(ngx_connection_t *c) { ngx_http_request_t *r; - ngx_http_log_ctx_t *ctx; ngx_http_posted_request_t *pr; for ( ;; ) { @@ -2215,8 +2212,7 @@ ngx_http_run_posted_requests(ngx_connect r = pr->request; - ctx = c->log->data; - ctx->current_request = r; + ngx_http_set_log_request(c->log, r); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, "http posted request: \"%V?%V\"", &r->uri, &r->args); diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h --- a/src/http/ngx_http_request.h +++ b/src/http/ngx_http_request.h @@ -595,4 +595,8 @@ extern ngx_http_header_out_t ngx_http_ } +#define ngx_http_set_log_request(log, r) \ + ((ngx_http_log_ctx_t *) log->data)->current_request = r + + #endif /* _NGX_HTTP_REQUEST_H_INCLUDED_ */ diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -942,6 +942,11 @@ ngx_http_upstream_resolve_handler(ngx_re u = r->upstream; ur = u->resolved; + ngx_http_set_log_request(c->log, r); + + 
ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, + "http upstream resolve: \"%V?%V\"", &r->uri, &r->args); + if (ctx->state) { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "%V could not be resolved (%i: %s)", @@ -1003,7 +1008,6 @@ ngx_http_upstream_handler(ngx_event_t *e { ngx_connection_t *c; ngx_http_request_t *r; - ngx_http_log_ctx_t *ctx; ngx_http_upstream_t *u; c = ev->data; @@ -1012,8 +1016,7 @@ ngx_http_upstream_handler(ngx_event_t *e u = r->upstream; c = r->connection; - ctx = c->log->data; - ctx->current_request = r; + ngx_http_set_log_request(c->log, r); ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0, "http upstream request: \"%V?%V\"", &r->uri, &r->args); @@ -1447,6 +1450,8 @@ ngx_http_upstream_ssl_handshake(ngx_conn r = c->data; u = r->upstream; + ngx_http_set_log_request(c->log, r); + if (c->ssl->handshaked) { if (u->conf->ssl_verify) { # HG changeset patch # User Maxim Dounin # Date 1417096358 -10800 # Thu Nov 27 16:52:38 2014 +0300 # Node ID a96fc771c7bf1343f7cb965d525a1479644f3325 # Parent 6182e4636b972aee8edfdfb70d8ccb45b5d9303a Cache: proper wakeup of subrequests. In case of a cache lock timeout and in the aio handler we now call r->write_event_handler() instead of a connection write handler, to make sure to run appropriate subrequest. Previous code failed to run inactive subrequests and hence resulted in suboptimal behaviour, see report by Yichun Zhang: http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004435.html (Infinite hang claimed in the report seems impossible without 3rd party modules, as subrequests will be eventually woken up by the postpone filter.)
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c +++ b/src/http/ngx_http_file_cache.c @@ -462,6 +462,8 @@ ngx_http_file_cache_lock_wait_handler(ng "http file cache wait: \"%V?%V\"", &r->uri, &r->args); ngx_http_file_cache_lock_wait(r, r->cache); + + ngx_http_run_posted_requests(c); } @@ -505,7 +507,7 @@ wakeup: c->waiting = 0; r->main->blocked--; - r->connection->write->handler(r->connection->write); + r->write_event_handler(r); } @@ -692,7 +694,9 @@ ngx_http_cache_aio_event_handler(ngx_eve r->main->blocked--; r->aio = 0; - r->connection->write->handler(r->connection->write); + r->write_event_handler(r); + + ngx_http_run_posted_requests(c); } #endif -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Nov 28 13:57:44 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Nov 2014 13:57:44 +0000 Subject: [nginx] Typo. Message-ID: details: http://hg.nginx.org/nginx/rev/58956c644ad0 branches: changeset: 5924:58956c644ad0 user: Maxim Dounin date: Fri Nov 28 16:57:23 2014 +0300 description: Typo. diffstat: src/core/ngx_crypt.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/core/ngx_crypt.c b/src/core/ngx_crypt.c --- a/src/core/ngx_crypt.c +++ b/src/core/ngx_crypt.c @@ -66,7 +66,7 @@ ngx_crypt_apr1(ngx_pool_t *pool, u_char size_t saltlen, keylen; ngx_md5_t md5, ctx1; - /* Apache's apr1 crypt is Paul-Henning Kamp's md5 crypt with $apr1$ magic */ + /* Apache's apr1 crypt is Poul-Henning Kamp's md5 crypt with $apr1$ magic */ keylen = ngx_strlen(key); From mdounin at mdounin.ru Fri Nov 28 13:58:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Nov 2014 13:58:34 +0000 Subject: [nginx] Fixed post_action to not trigger "header already sent" a... 
Message-ID: details: http://hg.nginx.org/nginx/rev/c76d851c5e7a branches: changeset: 5925:c76d851c5e7a user: Maxim Dounin date: Fri Nov 28 16:57:50 2014 +0300 description: Fixed post_action to not trigger "header already sent" alert. The alert was introduced in 03ff14058272 (1.5.4), and was triggered on each post_action invocation. There is no real need to call header filters in case of post_action, so return NGX_OK from ngx_http_send_header() if r->post_action is set. diffstat: src/http/ngx_http_core_module.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -1973,6 +1973,10 @@ ngx_http_send_response(ngx_http_request_ ngx_int_t ngx_http_send_header(ngx_http_request_t *r) { + if (r->post_action) { + return NGX_OK; + } + if (r->header_sent) { ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0, "header already sent"); From mdounin at mdounin.ru Fri Nov 28 14:00:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 28 Nov 2014 14:00:14 +0000 Subject: [nginx] Write filter: fixed handling of sync bufs (ticket #132). Message-ID: details: http://hg.nginx.org/nginx/rev/08bfc7188a41 branches: changeset: 5926:08bfc7188a41 user: Maxim Dounin date: Fri Nov 28 16:58:39 2014 +0300 description: Write filter: fixed handling of sync bufs (ticket #132). 
diffstat: src/http/ngx_http_write_filter_module.c | 13 +++++++++++-- 1 files changed, 11 insertions(+), 2 deletions(-) diffs (51 lines): diff --git a/src/http/ngx_http_write_filter_module.c b/src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c +++ b/src/http/ngx_http_write_filter_module.c @@ -48,7 +48,7 @@ ngx_int_t ngx_http_write_filter(ngx_http_request_t *r, ngx_chain_t *in) { off_t size, sent, nsent, limit; - ngx_uint_t last, flush; + ngx_uint_t last, flush, sync; ngx_msec_t delay; ngx_chain_t *cl, *ln, **ll, *chain; ngx_connection_t *c; @@ -62,6 +62,7 @@ ngx_http_write_filter(ngx_http_request_t size = 0; flush = 0; + sync = 0; last = 0; ll = &r->out; @@ -105,6 +106,10 @@ ngx_http_write_filter(ngx_http_request_t flush = 1; } + if (cl->buf->sync) { + sync = 1; + } + if (cl->buf->last_buf) { last = 1; } @@ -157,6 +162,10 @@ ngx_http_write_filter(ngx_http_request_t flush = 1; } + if (cl->buf->sync) { + sync = 1; + } + if (cl->buf->last_buf) { last = 1; } @@ -188,7 +197,7 @@ ngx_http_write_filter(ngx_http_request_t && !(c->buffered & NGX_LOWLEVEL_BUFFERED) && !(last && c->need_last_buf)) { - if (last || flush) { + if (last || flush || sync) { for (cl = r->out; cl; /* void */) { ln = cl; cl = cl->next; From tod_baudais at mac.com Fri Nov 28 22:46:40 2014 From: tod_baudais at mac.com (Tod Baudais) Date: Fri, 28 Nov 2014 16:46:40 -0600 Subject: Adding a delay in body filter response Message-ID: Sorry if this is not the place to ask this or if this has been asked before (google hasn't been helpful), but I'm unsure of how to proceed with this problem. I am developing a body filter module that processes html and has to do a process in the middle of sending a response that can take upwards of a couple seconds. So for example, the first half of the HTML gets sent immediately, a process happens and eventually finishes, then second half gets sent (the contents of the second half being dependent on the results of the process). 
How do I get a body filter to "wait" for a bit and then continue sometime later? The signal/flag/whatever to continue could come from a timer event or elsewhere. I'm looking for a non-blocking solution obviously. I'm using nginx as a proxy if that makes any difference so the HTML doesn't originate on this server. And yes, it's a bit of a weird use case. Thanks, Tod. From agentzh at gmail.com Sat Nov 29 05:38:43 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 28 Nov 2014 21:38:43 -0800 Subject: [PATCH] Request hang when cache_lock is used in subrequests In-Reply-To: <20141127153131.GY31620@mdounin.ru> References: <20141127153131.GY31620@mdounin.ru> Message-ID: Hi Maxim! On Thu, Nov 27, 2014 at 7:31 AM, Maxim Dounin wrote: > Yichun, I've spent some time looking in this, and I don't see how > it can cause infinite hang at least with stock nginx modules. It > certainly can cause suboptimal behaviour though, both with proxy > cache locks and with AIO. > You're right. I'm not using the stock nginx modules nor the stock subrequest model :P > Here are two patches to address this (and also logging issues with > subrequests): > I've just confirmed that your patches fix my issue. Thank you very much! Best regards, -agentzh From agentzh at gmail.com Sat Nov 29 05:53:59 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 28 Nov 2014 21:53:59 -0800 Subject: Adding a delay in body filter response In-Reply-To: References: Message-ID: Hello! On Fri, Nov 28, 2014 at 2:46 PM, Tod Baudais wrote: > I am developing a body filter module that processes html and has to do a > process in the middle of sending a response that can take upwards of a > couple seconds. What is this "process" exactly? Is it CPU-bound computation or just a (nonblocking) IO operation?
> So for example, the first half of the HTML gets sent > immediately, a process happens and eventually finishes, then second half > gets sent (the contents of the second half being dependent on the results of > the process). If the "process" is an IO operation, this looks trivial if it is implemented as a content handler instead of a body filter. > How do I get a body filter to "wait" for a bit and then > continue sometime later? Alas. The nginx output body filter does not provide a "wait" mode by its design. To make it actually wait, we need to cheat a bit by exhausting the "busy bufs" of the content handler (so as to make it stop sending more bufs in non-buffered mode). That's the only way I'm aware of. (Feel free to prove me wrong.) And the closest thing in the stock nginx distribution is the limit_rate mechanism which can serve as an example. But again, this is complicated, so be careful :) Regards, -agentzh