From aviram at adallom.com Tue Dec 1 07:55:22 2015
From: aviram at adallom.com (Aviram Cohen)
Date: Tue, 1 Dec 2015 07:55:22 +0000
Subject: [BUG] Gunzip module may cause requests to fail
In-Reply-To: <20151130173717.GJ74233@mdounin.ru>
References: <2571972.HTVaeomj3b@vbart-workstation> <20151130173717.GJ74233@mdounin.ru>
Message-ID: 

Hello,

Maxim, great hearing from you. I have said that the response is a bit malformed, which means to me that even though it looks weird, it can be handled. You are right that when Nginx encounters unexpected behavior from the server/client, it should close the connection so that there won't be any damage. However, this is an error that Nginx could have easily avoided. Besides, this is not explicit Nginx behavior, as Nginx is not aware that it got an empty response with the gzip header. It happens implicitly due to a calculation of data size, which happens to be zero. The error log doesn't show a backend server error, but a zlib error. This all suggests an obvious Nginx bug, and doesn't sound to me like the desired Nginx behavior.

Best regards,
Aviram

-----Original Message-----
From: nginx-devel [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin
Sent: Monday, November 30, 2015 19:37
To: nginx-devel at nginx.org
Subject: Re: [BUG] Gunzip module may cause requests to fail

Hello!

On Mon, Nov 30, 2015 at 04:29:09PM +0000, Aviram Cohen wrote:

> You are right, response bodies that are empty but still "encoded as
> gzip" are a bit malformed.
> Unfortunately, sometimes we don't control the behavior of the server.
> And still, I think Nginx should be able to handle such responses and
> not disconnect the client.

As you said, such responses are "a bit malformed". And nginx does its best at handling such malformed responses: it logs an error and closes the connection to prevent further damage. The only potentially better option I can think of would be not to touch such responses at all.
Unfortunately, this isn't really possible, as the response headers are already modified and sent to the client at the point when we know the response body is malformed.

Another obvious solution would be to instruct nginx not to try to gunzip responses if you don't control your backend's responses and some of them are malformed. Actually, this is the default.

-- 
Maxim Dounin
http://nginx.org/

_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

From x at chrisbranch.co.uk Tue Dec 1 12:19:27 2015
From: x at chrisbranch.co.uk (Chris Branch)
Date: Tue, 1 Dec 2015 12:19:27 +0000
Subject: Closing upstream keepalive connections in an invalid state
In-Reply-To: <20151130180946.GK74233@mdounin.ru>
References: <7FE1A2EB-124C-4A7F-804C-FC5D55D7579F@chrisbranch.co.uk> <20151130180946.GK74233@mdounin.ru>
Message-ID: 

Thanks for your feedback!

> On 30 Nov 2015, at 18:09, Maxim Dounin wrote:
> 
> The patch looks incomplete to me. It doesn't seem to handle the
> "next upstream" case. And the condition used looks wrong, too, as
> it doesn't take into account what nginx actually tried to send.

Next upstream was indeed forgotten, and is relevant for the case of a buffered request + large request body (not tested). However, I disagree that the condition looks wrong.
So long as a request is sent in full, the connection remains usable - that's defined solely by the data remaining in our buffers and the data we are waiting for from the client. There are potential improvements to the keepalive handling for which "what nginx actually tried to send" is irrelevant:

- If using chunked encoding, send a trailer sequence to place the upstream connection in a valid state
- Discard the request body to keep the downstream connection alive

Having a special flag for this simple case seems unnecessary. It is surely more robust to make a decision based on our current state rather than deciding that indirectly with a flag. You must agree, or you wouldn't dislike the quick and dirty (but otherwise completely functional) patch :)

From mdounin at mdounin.ru Tue Dec 1 13:28:36 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Dec 2015 16:28:36 +0300
Subject: [BUG] Gunzip module may cause requests to fail
In-Reply-To: 
References: <2571972.HTVaeomj3b@vbart-workstation> <20151130173717.GJ74233@mdounin.ru>
Message-ID: <20151201132836.GN74233@mdounin.ru>

Hello!

On Tue, Dec 01, 2015 at 07:55:22AM +0000, Aviram Cohen wrote:

> Maxim, great hearing from you.
> I have said that the response is a bit malformed, which means
> to me that even though it looks weird, it can be handled.
> You are right that when Nginx encounters unexpected behavior from the server/client, it should close the connection so that there won't be any damage.
> However, this is an error that Nginx could have easily avoided.

"Just a little bit pregnant" (c)

The response is malformed, and nobody knows how it would look if it weren't malformed.

> Besides, this is not explicit Nginx behavior, as Nginx is not aware that it got an empty response with the gzip header.
> It happens implicitly due to a calculation of data size, which happens to be zero. The error log doesn't show a backend server error, but a zlib error.
> This all suggests an obvious Nginx bug, and doesn't sound to me like the desired Nginx behavior.

The error happens when gunzipping the response, and it's irrelevant whether it was obtained from a backend or from disk. The error says "inflate() returned ... on response end", and it is believed to be clear enough and in line with other errors reported by the gunzip filter. If you think the error message can be improved - feel free to suggest patches.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Dec 1 14:06:48 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Dec 2015 17:06:48 +0300
Subject: Closing upstream keepalive connections in an invalid state
In-Reply-To: 
References: <7FE1A2EB-124C-4A7F-804C-FC5D55D7579F@chrisbranch.co.uk> <20151130180946.GK74233@mdounin.ru>
Message-ID: <20151201140647.GO74233@mdounin.ru>

Hello!

On Tue, Dec 01, 2015 at 12:19:27PM +0000, Chris Branch wrote:

> Thanks for your feedback!
> 
> > On 30 Nov 2015, at 18:09, Maxim Dounin wrote:
> > 
> > The patch looks incomplete to me. It doesn't seem to handle the
> > "next upstream" case. And the condition used looks wrong, too, as
> > it doesn't take into account what nginx actually tried to send.
> 
> Next upstream was indeed forgotten, and is relevant for the case
> of a buffered request + large request body (not tested).
> 
> However, I disagree that the condition looks wrong. So long as a
> request is sent in full, the connection remains usable - that's
> defined solely by the data remaining in our buffers and the data
> we are waiting for from the client. There are potential
> improvements to the keepalive handling for which "what nginx
> actually tried to send"
is irrelevant:
> 
> - If using chunked encoding, send a trailer sequence to place
> the upstream connection in a valid state
> - Discard the request body to keep the downstream connection
> alive

The condition you use is as follows:

+ if (u->writer.out != NULL
+ || (r->request_body && r->request_body->rest > 0))

It fails if u->writer has something unsent in it, right. This part per se isn't enough though, as it clearly doesn't take into account an unbuffered request body. So you've added r->request_body checks - but why does r->request_body have to be considered at all if, e.g., proxy_pass_request_body was set to off or proxy_set_body was used? And what happens when we've read all the body, and then an error happened in a request body filter - e.g., when we've tried to write it to the request body file? The condition is also likely to be wrong in case of some error while reading the request body file from disk.

> Having a special flag for this simple case seems unnecessary. It
> is surely more robust to make a decision based on our current
> state rather than deciding that indirectly with a flag. You must
> agree, or you wouldn't dislike the quick and dirty (but otherwise
> completely functional) patch :)

Sure, I don't like the special flag either. The condition proposed clearly isn't good enough, though.

-- 
Maxim Dounin
http://nginx.org/

From vbart at nginx.com Tue Dec 1 15:10:49 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Tue, 01 Dec 2015 18:10:49 +0300
Subject: [BUG] Gunzip module may cause requests to fail
In-Reply-To: 
References: <20151130173717.GJ74233@mdounin.ru>
Message-ID: <2805164.n2UDpGHsxe@vbart-laptop>

On Tuesday 01 December 2015 07:55:22 Aviram Cohen wrote:
> Hello,
> 
> Maxim, great hearing from you.
> I have said that the response is a bit malformed, which means to me that even though it looks weird, it can be handled.
> You are right that when Nginx encounters unexpected behavior from the server/client, it should close the connection so that there won't be any damage.
> However, this is an error that Nginx could have easily avoided.
> [..]

The problem is that nginx cannot know how to fix this response. It can be empty because the application crashed, or because of a compression error.

wbr, Valentin V. Bartenev

From vbart at nginx.com Tue Dec 1 16:04:32 2015
From: vbart at nginx.com (Valentin Bartenev)
Date: Tue, 01 Dec 2015 16:04:32 +0000
Subject: [nginx] Increased the default "connection_pool_size" on 64-bit p...
Message-ID: 

details: http://hg.nginx.org/nginx/rev/ea3ba1ce7014
branches: 
changeset: 6309:ea3ba1ce7014
user: Valentin Bartenev 
date: Mon Nov 30 16:27:33 2015 +0300
description:
Increased the default "connection_pool_size" on 64-bit platforms.

The previous default of 256 bytes isn't enough and results in two allocations on each accepted connection, which is suboptimal.

diffstat:

 src/http/ngx_http_core_module.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 7e241b36819d -r ea3ba1ce7014 src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c Mon Nov 30 12:04:29 2015 +0300
+++ b/src/http/ngx_http_core_module.c Mon Nov 30 16:27:33 2015 +0300
@@ -3503,7 +3503,7 @@ ngx_http_core_merge_srv_conf(ngx_conf_t
 /* TODO: it does not merge, it inits only */
 ngx_conf_merge_size_value(conf->connection_pool_size,
- prev->connection_pool_size, 256);
+ prev->connection_pool_size, NGX_PTR_SIZE * 64);
 ngx_conf_merge_size_value(conf->request_pool_size, prev->request_pool_size, 4096);
 ngx_conf_merge_msec_value(conf->client_header_timeout,

From ru at nginx.com Tue Dec 1 17:20:58 2015
From: ru at nginx.com (Ruslan Ermilov)
Date: Tue, 01 Dec 2015 17:20:58 +0000
Subject: [nginx] Reduced the number of GET method constants.
Message-ID: details: http://hg.nginx.org/nginx/rev/9d00576252aa branches: changeset: 6310:9d00576252aa user: Ruslan Ermilov date: Mon Nov 30 12:04:35 2015 +0300 description: Reduced the number of GET method constants. diffstat: src/http/ngx_http_special_response.c | 5 +---- 1 files changed, 1 insertions(+), 4 deletions(-) diffs (22 lines): diff -r ea3ba1ce7014 -r 9d00576252aa src/http/ngx_http_special_response.c --- a/src/http/ngx_http_special_response.c Mon Nov 30 16:27:33 2015 +0300 +++ b/src/http/ngx_http_special_response.c Mon Nov 30 12:04:35 2015 +0300 @@ -359,9 +359,6 @@ static ngx_str_t ngx_http_error_pages[] }; -static ngx_str_t ngx_http_get_name = { 3, (u_char *) "GET " }; - - ngx_int_t ngx_http_special_response_handler(ngx_http_request_t *r, ngx_int_t error) { @@ -564,7 +561,7 @@ ngx_http_send_error_page(ngx_http_reques if (r->method != NGX_HTTP_HEAD) { r->method = NGX_HTTP_GET; - r->method_name = ngx_http_get_name; + r->method_name = ngx_http_core_get_method; } return ngx_http_internal_redirect(r, &uri, &args); From ru at nginx.com Tue Dec 1 17:21:01 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 01 Dec 2015 17:21:01 +0000 Subject: [nginx] Proxy: improved code readability. Message-ID: details: http://hg.nginx.org/nginx/rev/44122bddd9a1 branches: changeset: 6311:44122bddd9a1 user: Ruslan Ermilov date: Fri Nov 06 15:21:51 2015 +0300 description: Proxy: improved code readability. Do not assume that space character follows the method name, just pass it explicitly. The fuss around it has already proved to be unsafe, see bbdb172f0927 and http://mailman.nginx.org/pipermail/nginx-ru/2013-January/049692.html for details. 
diffstat: src/http/modules/ngx_http_proxy_module.c | 17 +++++------------ 1 files changed, 5 insertions(+), 12 deletions(-) diffs (55 lines): diff -r 9d00576252aa -r 44122bddd9a1 src/http/modules/ngx_http_proxy_module.c --- a/src/http/modules/ngx_http_proxy_module.c Mon Nov 30 12:04:35 2015 +0300 +++ b/src/http/modules/ngx_http_proxy_module.c Fri Nov 06 15:21:51 2015 +0300 @@ -1157,25 +1157,24 @@ ngx_http_proxy_create_request(ngx_http_r if (u->method.len) { /* HEAD was changed to GET to cache response */ method = u->method; - method.len++; } else if (plcf->method.len) { method = plcf->method; } else { method = r->method_name; - method.len++; } ctx = ngx_http_get_module_ctx(r, ngx_http_proxy_module); - if (method.len == 5 - && ngx_strncasecmp(method.data, (u_char *) "HEAD ", 5) == 0) + if (method.len == 4 + && ngx_strncasecmp(method.data, (u_char *) "HEAD", 4) == 0) { ctx->head = 1; } - len = method.len + sizeof(ngx_http_proxy_version) - 1 + sizeof(CRLF) - 1; + len = method.len + 1 + sizeof(ngx_http_proxy_version) - 1 + + sizeof(CRLF) - 1; escape = 0; loc_len = 0; @@ -1294,6 +1293,7 @@ ngx_http_proxy_create_request(ngx_http_r /* the request line */ b->last = ngx_copy(b->last, method.data, method.len); + *b->last++ = ' '; u->uri.data = b->last; @@ -3159,13 +3159,6 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t ngx_conf_merge_str_value(conf->method, prev->method, ""); - if (conf->method.len - && conf->method.data[conf->method.len - 1] != ' ') - { - conf->method.data[conf->method.len] = ' '; - conf->method.len++; - } - ngx_conf_merge_value(conf->upstream.pass_request_headers, prev->upstream.pass_request_headers, 1); ngx_conf_merge_value(conf->upstream.pass_request_body, From ru at nginx.com Tue Dec 1 17:21:03 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 01 Dec 2015 17:21:03 +0000 Subject: [nginx] Stop emulating a space character after r->method_name. 
Message-ID: details: http://hg.nginx.org/nginx/rev/1d696c646d81 branches: changeset: 6312:1d696c646d81 user: Ruslan Ermilov date: Mon Nov 30 12:54:01 2015 +0300 description: Stop emulating a space character after r->method_name. This is an API change. The proxy module was modified to not depend on this in 44122bddd9a1. No known third-party modules seem to depend on this. diffstat: src/http/ngx_http_core_module.c | 2 +- src/http/v2/ngx_http_v2.c | 3 --- 2 files changed, 1 insertions(+), 4 deletions(-) diffs (25 lines): diff -r 44122bddd9a1 -r 1d696c646d81 src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c Fri Nov 06 15:21:51 2015 +0300 +++ b/src/http/ngx_http_core_module.c Mon Nov 30 12:54:01 2015 +0300 @@ -776,7 +776,7 @@ ngx_module_t ngx_http_core_module = { }; -ngx_str_t ngx_http_core_get_method = { 3, (u_char *) "GET " }; +ngx_str_t ngx_http_core_get_method = { 3, (u_char *) "GET" }; void diff -r 44122bddd9a1 -r 1d696c646d81 src/http/v2/ngx_http_v2.c --- a/src/http/v2/ngx_http_v2.c Fri Nov 06 15:21:51 2015 +0300 +++ b/src/http/v2/ngx_http_v2.c Mon Nov 30 12:54:01 2015 +0300 @@ -3294,9 +3294,6 @@ ngx_http_v2_construct_request_line(ngx_h ngx_memcpy(p, ending, sizeof(ending)); - /* some modules expect the space character after method name */ - r->method_name.data = r->request_line.data; - ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "http2 http request line: \"%V\"", &r->request_line); From mdounin at mdounin.ru Tue Dec 1 22:07:42 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 01 Dec 2015 22:07:42 +0000 Subject: [nginx] Style. Message-ID: details: http://hg.nginx.org/nginx/rev/be3aed17689c branches: changeset: 6313:be3aed17689c user: Maxim Dounin date: Wed Dec 02 01:06:54 2015 +0300 description: Style. 
diffstat:

 src/http/ngx_http_upstream.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -641,7 +641,7 @@ ngx_http_upstream_init_request(ngx_http_
 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
 "no port in upstream \"%V\"", host);
 ngx_http_upstream_finalize_request(r, u,
- NGX_HTTP_INTERNAL_SERVER_ERROR);
+ NGX_HTTP_INTERNAL_SERVER_ERROR);
 return;
 }

From aviram at adallom.com Wed Dec 2 07:39:25 2015
From: aviram at adallom.com (Aviram Cohen)
Date: Wed, 2 Dec 2015 07:39:25 +0000
Subject: [BUG] Gunzip module may cause requests to fail
In-Reply-To: <20151201132836.GN74233@mdounin.ru>
References: <2571972.HTVaeomj3b@vbart-workstation> <20151130173717.GJ74233@mdounin.ru> <20151201132836.GN74233@mdounin.ru>
Message-ID: 

Hello!

Maxim, I want to set up a reverse proxy that gunzips responses of a backend server I cannot control (for example, a cache server for a development infrastructure). The response is marked with Content-Length set to zero, so Valentin's comment regarding why the response is empty is irrelevant. It is also irrelevant when the server sends a chunked response with an initial 0-sized chunk. I've seen servers that do both. The HTTP level is okay besides the extra gzip header, and so this is a "little bit pregnant" response ;)

HTTP clients such as web browsers, wget and curl do handle this response, but my reverse proxy doesn't. What do you suggest I do?

Regarding the error description itself, a patch that can be applied to better describe the error seems as easy to me as a fix, so it seems irrelevant as long as you refuse to fix the issue. Obviously I can provide a patch for the issue itself, which I still consider an Nginx bug :)

Regards,
Aviram

-----Original Message-----
From: nginx-devel [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin
Sent: Tuesday,
December 1, 2015 15:29
To: nginx-devel at nginx.org
Subject: Re: [BUG] Gunzip module may cause requests to fail

Hello!

On Tue, Dec 01, 2015 at 07:55:22AM +0000, Aviram Cohen wrote:

> Maxim, great hearing from you.
> I have said that the response is a bit malformed, which means
> to me that even though it looks weird, it can be handled.
> You are right that when Nginx encounters unexpected behavior from the server/client, it should close the connection so that there won't be any damage.
> However, this is an error that Nginx could have easily avoided.

"Just a little bit pregnant" (c)

The response is malformed, and nobody knows how it would look if it weren't malformed.

> Besides, this is not explicit Nginx behavior, as Nginx is not aware that it got an empty response with the gzip header.
> It happens implicitly due to a calculation of data size, which happens to be zero. The error log doesn't show a backend server error, but a zlib error.
> This all suggests an obvious Nginx bug, and doesn't sound to me like the desired Nginx behavior.

The error happens when gunzipping the response, and it's irrelevant whether it was obtained from a backend or from disk. The error says "inflate() returned ... on response end", and it is believed to be clear enough and in line with other errors reported by the gunzip filter. If you think the error message can be improved - feel free to suggest patches.
-- 
Maxim Dounin
http://nginx.org/

_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

From waikeen.woon at onapp.com Wed Dec 2 10:40:03 2015
From: waikeen.woon at onapp.com (Wai Keen Woon)
Date: Wed, 2 Dec 2015 18:40:03 +0800
Subject: [PATCH] HTTP/2: fixed premature connection closure during reload (ticket #626).
Message-ID: <565ECA83.1060905@onapp.com>

# HG changeset patch
# User Wai Keen Woon 
# Date 1449052722 -28800
# Wed Dec 02 18:38:42 2015 +0800
# Node ID 4b7ef34610ebe00eb6a6d52008a48f9864dadd33
# Parent be3aed17689c0edd36c2025ff5c36fe493b68bd7
HTTP/2: fixed premature connection closure during reload (ticket #626).

HTTP/2 transfers may be closed prematurely during nginx reload, which logs "open socket #X left in connection Y" alerts. ngx_add_timer() isn't called when frames are sent faster than they can be created. The worker process therefore exits because there are no more timers scheduled, even though there are more data frames and finalization forthcoming.
diff -r be3aed17689c -r 4b7ef34610eb src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Wed Dec 02 01:06:54 2015 +0300
+++ b/src/http/v2/ngx_http_v2.c Wed Dec 02 18:38:42 2015 +0800
@@ -535,7 +535,7 @@
 c->tcp_nodelay = NGX_TCP_NODELAY_SET;
 }

- if (cl) {
+ if (cl || h2c->processing) {
 ngx_add_timer(wev, clcf->send_timeout);
 } else {

From mdounin at mdounin.ru Wed Dec 2 18:25:33 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Dec 2015 21:25:33 +0300
Subject: [BUG] Gunzip module may cause requests to fail
In-Reply-To: 
References: <2571972.HTVaeomj3b@vbart-workstation> <20151130173717.GJ74233@mdounin.ru> <20151201132836.GN74233@mdounin.ru>
Message-ID: <20151202182532.GX74233@mdounin.ru>

Hello!

On Wed, Dec 02, 2015 at 07:39:25AM +0000, Aviram Cohen wrote:

> Maxim, I want to set up a reverse proxy that gunzips responses of
> a backend server I cannot control (for example, a cache server
> for a development infrastructure).
> The response is marked with Content-Length set to zero, so
> Valentin's comment regarding why the response is empty is
> irrelevant. It is also irrelevant when the server sends a
> chunked response with an initial 0-sized chunk. I've seen
> servers that do both.
> The HTTP level is okay besides the extra gzip header, and so
> this is a "little bit pregnant" response ;)

Both cases described indicate that the response was corrupted earlier, and then an attempt to recover from the corruption was made, likely unintended. And, as previously said, nobody knows how the response would look if it weren't malformed. Though in the case when Content-Length is known in advance, it is possible to improve the current behaviour by logging an error earlier and passing the response through unmodified. Patches are welcome.

> HTTP clients such as web browsers, wget and curl do handle this
> response, but my reverse proxy doesn't.
> What do you suggest I do?

The problem is that most programs fail to check return codes properly.
This is not a plus, though, but rather a bug in the relevant programs. And these bugs make it very easy to confuse valid responses with corrupted ones, as well as to incorrectly gunzip valid responses. Trying to make nginx bug-to-bug compatible with browsers isn't likely to be beneficial: producing the same results as careless browsers do would make it impossible for proper clients to detect errors, and that is what we are trying to avoid.

As for what to do, an obvious solution would be, as already suggested, to disable gunzipping of responses from sources you can't control that are known to return malformed responses.

> Regarding the error description itself, a patch that can be applied
> to better describe the error seems as easy to me as a fix, so
> it seems irrelevant as long as you refuse to fix the issue.
> Obviously I can provide a patch for the issue itself, which I
> still consider an Nginx bug :)

If you think you know how to improve things - feel free to provide patches. Though as we already tried to explain several times, it's not a bug. At most, it's suboptimal error handling.

-- 
Maxim Dounin
http://nginx.org/

From emptydocks at gmail.com Wed Dec 2 21:22:47 2015
From: emptydocks at gmail.com (Koby Nachmany)
Date: Wed, 2 Dec 2015 23:22:47 +0200
Subject: Configurable 'npoints' for ngx_http_upstream_hash
In-Reply-To: 
References: 
Message-ID: 

> 
> Hello!
> 
> I have a use case for an even, consistent balancing of a caching-layer
> upstream cluster, i.e. using the "Ketama algorithm".
> The current consistent hashing implementation in ngx_http_upstream_hash is
> hard-coded to 160 vbuckets, and real-world results show a 20% variance in
> balancing, which is not acceptable in our case.
> 
> Following is a patch (thanks to agentzh) that allows a configurable
> vbuckets configuration param. The default remains the same: 160.
> Please consider pushing this "upstream" .
No pun intended ;) > > Koby N > > --- a/src/http/modules/ngx_http_upstream_hash_module.c 2015-07-15 > 00:46:06.000000000 +0800 > +++ b/src/http/modules/ngx_http_upstream_hash_module.c 2015-10-11 > 22:26:47.952670175 +0800 > @@ -23,6 +23,7 @@ typedef struct { > > > typedef struct { > + ngx_uint_t npoints; > ngx_http_complex_value_t key; > ngx_http_upstream_chash_points_t *points; > } ngx_http_upstream_hash_srv_conf_t; > @@ -66,7 +67,7 @@ static char *ngx_http_upstream_hash(ngx_ > static ngx_command_t ngx_http_upstream_hash_commands[] = { > > { ngx_string("hash"), > - NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, > + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE123, > ngx_http_upstream_hash, > NGX_HTTP_SRV_CONF_OFFSET, > 0, > @@ -296,7 +297,10 @@ ngx_http_upstream_init_chash(ngx_conf_t > us->peer.init = ngx_http_upstream_init_chash_peer; > > peers = us->peer.data; > - npoints = peers->total_weight * 160; > + > + hcf = ngx_http_conf_upstream_srv_conf(us, > ngx_http_upstream_hash_module); > + > + npoints = peers->total_weight * hcf->npoints; > > size = sizeof(ngx_http_upstream_chash_points_t) > + sizeof(ngx_http_upstream_chash_point_t) * (npoints - 1); > @@ -355,7 +359,7 @@ ngx_http_upstream_init_chash(ngx_conf_t > ngx_crc32_update(&base_hash, port, port_len); > > prev_hash.value = 0; > - npoints = peer->weight * 160; > + npoints = peer->weight * hcf->npoints; > > for (j = 0; j < npoints; j++) { > hash = base_hash; > @@ -391,7 +395,6 @@ ngx_http_upstream_init_chash(ngx_conf_t > > points->number = i + 1; > > - hcf = ngx_http_conf_upstream_srv_conf(us, > ngx_http_upstream_hash_module); > hcf->points = points; > > return NGX_OK; > @@ -657,6 +660,19 @@ ngx_http_upstream_hash(ngx_conf_t *cf, n > } else if (ngx_strcmp(value[2].data, "consistent") == 0) { > uscf->peer.init_upstream = ngx_http_upstream_init_chash; > > + if (cf->args->nelts > 3) { > + hcf->npoints = ngx_atoi(value[3].data, value[3].len); > + > + if (hcf->npoints == (ngx_uint_t) NGX_ERROR) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 
> + "invalid npoints parameter \"%V\"", > &value[3]); > + return NGX_CONF_ERROR; > + } > + > + } else { > + hcf->npoints = 160; > + } > + > } else { > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > "invalid parameter \"%V\"", &value[2]); > -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotrsikora at google.com Thu Dec 3 03:18:39 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Wed, 02 Dec 2015 19:18:39 -0800 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() Message-ID: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> # HG changeset patch # User Piotr Sikora # Date 1449112639 28800 # Wed Dec 02 19:17:19 2015 -0800 # Branch patch1 # Node ID 60321d69523e74791e541430ecf61ae404760dde # Parent be3aed17689c0edd36c2025ff5c36fe493b68bd7 Core: use sysconf(_SC_PAGESIZE) instead of getpagesize(). Signed-off-by: Piotr Sikora diff -r be3aed17689c -r 60321d69523e src/os/unix/ngx_posix_init.c --- a/src/os/unix/ngx_posix_init.c +++ b/src/os/unix/ngx_posix_init.c @@ -32,6 +32,7 @@ ngx_os_io_t ngx_os_io = { ngx_int_t ngx_os_init(ngx_log_t *log) { + long value; ngx_uint_t n; #if (NGX_HAVE_OS_SPECIFIC_INIT) @@ -44,7 +45,14 @@ ngx_os_init(ngx_log_t *log) return NGX_ERROR; } - ngx_pagesize = getpagesize(); + value = sysconf(_SC_PAGESIZE); + if (value == -1) { + ngx_log_error(NGX_LOG_ALERT, log, errno, + "sysconf(_SC_PAGESIZE) failed"); + return NGX_ERROR; + } + + ngx_pagesize = (ngx_uint_t) value; ngx_cacheline_size = NGX_CPU_CACHE_LINE; for (n = ngx_pagesize; n >>= 1; ngx_pagesize_shift++) { /* void */ } From piotrsikora at google.com Thu Dec 3 03:18:49 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Wed, 02 Dec 2015 19:18:49 -0800 Subject: [PATCH] Core: fix typo in error message Message-ID: <049758c0f1f160a68e3a.1449112729@piotrsikora.sfo.corp.google.com> # HG changeset patch # User Piotr Sikora # Date 1449112639 28800 # Wed Dec 02 19:17:19 2015 -0800 # Branch patch2 # Node ID 
049758c0f1f160a68e3a8d0c85896d24ac6e77d6
# Parent be3aed17689c0edd36c2025ff5c36fe493b68bd7
Core: fix typo in error message.

Signed-off-by: Piotr Sikora 

diff -r be3aed17689c -r 049758c0f1f1 src/os/unix/ngx_posix_init.c
--- a/src/os/unix/ngx_posix_init.c
+++ b/src/os/unix/ngx_posix_init.c
@@ -63,7 +63,7 @@ ngx_os_init(ngx_log_t *log)
 if (getrlimit(RLIMIT_NOFILE, &rlmt) == -1) {
 ngx_log_error(NGX_LOG_ALERT, log, errno,
- "getrlimit(RLIMIT_NOFILE) failed)");
+ "getrlimit(RLIMIT_NOFILE) failed");
 return NGX_ERROR;
 }

From wilson.judson at gmail.com Thu Dec 3 07:28:43 2015
From: wilson.judson at gmail.com (Judson Wilson)
Date: Wed, 2 Dec 2015 23:28:43 -0800
Subject: ngx_ssl_shutdown() using SSL_shutdown() incorrectly?
Message-ID: 

On inspecting some code for academic reasons, I noticed that ngx_ssl_shutdown() looks like it might be using SSL_shutdown() incorrectly? I haven't actually "used" the code, and have not tested it or seen any symptoms.

The first hint of a problem is the following comment:

/* SSL_shutdown() never returns -1, on error it returns 0 */

which does not match the OpenSSL man page very well, or the definition of the OpenSSL function ssl3_shutdown().

Second, it appears that with the way SSL_set_shutdown() is being used to stuff flags into the SSL state, SSL_shutdown() should be called until it returns 1, which may take multiple calls, even if there isn't a WANT_READ or WANT_WRITE condition upon returning -1 (or 0?). Generally one call is used to send a close_notify, which returns 0 (assuming SSL_set_shutdown hasn't stuffed in SSL_RECEIVED_SHUTDOWN), and further calls won't return 1 until it receives close_notify.

Quite possibly I am missing some assumptions, which would make good comments in the code. I hope this is useful.

- Judson

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From igor at sysoev.ru Thu Dec 3 07:39:44 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 3 Dec 2015 10:39:44 +0300 Subject: ngx_ssl_shutdown() using SSL_shutdown() incorrectly? In-Reply-To: References: Message-ID: <4E62C5F7-9851-49B2-BB21-C46EF00F6FBC@sysoev.ru> On 03 Dec 2015, at 10:28, Judson Wilson wrote: > On inspecting some code for academic reasons, I noticed that ngx_ssl_shutdown() looks like it might be using SSL_shutdown() incorrectly? > > I haven't actually "used" the code, and have not tested it or seen any symptoms. > > > The first hint of a problem is the following comment: > > /* SSL_shutdown() never returns -1, on error it returns 0 */ > > which does not match the OpenSSL man page very well, or the OpenSSL function ssl3_shutdown() definition. SSL_shutdown() never returned -1 prior to 0.9.8m version despite man page. > Second, it appears that with the way SSL_set_shutdown() is being used to stuff flags into the SSL state, SSL_shutdown() should be called until it returns 1, which may take multiple calls, even if there isn't a WANT_READ or WANT_WRITE condition upon returning -1 (or 0?). Generally one call is used to send a close_notify, which returns 0 (assuming SSL_set_shutdown hasn't stuffed in SSL_RECEIVED_SHUTDOWN), and further calls wont return 1 until it receives close_notify. > > Quite possibly I am missing some assumptions, which would make good comments in the code. > > I hope this is useful. Now code and the comment should be changed, thank you. -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wilson.judson at gmail.com Thu Dec 3 07:54:34 2015 From: wilson.judson at gmail.com (Judson Wilson) Date: Wed, 2 Dec 2015 23:54:34 -0800 Subject: ngx_ssl_shutdown() using SSL_shutdown() incorrectly? 
In-Reply-To: <4E62C5F7-9851-49B2-BB21-C46EF00F6FBC@sysoev.ru> References: <4E62C5F7-9851-49B2-BB21-C46EF00F6FBC@sysoev.ru> Message-ID: > SSL_shutdown() never returned -1 prior to 0.9.8m version despite man page. Ah, I didn't check that far back. I'll take this opportunity to remind everyone who might read this that support for all versions of OpenSSL before 1.0.1 ceases at the end of this month. https://openssl.org/policies/releasestrat.html On Wed, Dec 2, 2015 at 11:39 PM, Igor Sysoev wrote: > On 03 Dec 2015, at 10:28, Judson Wilson wrote: > > On inspecting some code for academic reasons, I noticed that > ngx_ssl_shutdown() looks like it might be using SSL_shutdown() incorrectly? > > I haven't actually "used" the code, and have not tested it or seen any > symptoms. > > > The first hint of a problem is the following comment: > > /* SSL_shutdown() never returns -1, on error it returns 0 */ > > which does not match the OpenSSL man page very well, or the OpenSSL > function ssl3_shutdown() definition. > > > SSL_shutdown() never returned -1 prior to 0.9.8m version despite man page. > > Second, it appears that with the way SSL_set_shutdown() is being used to > stuff flags into the SSL state, SSL_shutdown() should be called until it > returns 1, which may take multiple calls, even if there isn't a WANT_READ > or WANT_WRITE condition upon returning -1 (or 0?). Generally one call is > used to send a close_notify, which returns 0 (assuming SSL_set_shutdown > hasn't stuffed in SSL_RECEIVED_SHUTDOWN), and further calls wont return 1 > until it receives close_notify. > > Quite possibly I am missing some assumptions, which would make good > comments in the code. > > I hope this is useful. > > > Now code and the comment should be changed, thank you. 
> > > -- > Igor Sysoev > http://nginx.com > > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kipras at zenedge.com Thu Dec 3 10:55:15 2015 From: kipras at zenedge.com (=?UTF-8?Q?Kipras_Mancevi=C4=8Dius?=) Date: Thu, 3 Dec 2015 12:55:15 +0200 Subject: ngx_http_upstream_copy_allow_ranges() issue when using ModSecurity Message-ID: Hey everyone, looks like nginx versions >= 1.7.7 have issues with the modsecurity module, because of the new proxy_force_ranges directive. The problem is that modsecurity calls ngx_http_upstream_header_t->copy_handler() for all ngx_http_upstream_headers_in headers specified in ngx_http_upstream. And in ngx_http_upstream_copy_allow_ranges() the check for that configuration value [1] results in a segfault, because r->upstream->conf is probably NULL at that point, which causes nginx to crash. One way to work around this is to set "proxy_force_ranges" to on in nginx config. However another simple fix is to check if r->upstream->conf exists, before accessing r->upstream->conf->force_ranges. And this shouldn't change the behavior of nginx (which changing the value of this flag does). More info: see @driehuls comment in https://github.com/SpiderLabs/ModSecurity/issues/823 Also, here is a backtrace of the crash, taken on nginx 1.9.3.2: $ gdb sbin/nginx cores/core > GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1 > Copyright (C) 2014 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later < > http://gnu.org/licenses/gpl.html> > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-linux-gnu". > Type "show configuration" for configuration details. 
> For bug reporting instructions, please see: > . > Find the GDB manual and other documentation resources online at: > . > For help, type "help". > Type "apropos word" to search for commands related to "word"... > Reading symbols from sbin/nginx...done. > > [New LWP 12171] > [Thread debugging using libthread_db enabled] > Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". > Core was generated by `nginx: worker > process '. > Program terminated with signal SIGSEGV, Segmentation fault. > #0 0x00000000004682e6 in ngx_http_upstream_copy_allow_ranges (r=0xd11be0, > h=0x7fff38119ff0, offset=) > at src/http/ngx_http_upstream.c:4780 > > 4780 if (r->upstream->conf->force_ranges) { > (gdb) bt > #0 0x00000000004682e6 in ngx_http_upstream_copy_allow_ranges (r=0xd11be0, > h=0x7fff38119ff0, offset=) > at src/http/ngx_http_upstream.c:4780 > #1 0x0000000000d11be0 in ?? () > #2 0x000000000000000d in ?? () > #3 0x00000000004fab3b in ngx_http_modsecurity_save_headers_out_visitor > (data=0xd11be0, key=, value=0xf29c5e "bytes") > at > /vagrant/openresty/vendor/mod_security/nginx/modsecurity/ngx_http_modsecurity.c:854 > #4 0x00007f924cd21d7d in apr_table_vdo () from > /usr/lib/x86_64-linux-gnu/libapr-1.so.0 > #5 0x00007f924cd21e32 in apr_table_do () from > /usr/lib/x86_64-linux-gnu/libapr-1.so.0 > #6 0x00000000004fb84e in ngx_http_modsecurity_save_headers_out > (r=0xd11be0) > at > /vagrant/openresty/vendor/mod_security/nginx/modsecurity/ngx_http_modsecurity.c:792 > #7 ngx_http_modsecurity_body_filter (r=, in= out>) > at > /vagrant/openresty/vendor/mod_security/nginx/modsecurity/ngx_http_modsecurity.c:1413 > #8 0x00000000004d1194 in ngx_http_lua_capture_body_filter (r=0xd11be0, > in=0x7fff3811a350) > at ../ngx_lua-0.9.19/src/ngx_http_lua_capturefilter.c:133 > #9 0x0000000000428015 in ngx_output_chain (ctx=ctx at entry=0xf29b10, > in=in at entry=0x7fff3811a350) at src/core/ngx_output_chain.c:74 > #10 0x000000000045d9d3 in ngx_http_copy_filter (r=0xd11be0, > 
in=0x7fff3811a350) at src/http/ngx_http_copy_filter_module.c:152 > #11 0x00000000004531eb in ngx_http_output_filter (r=r at entry=0xd11be0, > in=in at entry=0x7fff3811a350) > at src/http/ngx_http_core_module.c:1969 > #12 0x0000000000456bd3 in ngx_http_send_special (r=r at entry=0xd11be0, > flags=flags at entry=1) at src/http/ngx_http_request.c:3355 > #13 0x00000000004673e5 in ngx_http_upstream_finalize_request (r=0xd11be0, > u=0xd570c8, rc=0) at src/http/ngx_http_upstream.c:4071 > #14 0x00000000004681c5 in ngx_http_upstream_process_request (r=0xd11be0, > u=0xd570c8) at src/http/ngx_http_upstream.c:3667 > #15 0x000000000046a583 in ngx_http_upstream_send_response (u=0xd570c8, > r=0xd11be0) at src/http/ngx_http_upstream.c:2963 > #16 ngx_http_upstream_process_header (r=0xd11be0, u=0xd570c8) at > src/http/ngx_http_upstream.c:2165 > #17 0x00000000004674b9 in ngx_http_upstream_handler (ev=) > at src/http/ngx_http_upstream.c:1092 > #18 0x000000000043d630 in ngx_event_process_posted (cycle=cycle at entry=0xd0bbb0, > posted=0x7fd780 ) > at src/event/ngx_event_posted.c:33 > #19 0x000000000043d210 in ngx_process_events_and_timers (cycle=cycle at entry=0xd0bbb0) > at src/event/ngx_event.c:259 > #20 0x0000000000442a8d in ngx_worker_process_cycle (cycle=cycle at entry=0xd0bbb0, > data=data at entry=0x0) > at src/os/unix/ngx_process_cycle.c:769 > #21 0x0000000000441534 in ngx_spawn_process (cycle=cycle at entry=0xd0bbb0, > proc=0x4429e0 , data=0x0, > name=0x54b804 "worker process", respawn=respawn at entry=0) at > src/os/unix/ngx_process.c:198 > #22 0x0000000000443b32 in ngx_reap_children (cycle=0xd0bbb0) at > src/os/unix/ngx_process_cycle.c:621 > #23 ngx_master_process_cycle (cycle=cycle at entry=0xd0bbb0) at > src/os/unix/ngx_process_cycle.c:174 > #24 0x00000000004236cf in main (argc=, argv= out>) at src/core/nginx.c:415 > [1] https://github.com/nginx/nginx/blob/master/src/http/ngx_http_upstream.c#L4788 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Thu Dec 3 14:05:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Dec 2015 17:05:29 +0300 Subject: ngx_http_upstream_copy_allow_ranges() issue when using ModSecurity In-Reply-To: References: Message-ID: <20151203140529.GA74233@mdounin.ru> Hello! On Thu, Dec 03, 2015 at 12:55:15PM +0200, Kipras Mancevi?ius wrote: > Hey everyone, > > looks like nginx versions >= 1.7.7 have issues with the modsecurity module, > because of the new proxy_force_ranges directive. The problem is that > modsecurity calls ngx_http_upstream_header_t->copy_handler() for all > ngx_http_upstream_headers_in headers specified in ngx_http_upstream. > > And in ngx_http_upstream_copy_allow_ranges() the check for that > configuration value [1] results in a segfault, because r->upstream->conf is > probably NULL at that point, which causes nginx to crash. > > One way to work around this is to set "proxy_force_ranges" to on in nginx > config. However another simple fix is to check if r->upstream->conf exists, > before accessing r->upstream->conf->force_ranges. And this shouldn't change > the behavior of nginx (which changing the value of this flag does). > > More info: see @driehuls comment in > https://github.com/SpiderLabs/ModSecurity/issues/823 [...] What ModSecurity does looks like a hack abusing part of the upstream module, and the segmentation fault is an expected result of the approach taken. ModSecurity module should be rewritten to avoid the hack, or the hack should be updated to the changes in nginx. In the latter case more segfaults are expected in the future. Just in case, here is a (closed invalid) ticket in nginx trac about this: https://trac.nginx.org/nginx/ticket/690 -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 3 17:54:25 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 03 Dec 2015 17:54:25 +0000 Subject: [nginx] Style: NGX_PTR_SIZE replaced with sizeof(void *). 
Message-ID: details: http://hg.nginx.org/nginx/rev/fcbac620ae83 branches: changeset: 6314:fcbac620ae83 user: Maxim Dounin date: Thu Dec 03 20:06:45 2015 +0300 description: Style: NGX_PTR_SIZE replaced with sizeof(void *). The NGX_PTR_SIZE macro is only needed in preprocessor directives where it's not possible to use sizeof(). diffstat: src/core/ngx_string.c | 2 +- src/http/ngx_http_core_module.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diffs (24 lines): diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c --- a/src/core/ngx_string.c +++ b/src/core/ngx_string.c @@ -410,7 +410,7 @@ ngx_vslprintf(u_char *buf, u_char *last, hex = 2; sign = 0; zero = '0'; - width = NGX_PTR_SIZE * 2; + width = 2 * sizeof(void *); break; case 'c': diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -3503,7 +3503,7 @@ ngx_http_core_merge_srv_conf(ngx_conf_t /* TODO: it does not merge, it inits only */ ngx_conf_merge_size_value(conf->connection_pool_size, - prev->connection_pool_size, NGX_PTR_SIZE * 64); + prev->connection_pool_size, 64 * sizeof(void *)); ngx_conf_merge_size_value(conf->request_pool_size, prev->request_pool_size, 4096); ngx_conf_merge_msec_value(conf->client_header_timeout, From mdounin at mdounin.ru Thu Dec 3 17:54:28 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 03 Dec 2015 17:54:28 +0000 Subject: [nginx] Core: fix typo in error message. Message-ID: details: http://hg.nginx.org/nginx/rev/cb31017e961b branches: changeset: 6315:cb31017e961b user: Piotr Sikora date: Wed Dec 02 19:17:19 2015 -0800 description: Core: fix typo in error message. 
Signed-off-by: Piotr Sikora diffstat: src/os/unix/ngx_posix_init.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff --git a/src/os/unix/ngx_posix_init.c b/src/os/unix/ngx_posix_init.c --- a/src/os/unix/ngx_posix_init.c +++ b/src/os/unix/ngx_posix_init.c @@ -63,7 +63,7 @@ ngx_os_init(ngx_log_t *log) if (getrlimit(RLIMIT_NOFILE, &rlmt) == -1) { ngx_log_error(NGX_LOG_ALERT, log, errno, - "getrlimit(RLIMIT_NOFILE) failed)"); + "getrlimit(RLIMIT_NOFILE) failed"); return NGX_ERROR; } From mdounin at mdounin.ru Thu Dec 3 17:54:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Dec 2015 20:54:44 +0300 Subject: [PATCH] Core: fix typo in error message In-Reply-To: <049758c0f1f160a68e3a.1449112729@piotrsikora.sfo.corp.google.com> References: <049758c0f1f160a68e3a.1449112729@piotrsikora.sfo.corp.google.com> Message-ID: <20151203175444.GE74233@mdounin.ru> Hello! On Wed, Dec 02, 2015 at 07:18:49PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1449112639 28800 > # Wed Dec 02 19:17:19 2015 -0800 > # Branch patch2 > # Node ID 049758c0f1f160a68e3a8d0c85896d24ac6e77d6 > # Parent be3aed17689c0edd36c2025ff5c36fe493b68bd7 > Core: fix typo in error message. > > Signed-off-by: Piotr Sikora > > diff -r be3aed17689c -r 049758c0f1f1 src/os/unix/ngx_posix_init.c > --- a/src/os/unix/ngx_posix_init.c > +++ b/src/os/unix/ngx_posix_init.c > @@ -63,7 +63,7 @@ ngx_os_init(ngx_log_t *log) > > if (getrlimit(RLIMIT_NOFILE, &rlmt) == -1) { > ngx_log_error(NGX_LOG_ALERT, log, errno, > - "getrlimit(RLIMIT_NOFILE) failed)"); > + "getrlimit(RLIMIT_NOFILE) failed"); > return NGX_ERROR; > } Committed, thanks. 
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 3 18:21:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Dec 2015 21:21:29 +0300 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() In-Reply-To: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> References: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> Message-ID: <20151203182129.GF74233@mdounin.ru> Hello! On Wed, Dec 02, 2015 at 07:18:39PM -0800, Piotr Sikora wrote: > # HG changeset patch > # User Piotr Sikora > # Date 1449112639 28800 > # Wed Dec 02 19:17:19 2015 -0800 > # Branch patch1 > # Node ID 60321d69523e74791e541430ecf61ae404760dde > # Parent be3aed17689c0edd36c2025ff5c36fe493b68bd7 > Core: use sysconf(_SC_PAGESIZE) instead of getpagesize(). Any practical reasons for the change? While getpagesize() isn't required by the current issue of POSIX, it is easier to use and used to be more widely available than sysconf(_SC_PAGESIZE). > > Signed-off-by: Piotr Sikora > > diff -r be3aed17689c -r 60321d69523e src/os/unix/ngx_posix_init.c > --- a/src/os/unix/ngx_posix_init.c > +++ b/src/os/unix/ngx_posix_init.c > @@ -32,6 +32,7 @@ ngx_os_io_t ngx_os_io = { > ngx_int_t > ngx_os_init(ngx_log_t *log) > { > + long value; > ngx_uint_t n; > > #if (NGX_HAVE_OS_SPECIFIC_INIT) > @@ -44,7 +45,14 @@ ngx_os_init(ngx_log_t *log) > return NGX_ERROR; > } > > - ngx_pagesize = getpagesize(); > + value = sysconf(_SC_PAGESIZE); > + if (value == -1) { > + ngx_log_error(NGX_LOG_ALERT, log, errno, > + "sysconf(_SC_PAGESIZE) failed"); > + return NGX_ERROR; > + } > + > + ngx_pagesize = (ngx_uint_t) value; The cast looks unneeded. 
> ngx_cacheline_size = NGX_CPU_CACHE_LINE; > > for (n = ngx_pagesize; n >>= 1; ngx_pagesize_shift++) { /* void */ } > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 3 19:03:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Dec 2015 22:03:16 +0300 Subject: [PATCH]add proxy_protocol_port variable for rfc6302 In-Reply-To: References: Message-ID: <20151203190316.GG74233@mdounin.ru> Hello! On Tue, Dec 01, 2015 at 08:08:30AM +0900, junpei yoshino wrote: > # HG changeset patch > # User Junpei Yoshino > # Date 1446723407 -32400 > # Thu Nov 05 20:36:47 2015 +0900 > # Node ID 59cadccedf402ec325b078cb72a284465639e0fe > # Parent 4ccb37b04454dec6afb9476d085c06aea00adaa0 > Http: add proxy_protocol_port variable for rfc6302 > > Logging source port is recommended in rfc6302. > use case > logging > sending information by http request headers Proper source port logging is something nginx doesn't currently do well in various places related to addresses obtained from external sources, including the realip module, X-Forwarded-For parsing, [ha]proxy protocol and so on. Improving port handling in various related areas is something that should be done, though it needs to be handled more or less consistently in all affected areas. Providing ports in some places but not others can be misleading for users. And that's why the patch isn't yet reviewed - sorry for the delay, but we need someone to do the rest of the work. Below are some comments about the patch itself. 
> diff -r 4ccb37b04454 -r 59cadccedf40 src/core/ngx_connection.h > --- a/src/core/ngx_connection.h Fri Oct 30 21:43:30 2015 +0300 > +++ b/src/core/ngx_connection.h Thu Nov 05 20:36:47 2015 +0900 > @@ -146,6 +146,7 @@ > ngx_str_t addr_text; > > ngx_str_t proxy_protocol_addr; > + ngx_str_t proxy_protocol_port; Using a string (which takes 2 pointers) for a 16 bit port value seems to be excessive. It should be possible to just store a number instead. [...] > @@ -71,8 +71,56 @@ > ngx_memcpy(c->proxy_protocol_addr.data, addr, len); > c->proxy_protocol_addr.len = len; > > + for ( ;; ) { > + if (p == last) { > + goto invalid; > + } > + > + ch = *p++; > + > + if (ch == ' ') { > + break; > + } > + > + if (ch != ':' && ch != '.' > + && (ch < 'a' || ch > 'f') > + && (ch < 'A' || ch > 'F') > + && (ch < '0' || ch > '9')) > + { > + goto invalid; > + } > + } This is probably excessive. Just using a space as a separator should be enough, as we aren't using the destination address. [...] > ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, > "PROXY protocol address: \"%V\"", > &c->proxy_protocol_addr); > + ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, > + "PROXY protocol port: \"%V\"", &c->proxy_protocol_port); Logging the address and the port at once should be enough. [...] -- Maxim Dounin http://nginx.org/ From piotrsikora at google.com Fri Dec 4 03:24:51 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Thu, 3 Dec 2015 19:24:51 -0800 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() In-Reply-To: <20151203182129.GF74233@mdounin.ru> References: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> <20151203182129.GF74233@mdounin.ru> Message-ID: Hey Maxim, > Any practical reasons for the change? Kind of... OS X doesn't expose getpagesize() when it's compiled with _POSIX_C_SOURCE >= 200112L and no other define (not even _DARWIN_C_SOURCE) can change that. 
Side note: I considered adding _DARWIN_C_SOURCE to ngx_darwin_config.h, like it's done for _GNU_SOURCE, but I decided against it, because I found the ability to enforce strict POSIX interface (even if it doesn't really work right now) quite handy... and it's off by default, so someone needs to force it anyway. > While getpagesize() isn't required by the current issue of POSIX, > it is easier to use and used to be more widely available than > sysconf(_SC_PAGESIZE). Considering that sysconf(_SC_PAGESIZE) is part of POSIX and getpagesize() isn't, I don't believe this is true anymore. > The cast looks unneeded. It's signed-to-unsigned conversion... it's a little bit on the pedantic side, but I wouldn't say it's unneeded... and it follows style from similar cast just few lines below. Best regards, Piotr Sikora From ru at nginx.com Fri Dec 4 07:00:45 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 4 Dec 2015 10:00:45 +0300 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() In-Reply-To: References: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> <20151203182129.GF74233@mdounin.ru> Message-ID: <20151204070045.GB53719@lo0.su> On Thu, Dec 03, 2015 at 07:24:51PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Any practical reasons for the change? > > Kind of... OS X doesn't expose getpagesize() when it's compiled with > _POSIX_C_SOURCE >= 200112L and no other define (not even > _DARWIN_C_SOURCE) can change that. > > Side note: I considered adding _DARWIN_C_SOURCE to > ngx_darwin_config.h, like it's done for _GNU_SOURCE, but I decided > against it, because I found the ability to enforce strict POSIX > interface (even if it doesn't really work right now) quite handy... > and it's off by default, so someone needs to force it anyway. > > > While getpagesize() isn't required by the current issue of POSIX, > > it is easier to use and used to be more widely available than > > sysconf(_SC_PAGESIZE). 
> > Considering that sysconf(_SC_PAGESIZE) is part of POSIX and > getpagesize() isn't, I don't believe this is true anymore. > > > The cast looks unneeded. > > It's signed-to-unsigned conversion... it's a little bit on the > pedantic side, but I wouldn't say it's unneeded... and it follows > style from similar cast just few lines below. diff --git a/auto/unix b/auto/unix --- a/auto/unix +++ b/auto/unix @@ -842,6 +842,16 @@ ngx_feature_test="sysconf(_SC_NPROCESSOR . auto/feature +ngx_feature="sysconf(_SC_PAGESIZE)" +ngx_feature_name="NGX_HAVE_SC_PAGESIZE" +ngx_feature_run=no +ngx_feature_incs= +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="sysconf(_SC_PAGESIZE)" +. auto/feature + + ngx_feature="openat(), fstatat()" ngx_feature_name="NGX_HAVE_OPENAT" ngx_feature_run=no diff --git a/src/os/unix/ngx_posix_init.c b/src/os/unix/ngx_posix_init.c --- a/src/os/unix/ngx_posix_init.c +++ b/src/os/unix/ngx_posix_init.c @@ -44,7 +44,12 @@ ngx_os_init(ngx_log_t *log) return NGX_ERROR; } +#if (NGX_HAVE_SC_PAGESIZE) + ngx_pagesize = sysconf(_SC_PAGESIZE); +#else ngx_pagesize = getpagesize(); +#endif + ngx_cacheline_size = NGX_CPU_CACHE_LINE; for (n = ngx_pagesize; n >>= 1; ngx_pagesize_shift++) { /* void */ } From piotrsikora at google.com Fri Dec 4 07:08:57 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Thu, 3 Dec 2015 23:08:57 -0800 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() In-Reply-To: <20151204070045.GB53719@lo0.su> References: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> <20151203182129.GF74233@mdounin.ru> <20151204070045.GB53719@lo0.su> Message-ID: Hey Ruslan, > +ngx_feature="sysconf(_SC_PAGESIZE)" > +ngx_feature_name="NGX_HAVE_SC_PAGESIZE" > +ngx_feature_run=no > +ngx_feature_incs= > +ngx_feature_path= > +ngx_feature_libs= > +ngx_feature_test="sysconf(_SC_PAGESIZE)" > +. 
auto/feature This option is required by POSIX and it's not a custom extension available only on a few operating systems, so it doesn't make sense to add a feature check for it, IMHO. Unless you found a (still supported) system that doesn't have it? Best regards, Piotr Sikora From mdounin at mdounin.ru Fri Dec 4 12:19:37 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Dec 2015 15:19:37 +0300 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() In-Reply-To: References: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> <20151203182129.GF74233@mdounin.ru> Message-ID: <20151204121936.GH74233@mdounin.ru> Hello! On Thu, Dec 03, 2015 at 07:24:51PM -0800, Piotr Sikora wrote: > Hey Maxim, > > > Any practical reasons for the change? > > Kind of... OS X doesn't expose getpagesize() when it's compiled with > _POSIX_C_SOURCE >= 200112L and no other define (not even > _DARWIN_C_SOURCE) can change that. > > Side note: I considered adding _DARWIN_C_SOURCE to > ngx_darwin_config.h, like it's done for _GNU_SOURCE, but I decided > against it, because I found the ability to enforce strict POSIX > interface (even if it doesn't really work right now) quite handy... > and it's off by default, so someone needs to force it anyway. So the real reason for the change is that you are trying to define _POSIX_C_SOURCE for some unknown reason, and it breaks things, right? I suspect there are lots of things that will be broken by defining _POSIX_C_SOURCE, and there should be a really good reason to define it. > > While getpagesize() isn't required by the current issue of POSIX, > > it is easier to use and used to be more widely available than > > sysconf(_SC_PAGESIZE). > > Considering that sysconf(_SC_PAGESIZE) is part of POSIX and > getpagesize() isn't, I don't believe this is true anymore. AFAIK, _SC_PAGESIZE is not available on HP-UX. Though I have no access to one now to test. > > The cast looks unneeded. > > It's signed-to-unsigned conversion... 
it's a little bit on the > pedantic side, but I wouldn't say it's unneeded... and it follows > style from similar cast just few lines below. Signed-to-unsigned conversion is perfectly defined in C for all possible values. The cast below is unknown-to-signed, which can overflow with undefined result, and thus in some cases causes warnings. General approach is to only add casts where they are needed, that is, where otherwise the behaviour will be wrong and/or a warning will be generated. -- Maxim Dounin http://nginx.org/ From piotrsikora at google.com Fri Dec 4 22:43:11 2015 From: piotrsikora at google.com (Piotr Sikora) Date: Fri, 4 Dec 2015 14:43:11 -0800 Subject: [PATCH] Core: use sysconf(_SC_PAGESIZE) instead of getpagesize() In-Reply-To: <20151204121936.GH74233@mdounin.ru> References: <60321d69523e74791e54.1449112719@piotrsikora.sfo.corp.google.com> <20151203182129.GF74233@mdounin.ru> <20151204121936.GH74233@mdounin.ru> Message-ID: Hey Maxim, > So the real reason for the change is that you are trying to define > _POSIX_C_SOURCE for some unknown reason, and it breaks things, > right? > > I suspect there are lots of things that will be broken by defining > _POSIX_C_SOURCE, and there should be really good reason to define > it. I want to be able to compile nginx as a strictly conforming POSIX application, which doesn't depend on any GNU-ism and/or BSD-ism. For me, this is a really good reason, especially when virtually all operating systems support defining _POSIX_C_SOURCE and/or _XOPEN_SOURCE for exactly this purpose. Yes, getpagesize() isn't the only thing that breaks when enforcing strict POSIX compliance, but it's a low-hanging fruit and it's the only thing that breaks build on OS X when compiling with -D_POSIX_C_SOURCE=200809L -D_DARWIN_C_SOURCE. > AFAIK, _SC_PAGESIZE is not available on HP-UX. Though I have no > access to one now to test. I would be surprised if that's the case, considering that HP-UX is one of the few UNIX03-certified operating systems. 
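For anyone who wants to poke at this outside of nginx, here is a minimal standalone sketch of the page-size lookup being discussed; the function name and the zero-on-failure convention are illustrative, not taken from the patch or from nginx itself:

```c
/* Minimal sketch of a POSIX page-size lookup, as discussed above.
 * Illustrative only -- names and error handling are not nginx's. */
#include <unistd.h>

unsigned long
get_page_size(void)
{
    long  value;

    value = sysconf(_SC_PAGESIZE);
    if (value == -1) {
        return 0;    /* sysconf() failed; caller decides what to do */
    }

    /* long-to-unsigned-long conversion is well defined in C, so the
     * explicit cast is a style choice, as noted in the thread */
    return (unsigned long) value;
}
```

On typical x86-64 systems this yields 4096; the disagreement in the thread is only about whether sysconf(_SC_PAGESIZE) or getpagesize() is the more portable way to ask, and whether the cast earns its keep.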
> Signed-to-unsigned conversion is perfectly defined in C for all > possible values. The cast below is unknown-to-signed, which can > overflow with undefined result, and thus in some cases causes > warnings. General approach is to only add casts where they are > needed, that is, where otherwise the behaviour will be wrong > and/or a warning will be generated. I never said it's undefined. However, it's an error, when you try to be strict enough: error: implicit conversion changes signedness: 'long' to 'ngx_uint_t' (aka 'unsigned long') [-Werror,-Wsign-conversion] ngx_pagesize = value; ~ ^~~~~ 1 error generated. (But that's a moot point, since nginx breaks all over the place with -Wconversion). Feel free to drop the cast from the change, if you ever decide it's worth committing. Best regards, Piotr Sikora From wilson.judson at gmail.com Sat Dec 5 07:17:54 2015 From: wilson.judson at gmail.com (Judson Wilson) Date: Sat, 05 Dec 2015 07:17:54 +0000 Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown Message-ID: # HG changeset patch # User Judson Wilson # Date 1449296759 0 # Sat Dec 05 06:25:59 2015 +0000 # Node ID f41799d322f02c8998a800953d81e7274a9d3376 # Parent cb31017e961b4a54e83c4fc1be46c18842696207 SSL: shutdown cleanly when other endpoint starts shutdown Before this change, if the other endpoint sends an SSL close_notify, nginx will kill the SSL connection without sending a close_notify in response. This behavior does not follow RFC 5246 section 7.2.1: Unless some other fatal alert has been transmitted, each party is required to send a close_notify alert before closing the write side of the connection. This change fixes this behavior in this specific situation, causing nginx to reply with a close_notify before shutting down the connection. 
diff -r cb31017e961b -r f41799d322f0 src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c Wed Dec 02 19:17:19 2015 -0800 +++ b/src/event/ngx_event_openssl.c Sat Dec 05 06:25:59 2015 +0000 @@ -1472,7 +1472,6 @@ } c->ssl->no_wait_shutdown = 1; - c->ssl->no_send_shutdown = 1; if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, @@ -1480,6 +1479,8 @@ return NGX_DONE; } + c->ssl->no_send_shutdown = 1; + ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed"); return NGX_ERROR; From emptydocks at gmail.com Sun Dec 6 05:57:03 2015 From: emptydocks at gmail.com (Koby Nachmany) Date: Sun, 6 Dec 2015 07:57:03 +0200 Subject: [PATCH] Core: Configurable 'npoints' for ngx_http_upstream_hash Message-ID: > Hello! > > I have a use case for an even, consistent balancing of a caching layer > upstream cluster. I.e using the "Ketama algorithm" > Current consistent hashing implementation in ngx_http_upstream_hash is > hard-coded to '160' vbuckets and real world results show a 20% variance in > balancing, which is not acceptable in our case. > > Following is a patch (Thanks to agentz) that will allow a configurable > vbuckets configuration param. Default will remain the same = 160. > Please consider pushing this "upstream" . 
No pun intended ;) > > Koby N > > --- a/src/http/modules/ngx_http_upstream_hash_module.c 2015-07-15 > 00:46:06.000000000 +0800 > +++ b/src/http/modules/ngx_http_upstream_hash_module.c 2015-10-11 > 22:26:47.952670175 +0800 > @@ -23,6 +23,7 @@ typedef struct { > > > typedef struct { > + ngx_uint_t npoints; > ngx_http_complex_value_t key; > ngx_http_upstream_chash_points_t *points; > } ngx_http_upstream_hash_srv_conf_t; > @@ -66,7 +67,7 @@ static char *ngx_http_upstream_hash(ngx_ > static ngx_command_t ngx_http_upstream_hash_commands[] = { > > { ngx_string("hash"), > - NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, > + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE123, > ngx_http_upstream_hash, > NGX_HTTP_SRV_CONF_OFFSET, > 0, > @@ -296,7 +297,10 @@ ngx_http_upstream_init_chash(ngx_conf_t > us->peer.init = ngx_http_upstream_init_chash_peer; > > peers = us->peer.data; > - npoints = peers->total_weight * 160; > + > + hcf = ngx_http_conf_upstream_srv_conf(us, > ngx_http_upstream_hash_module); > + > + npoints = peers->total_weight * hcf->npoints; > > size = sizeof(ngx_http_upstream_chash_points_t) > + sizeof(ngx_http_upstream_chash_point_t) * (npoints - 1); > @@ -355,7 +359,7 @@ ngx_http_upstream_init_chash(ngx_conf_t > ngx_crc32_update(&base_hash, port, port_len); > > prev_hash.value = 0; > - npoints = peer->weight * 160; > + npoints = peer->weight * hcf->npoints; > > for (j = 0; j < npoints; j++) { > hash = base_hash; > @@ -391,7 +395,6 @@ ngx_http_upstream_init_chash(ngx_conf_t > > points->number = i + 1; > > - hcf = ngx_http_conf_upstream_srv_conf(us, > ngx_http_upstream_hash_module); > hcf->points = points; > > return NGX_OK; > @@ -657,6 +660,19 @@ ngx_http_upstream_hash(ngx_conf_t *cf, n > } else if (ngx_strcmp(value[2].data, "consistent") == 0) { > uscf->peer.init_upstream = ngx_http_upstream_init_chash; > > + if (cf->args->nelts > 3) { > + hcf->npoints = ngx_atoi(value[3].data, value[3].len); > + > + if (hcf->npoints == (ngx_uint_t) NGX_ERROR) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, 
> + "invalid npoints parameter \"%V\"", > &value[3]); > + return NGX_CONF_ERROR; > + } > + > + } else { > + hcf->npoints = 160; > + } > + > } else { > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > "invalid parameter \"%V\"", &value[2]); > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aviram at adallom.com Sun Dec 6 12:00:02 2015 From: aviram at adallom.com (Aviram Cohen) Date: Sun, 6 Dec 2015 12:00:02 +0000 Subject: [BUG] Gunzip module may cause requests to fail In-Reply-To: <20151202182532.GX74233@mdounin.ru> References: <2571972.HTVaeomj3b@vbart-workstation> <20151130173717.GJ74233@mdounin.ru> <20151201132836.GN74233@mdounin.ru> <20151202182532.GX74233@mdounin.ru> Message-ID: Thank you, Maxim. I think we pretty much agree. The following is the suggested patch. Didn't include a patch for a chunked response, as you are right, that can evolve from an application error. Feel free to change the error message. diff -r cebc9a2c2144 src/http/modules/ngx_http_gunzip_filter_module.c --- a/src/http/modules/ngx_http_gunzip_filter_module.c Tue Apr 21 17:11:58 2015 +0300 +++ b/src/http/modules/ngx_http_gunzip_filter_module.c Fri Dec 04 01:54:14 2015 +0200 @@ -140,6 +140,12 @@ r->gzip_vary = 1; + if (r->headers_out.content_length_n == 0) { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "gunzip filter: zero length data to decompress"); + return ngx_http_next_header_filter(r); + } + if (!r->gzip_tested) { if (ngx_http_gzip_ok(r) == NGX_OK) { return ngx_http_next_header_filter(r); -----Original Message----- From: nginx-devel [mailto:nginx-devel-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: ????? 02 ????? 2015 20:26 To: nginx-devel at nginx.org Subject: Re: [BUG] Gunzip module may cause requests to fail Hello! 
On Wed, Dec 02, 2015 at 07:39:25AM +0000, Aviram Cohen wrote:

> Maxim, I want to set up a reverse proxy that gunzips responses of a
> backend server I cannot control (for example, a cache server for a
> development infrastructure).
> The response is marked with Content-Length set to zero, so Valentin's
> comment regarding why the response is empty is irrelevant. It is also
> irrelevant when the server sends a chunked response with an initial
> 0-sized chunk. I've seen servers that do both.
> The HTTP level is okay besides the extra gzip header, and so this is a
> "little bit pregnant" response ;)

Both cases described indicate that the response was corrupted earlier, and that an attempt to recover from the corruption was then made, likely unintended. And, as previously said, nobody knows how the response was expected to look if it hadn't been malformed.

Though in the case when Content-Length is known in advance, it is possible to improve the current behaviour by logging an error earlier and passing the response through unmodified. Patches are welcome.

> HTTP clients such as web browsers, wget and curl do handle this
> response, but my reverse proxy doesn't.
> What do you suggest that I do?

The problem is that most programs fail to check return codes properly. This is not a plus, but rather a set of bugs in the relevant programs. And these bugs make it very easy to confuse valid responses with corrupted ones, as well as to incorrectly gunzip valid responses.

Trying to make nginx bug-to-bug compatible with browsers isn't likely to be beneficial. Making nginx produce the same results as careless browsers do would make it impossible for proper clients to detect errors, and this is what we are trying to avoid.

As for what to do, an obvious solution would be, as already suggested, to disable gunzipping of responses from sources you can't control and are known to return malformed responses.
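[Editor's note: a minimal configuration sketch of this last suggestion. The `gunzip` and `proxy_pass` directives are real nginx directives; the location path and backend address are invented for illustration. Since `gunzip` defaults to off, it is enough to simply not enable it for locations proxying the uncontrolled backend:]

```
# Hypothetical location proxying a backend that sometimes returns
# empty responses marked as gzip-encoded; gunzipping stays disabled here.
location /dev-cache/ {
    proxy_pass http://127.0.0.1:8080;   # assumed backend address
    gunzip     off;                     # the default; shown explicitly
}
```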
> Regarding the error description itself, a patch that can be applied to
> better describe the error seems as easy for me as a fix, so to me it
> seems irrelevant as long as you refuse to fix the issue.
> Obviously I can provide a patch for the issue itself, which I still
> consider an Nginx bug :)

If you think you know how to improve things - feel free to provide patches. Though, as we already tried to explain several times, it's not a bug. At most, it's suboptimal error handling.

-- 
Maxim Dounin
http://nginx.org/

_______________________________________________
nginx-devel mailing list
nginx-devel at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

From junpei.yoshino at gmail.com Sun Dec 6 15:14:38 2015
From: junpei.yoshino at gmail.com (junpei yoshino)
Date: Mon, 7 Dec 2015 00:14:38 +0900
Subject: [PATCH]add proxy_protocol_port variable for rfc6302
In-Reply-To: <20151203190316.GG74233@mdounin.ru>
References: <20151203190316.GG74233@mdounin.ru>
Message-ID: 

Hello, thank you for the reply and the background.

> but we need someone to do the rest of the work.

Could I contribute it? At first, I will revise this patch along your review.

On Fri, Dec 4, 2015 at 4:03 AM, Maxim Dounin wrote:
> Hello!
> > On Tue, Dec 01, 2015 at 08:08:30AM +0900, junpei yoshino wrote: > >> # HG changeset patch >> # User Junpei Yoshino >> # Date 1446723407 -32400 >> # Thu Nov 05 20:36:47 2015 +0900 >> # Node ID 59cadccedf402ec325b078cb72a284465639e0fe >> # Parent 4ccb37b04454dec6afb9476d085c06aea00adaa0 >> Http: add proxy_protocol_port variable for rfc6302 >> >> Logging source port is recommended in rfc6302. >> use case >> logging >> sending information by http request headers > > Proper source port logging is something nginx don't currently do > well in various places related to addresses got from external > sources, including realip module, X-Forwarded-For parsing, > [ha]proxy protocol and so on. > > Improving port handling in various related areas is something that > should be done, though it needs to be handled more or less > consistently in all affected areas. Providing ports in some > places but not others can be misleading for users. And that's why > the patch isn't yet reviewed - sorry for the delay, but we need > someone to do the rest of the work. > > Below are some comments about the patch itself. > >> diff -r 4ccb37b04454 -r 59cadccedf40 src/core/ngx_connection.h >> --- a/src/core/ngx_connection.h Fri Oct 30 21:43:30 2015 +0300 >> +++ b/src/core/ngx_connection.h Thu Nov 05 20:36:47 2015 +0900 >> @@ -146,6 +146,7 @@ >> ngx_str_t addr_text; >> >> ngx_str_t proxy_protocol_addr; >> + ngx_str_t proxy_protocol_port; > > Using a string (which takes 2 pointers) for a 16 bit port value > seems to be excessive. It should be possible to just store a > number instead. > > [...] > >> @@ -71,8 +71,56 @@ >> ngx_memcpy(c->proxy_protocol_addr.data, addr, len); >> c->proxy_protocol_addr.len = len; >> >> + for ( ;; ) { >> + if (p == last) { >> + goto invalid; >> + } >> + >> + ch = *p++; >> + >> + if (ch == ' ') { >> + break; >> + } >> + >> + if (ch != ':' && ch != '.' 
>> +        && (ch < 'a' || ch > 'f')
>> +        && (ch < 'A' || ch > 'F')
>> +        && (ch < '0' || ch > '9'))
>> +    {
>> +        goto invalid;
>> +    }
>> + }
>
> This is probably excessive. Just using a space as a separator
> should be enough, as we aren't using the destination address.
>
> [...]
>
>>     ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0,
>>                    "PROXY protocol address: \"%V\"", &c->proxy_protocol_addr);
>> +   ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0,
>> +                  "PROXY protocol port: \"%V\"", &c->proxy_protocol_port);
>
> Logging the address and the port at once should be enough.
>
> [...]
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel

-- 
junpei.yoshino at gmail.com

From tigran.bayburtsyan at gmail.com Sun Dec 6 22:56:21 2015
From: tigran.bayburtsyan at gmail.com (Tigran Bayburtsyan)
Date: Mon, 7 Dec 2015 02:56:21 +0400
Subject: Handling large responses
Message-ID: <001901d13079$54e00a80$fea01f80$@gmail.com>

Hi,

I'm writing a module for my company to handle custom Unix-socket-protocol file streaming through Nginx. Right now we are just splitting big files into small ones, saving them in a directory and sending them through Nginx, but when traffic goes up, the hard drive read/write load gets extremely high, so we decided to write an Nginx module for streaming directly. But I can't find an example or documentation on how to stream large files without using the upstream functionality, because we have an unusual stream: we modify it every certain number of bytes. How can I set a callback on response write so that I can send the second package for a request, or something similar?

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Mon Dec 7 02:51:42 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 7 Dec 2015 05:51:42 +0300
Subject: [PATCH]add proxy_protocol_port variable for rfc6302
In-Reply-To: 
References: <20151203190316.GG74233@mdounin.ru>
Message-ID: <20151207025141.GT74233@mdounin.ru>

Hello!

On Mon, Dec 07, 2015 at 12:14:38AM +0900, junpei yoshino wrote:

> > but we need someone to do the rest of the work.
>
> Could I contribute it?
> At first, I will revise this patch along your review.

It may be a bit too much for someone with little nginx coding experience, but you may try to.

-- 
Maxim Dounin
http://nginx.org/

From tigran.bayburtsyan at gmail.com Mon Dec 7 08:43:25 2015
From: tigran.bayburtsyan at gmail.com (Tigran Bayburtsyan)
Date: Mon, 7 Dec 2015 12:43:25 +0400
Subject: Upstream always NULL
Message-ID: <003601d130cb$5787ffc0$0697ff40$@gmail.com>

The function ngx_http_upstream_create is always returning NGX_ERROR. Using a debugger, I saw that after these lines:

u = ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_t));

if (u == NULL) {
    return NGX_ERROR;
}

ngx_pcalloc is always returning NULL. Does anybody know about this issue?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Mon Dec 7 12:46:24 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 7 Dec 2015 15:46:24 +0300
Subject: Upstream always NULL
In-Reply-To: <003601d130cb$5787ffc0$0697ff40$@gmail.com>
References: <003601d130cb$5787ffc0$0697ff40$@gmail.com>
Message-ID: <20151207124624.GU74233@mdounin.ru>

Hello!

On Mon, Dec 07, 2015 at 12:43:25PM +0400, Tigran Bayburtsyan wrote:

> The function ngx_http_upstream_create is always returning NGX_ERROR.
> Using a debugger, I saw that after these lines:
>
> u = ngx_pcalloc(r->pool, sizeof(ngx_http_upstream_t));
>
> if (u == NULL) {
>     return NGX_ERROR;
> }
>
> ngx_pcalloc is always returning NULL. Does anybody know about this issue?

You are doing something wrong in your code.
-- 
Maxim Dounin
http://nginx.org/

From tigran.bayburtsyan at gmail.com Mon Dec 7 13:28:32 2015
From: tigran.bayburtsyan at gmail.com (Tigran Bayburtsyan)
Date: Mon, 7 Dec 2015 17:28:32 +0400
Subject: Upstream always NULL
In-Reply-To: <20151207124624.GU74233@mdounin.ru>
References: <003601d130cb$5787ffc0$0697ff40$@gmail.com> <20151207124624.GU74233@mdounin.ru>
Message-ID: <004801d130f3$2befb150$83cf13f0$@gmail.com>

Can you please send me a good example of an upstream module implementation? I've been googling for the last two days.

Thanks

From arut at nginx.com Mon Dec 7 14:14:51 2015
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 07 Dec 2015 14:14:51 +0000
Subject: [nginx] Upstream: fill r->headers_out.content_range from upstrea...
Message-ID: 

details: http://hg.nginx.org/nginx/rev/f44de0d12143
branches: 
changeset: 6316:f44de0d12143
user: Roman Arutyunyan
date: Mon Dec 07 16:30:47 2015 +0300
description:
Upstream: fill r->headers_out.content_range from upstream response.
diffstat: src/http/ngx_http_upstream.c | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diffs (15 lines): diff -r cb31017e961b -r f44de0d12143 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Dec 02 19:17:19 2015 -0800 +++ b/src/http/ngx_http_upstream.c Mon Dec 07 16:30:47 2015 +0300 @@ -250,6 +250,11 @@ ngx_http_upstream_header_t ngx_http_ups ngx_http_upstream_copy_allow_ranges, offsetof(ngx_http_headers_out_t, accept_ranges), 1 }, + { ngx_string("Content-Range"), + ngx_http_upstream_ignore_header_line, 0, + ngx_http_upstream_copy_header_line, + offsetof(ngx_http_headers_out_t, content_range), 0 }, + { ngx_string("Connection"), ngx_http_upstream_process_connection, 0, ngx_http_upstream_ignore_header_line, 0, 0 }, From arut at nginx.com Mon Dec 7 14:14:53 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 07 Dec 2015 14:14:53 +0000 Subject: [nginx] Slice filter. Message-ID: details: http://hg.nginx.org/nginx/rev/29f35e60840b branches: changeset: 6317:29f35e60840b user: Roman Arutyunyan date: Mon Dec 07 16:30:48 2015 +0300 description: Slice filter. Splits a request into subrequests, each providing a specific range of response. The variable "$slice_range" must be used to set subrequest range and proper cache key. The directive "slice" sets slice size. The following example splits requests into 1-megabyte cacheable subrequests. 
server { listen 8000; location / { slice 1m; proxy_cache cache; proxy_cache_key $uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_cache_valid 200 206 1h; proxy_pass http://127.0.0.1:9000; } } diffstat: auto/modules | 15 +- auto/options | 3 + auto/sources | 4 + src/http/modules/ngx_http_range_filter_module.c | 28 +- src/http/modules/ngx_http_slice_filter_module.c | 521 ++++++++++++++++++++++++ src/http/ngx_http_request.h | 2 + 6 files changed, 567 insertions(+), 6 deletions(-) diffs (truncated from 697 to 300 lines): diff -r f44de0d12143 -r 29f35e60840b auto/modules --- a/auto/modules Mon Dec 07 16:30:47 2015 +0300 +++ b/auto/modules Mon Dec 07 16:30:48 2015 +0300 @@ -73,6 +73,11 @@ if [ $HTTP_SSI = YES ]; then fi +if [ $HTTP_SLICE = YES ]; then + HTTP_POSTPONE=YES +fi + + if [ $HTTP_ADDITION = YES ]; then HTTP_POSTPONE=YES fi @@ -110,6 +115,7 @@ fi # ngx_http_copy_filter # ngx_http_range_body_filter # ngx_http_not_modified_filter +# ngx_http_slice_filter HTTP_FILTER_MODULES="$HTTP_WRITE_FILTER_MODULE \ $HTTP_HEADER_FILTER_MODULE \ @@ -179,6 +185,12 @@ if [ $HTTP_USERID = YES ]; then HTTP_SRCS="$HTTP_SRCS $HTTP_USERID_SRCS" fi +if [ $HTTP_SLICE = YES ]; then + HTTP_SRCS="$HTTP_SRCS $HTTP_SLICE_SRCS" +else + HTTP_SLICE_FILTER_MODULE="" +fi + if [ $HTTP_V2 = YES ]; then have=NGX_HTTP_V2 . 
auto/have @@ -461,7 +473,8 @@ if [ $HTTP = YES ]; then $HTTP_AUX_FILTER_MODULES \ $HTTP_COPY_FILTER_MODULE \ $HTTP_RANGE_BODY_FILTER_MODULE \ - $HTTP_NOT_MODIFIED_FILTER_MODULE" + $HTTP_NOT_MODIFIED_FILTER_MODULE \ + $HTTP_SLICE_FILTER_MODULE" NGX_ADDON_DEPS="$NGX_ADDON_DEPS \$(HTTP_DEPS)" fi diff -r f44de0d12143 -r 29f35e60840b auto/options --- a/auto/options Mon Dec 07 16:30:47 2015 +0300 +++ b/auto/options Mon Dec 07 16:30:48 2015 +0300 @@ -71,6 +71,7 @@ HTTP_ACCESS=YES HTTP_AUTH_BASIC=YES HTTP_AUTH_REQUEST=NO HTTP_USERID=YES +HTTP_SLICE=NO HTTP_AUTOINDEX=YES HTTP_RANDOM_INDEX=NO HTTP_STATUS=NO @@ -226,6 +227,7 @@ do --with-http_random_index_module) HTTP_RANDOM_INDEX=YES ;; --with-http_secure_link_module) HTTP_SECURE_LINK=YES ;; --with-http_degradation_module) HTTP_DEGRADATION=YES ;; + --with-http_slice_module) HTTP_SLICE=YES ;; --without-http_charset_module) HTTP_CHARSET=NO ;; --without-http_gzip_module) HTTP_GZIP=NO ;; @@ -394,6 +396,7 @@ cat << END --with-http_random_index_module enable ngx_http_random_index_module --with-http_secure_link_module enable ngx_http_secure_link_module --with-http_degradation_module enable ngx_http_degradation_module + --with-http_slice_module enable ngx_http_slice_module --with-http_stub_status_module enable ngx_http_stub_status_module --without-http_charset_module disable ngx_http_charset_module diff -r f44de0d12143 -r 29f35e60840b auto/sources --- a/auto/sources Mon Dec 07 16:30:47 2015 +0300 +++ b/auto/sources Mon Dec 07 16:30:48 2015 +0300 @@ -360,6 +360,10 @@ HTTP_USERID_FILTER_MODULE=ngx_http_useri HTTP_USERID_SRCS=src/http/modules/ngx_http_userid_filter_module.c +HTTP_SLICE_FILTER_MODULE=ngx_http_slice_filter_module +HTTP_SLICE_SRCS=src/http/modules/ngx_http_slice_filter_module.c + + HTTP_REALIP_MODULE=ngx_http_realip_module HTTP_REALIP_SRCS=src/http/modules/ngx_http_realip_module.c diff -r f44de0d12143 -r 29f35e60840b src/http/modules/ngx_http_range_filter_module.c --- a/src/http/modules/ngx_http_range_filter_module.c Mon 
Dec 07 16:30:47 2015 +0300 +++ b/src/http/modules/ngx_http_range_filter_module.c Mon Dec 07 16:30:48 2015 +0300 @@ -154,7 +154,7 @@ ngx_http_range_header_filter(ngx_http_re if (r->http_version < NGX_HTTP_VERSION_10 || r->headers_out.status != NGX_HTTP_OK - || r != r->main + || (r != r->main && !r->subrequest_ranges) || r->headers_out.content_length_n == -1 || !r->allow_ranges) { @@ -222,6 +222,8 @@ parse: return NGX_ERROR; } + ctx->offset = r->headers_out.content_offset; + if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t)) != NGX_OK) { @@ -273,10 +275,21 @@ static ngx_int_t ngx_http_range_parse(ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx, ngx_uint_t ranges) { - u_char *p; - off_t start, end, size, content_length, cutoff, cutlim; - ngx_uint_t suffix; - ngx_http_range_t *range; + u_char *p; + off_t start, end, size, content_length, cutoff, + cutlim; + ngx_uint_t suffix; + ngx_http_range_t *range; + ngx_http_range_filter_ctx_t *mctx; + + if (r != r->main) { + mctx = ngx_http_get_module_ctx(r->main, + ngx_http_range_body_filter_module); + if (mctx) { + ctx->ranges = mctx->ranges; + return NGX_OK; + } + } p = r->headers_in.range->value.data + 6; size = 0; @@ -395,6 +408,10 @@ ngx_http_range_singlepart_header(ngx_htt ngx_table_elt_t *content_range; ngx_http_range_t *range; + if (r != r->main) { + return ngx_http_next_header_filter(r); + } + content_range = ngx_list_push(&r->headers_out.headers); if (content_range == NULL) { return NGX_ERROR; @@ -422,6 +439,7 @@ ngx_http_range_singlepart_header(ngx_htt - content_range->value.data; r->headers_out.content_length_n = range->end - range->start; + r->headers_out.content_offset = range->start; if (r->headers_out.content_length) { r->headers_out.content_length->hash = 0; diff -r f44de0d12143 -r 29f35e60840b src/http/modules/ngx_http_slice_filter_module.c --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/src/http/modules/ngx_http_slice_filter_module.c Mon Dec 07 16:30:48 2015 +0300 @@ -0,0 +1,521 
@@ + +/* + * Copyright (C) Roman Arutyunyan + * Copyright (C) Nginx, Inc. + */ + + +#include +#include +#include + + +typedef struct { + size_t size; +} ngx_http_slice_loc_conf_t; + + +typedef struct { + off_t start; + off_t end; + ngx_str_t range; + ngx_str_t etag; + ngx_uint_t last; /* unsigned last:1; */ +} ngx_http_slice_ctx_t; + + +typedef struct { + off_t start; + off_t end; + off_t complete_length; +} ngx_http_slice_content_range_t; + + +static ngx_int_t ngx_http_slice_header_filter(ngx_http_request_t *r); +static ngx_int_t ngx_http_slice_body_filter(ngx_http_request_t *r, + ngx_chain_t *in); +static ngx_int_t ngx_http_slice_parse_content_range(ngx_http_request_t *r, + ngx_http_slice_content_range_t *cr); +static ngx_int_t ngx_http_slice_range_variable(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); +static off_t ngx_http_slice_get_start(ngx_http_request_t *r); +static void *ngx_http_slice_create_loc_conf(ngx_conf_t *cf); +static char *ngx_http_slice_merge_loc_conf(ngx_conf_t *cf, void *parent, + void *child); +static ngx_int_t ngx_http_slice_add_variables(ngx_conf_t *cf); +static ngx_int_t ngx_http_slice_init(ngx_conf_t *cf); + + +static ngx_command_t ngx_http_slice_filter_commands[] = { + + { ngx_string("slice"), + NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1, + ngx_conf_set_size_slot, + NGX_HTTP_LOC_CONF_OFFSET, + offsetof(ngx_http_slice_loc_conf_t, size), + NULL }, + + ngx_null_command +}; + + +static ngx_http_module_t ngx_http_slice_filter_module_ctx = { + ngx_http_slice_add_variables, /* preconfiguration */ + ngx_http_slice_init, /* postconfiguration */ + + NULL, /* create main configuration */ + NULL, /* init main configuration */ + + NULL, /* create server configuration */ + NULL, /* merge server configuration */ + + ngx_http_slice_create_loc_conf, /* create location configuration */ + ngx_http_slice_merge_loc_conf /* merge location configuration */ +}; + + +ngx_module_t ngx_http_slice_filter_module = 
{ + NGX_MODULE_V1, + &ngx_http_slice_filter_module_ctx, /* module context */ + ngx_http_slice_filter_commands, /* module directives */ + NGX_HTTP_MODULE, /* module type */ + NULL, /* init master */ + NULL, /* init module */ + NULL, /* init process */ + NULL, /* init thread */ + NULL, /* exit thread */ + NULL, /* exit process */ + NULL, /* exit master */ + NGX_MODULE_V1_PADDING +}; + + +static ngx_str_t ngx_http_slice_range_name = ngx_string("slice_range"); + +static ngx_http_output_header_filter_pt ngx_http_next_header_filter; +static ngx_http_output_body_filter_pt ngx_http_next_body_filter; + + +static ngx_int_t +ngx_http_slice_header_filter(ngx_http_request_t *r) +{ + off_t end; + ngx_int_t rc; + ngx_table_elt_t *h; + ngx_http_slice_ctx_t *ctx; + ngx_http_slice_loc_conf_t *slcf; + ngx_http_slice_content_range_t cr; + + ctx = ngx_http_get_module_ctx(r, ngx_http_slice_filter_module); + if (ctx == NULL) { + return ngx_http_next_header_filter(r); + } + + if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) { + if (r == r->main) { + ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module); + return ngx_http_next_header_filter(r); + } + + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "unexpected status code %ui in slice response", + r->headers_out.status); + return NGX_ERROR; + } + + h = r->headers_out.etag; + + if (ctx->etag.len) { + if (h == NULL + || h->value.len != ctx->etag.len + || ngx_strncmp(h->value.data, ctx->etag.data, ctx->etag.len) + != 0) + { + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, + "etag mismatch in slice response"); + return NGX_ERROR; + } + } + + if (h) { + ctx->etag = h->value; + } + From mdounin at mdounin.ru Mon Dec 7 14:37:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 17:37:38 +0300 Subject: Upstream always NULL In-Reply-To: <004801d130f3$2befb150$83cf13f0$@gmail.com> References: <003601d130cb$5787ffc0$0697ff40$@gmail.com> <20151207124624.GU74233@mdounin.ru> <004801d130f3$2befb150$83cf13f0$@gmail.com> 
Message-ID: <20151207143738.GZ74233@mdounin.ru>

Hello!

On Mon, Dec 07, 2015 at 05:28:32PM +0400, Tigran Bayburtsyan wrote:

> Can you please send me a good example of an upstream module
> implementation? I've been googling for the last two days.

The upstream module, as well as five protocol implementations, are available in the nginx sources:

http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c
http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_proxy_module.c
http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_fastcgi_module.c
http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_scgi_module.c
http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_uwsgi_module.c
http://hg.nginx.org/nginx/file/tip/src/http/modules/ngx_http_memcached_module.c

The simplest one is memcached, and I would recommend starting by reading its code if you want to understand how nginx works with upstream servers.

-- 
Maxim Dounin
http://nginx.org/

From junpei.yoshino at gmail.com Mon Dec 7 15:56:07 2015
From: junpei.yoshino at gmail.com (junpei yoshino)
Date: Tue, 8 Dec 2015 00:56:07 +0900
Subject: [PATCH]add proxy_protocol_port variable for rfc6302
In-Reply-To: <20151207025141.GT74233@mdounin.ru>
References: <20151203190316.GG74233@mdounin.ru> <20151207025141.GT74233@mdounin.ru>
Message-ID: 

Hello.

I wrote an additional patch.
# HG changeset patch # User Junpei Yoshino # Date 1449499172 -32400 # Mon Dec 07 23:39:32 2015 +0900 # Node ID e2984af905ff8cf523b22860620a9f3ff22d555a # Parent 59cadccedf402ec325b078cb72a284465639e0fe Change definition of proxy_protocol_port diff -r 59cadccedf40 -r e2984af905ff src/core/ngx_connection.h --- a/src/core/ngx_connection.h Thu Nov 05 20:36:47 2015 +0900 +++ b/src/core/ngx_connection.h Mon Dec 07 23:39:32 2015 +0900 @@ -146,7 +146,7 @@ ngx_str_t addr_text; ngx_str_t proxy_protocol_addr; - ngx_str_t proxy_protocol_port; + ngx_int_t proxy_protocol_port; #if (NGX_SSL) ngx_ssl_connection_t *ssl; diff -r 59cadccedf40 -r e2984af905ff src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c Thu Nov 05 20:36:47 2015 +0900 +++ b/src/core/ngx_proxy_protocol.c Mon Dec 07 23:39:32 2015 +0900 @@ -81,14 +81,6 @@ if (ch == ' ') { break; } - - if (ch != ':' && ch != '.' - && (ch < 'a' || ch > 'f') - && (ch < 'A' || ch > 'F') - && (ch < '0' || ch > '9')) - { - goto invalid; - } } port = p; for ( ;; ) { @@ -108,19 +100,11 @@ } } len = p - port - 1; - c->proxy_protocol_port.data = ngx_pnalloc(c->pool, len); + c->proxy_protocol_port = ngx_atoi(port,len); - if (c->proxy_protocol_port.data == NULL) { - return NULL; - } - - ngx_memcpy(c->proxy_protocol_port.data, port, len); - c->proxy_protocol_port.len = len; - - ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, - "PROXY protocol address: \"%V\"", &c->proxy_protocol_addr); - ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, - "PROXY protocol port: \"%V\"", &c->proxy_protocol_port); + ngx_log_debug2(NGX_LOG_DEBUG_CORE, c->log, 0, + "PROXY protocol address: \"%V\", PROXY protocol port: \"%d\"", + &c->proxy_protocol_addr, c->proxy_protocol_port); skip: diff -r 59cadccedf40 -r e2984af905ff src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Thu Nov 05 20:36:47 2015 +0900 +++ b/src/http/ngx_http_variables.c Mon Dec 07 23:39:32 2015 +0900 @@ -1258,11 +1258,20 @@ 
ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { - v->len = r->connection->proxy_protocol_port.len; + ngx_int_t port = r->connection->proxy_protocol_port; + + v->len = 0; v->valid = 1; v->no_cacheable = 0; v->not_found = 0; - v->data = r->connection->proxy_protocol_port.data; + v->data = ngx_pnalloc(r->pool, sizeof("65535") - 1); + + if (v->data == NULL) { + return NGX_ERROR; + } + if (port > 0 && port < 65536) { + v->len = ngx_sprintf(v->data, "%ui", port) - v->data; + } return NGX_OK; } On Mon, Dec 7, 2015 at 11:51 AM, Maxim Dounin wrote: > Hello! > > On Mon, Dec 07, 2015 at 12:14:38AM +0900, junpei yoshino wrote: > >> > but we need someone to do the rest of the work. >> >> Could I contribute it? >> At first, I will revise this patch along your review. > > It may be a bit too many for someone with small nginx coding > experience, but you may try to. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -- junpei.yoshino at gmail.com From mdounin at mdounin.ru Mon Dec 7 17:13:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 07 Dec 2015 17:13:45 +0000 Subject: [nginx] Added slice module to win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/3250a5783787 branches: changeset: 6318:3250a5783787 user: Maxim Dounin date: Mon Dec 07 20:08:13 2015 +0300 description: Added slice module to win32 builds. 
diffstat: misc/GNUmakefile | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -79,6 +79,7 @@ win32: --with-http_auth_request_module \ --with-http_random_index_module \ --with-http_secure_link_module \ + --with-http_slice_module \ --with-mail \ --with-stream \ --with-openssl=$(OBJS)/lib/$(OPENSSL) \ From mdounin at mdounin.ru Mon Dec 7 17:13:48 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 07 Dec 2015 17:13:48 +0000 Subject: [nginx] Updated OpenSSL and PCRE used for win32 builds. Message-ID: details: http://hg.nginx.org/nginx/rev/fe0ace132a25 branches: changeset: 6319:fe0ace132a25 user: Maxim Dounin date: Mon Dec 07 20:09:34 2015 +0300 description: Updated OpenSSL and PCRE used for win32 builds. diffstat: misc/GNUmakefile | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (15 lines): diff --git a/misc/GNUmakefile b/misc/GNUmakefile --- a/misc/GNUmakefile +++ b/misc/GNUmakefile @@ -5,9 +5,9 @@ NGINX = nginx-$(VER) TEMP = tmp OBJS = objs.msvc8 -OPENSSL = openssl-1.0.2d +OPENSSL = openssl-1.0.2e ZLIB = zlib-1.2.8 -PCRE = pcre-8.37 +PCRE = pcre-8.38 release: export From mdounin at mdounin.ru Mon Dec 7 18:31:18 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Dec 2015 21:31:18 +0300 Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown In-Reply-To: References: Message-ID: <20151207183117.GB74233@mdounin.ru> Hello! On Sat, Dec 05, 2015 at 07:17:54AM +0000, Judson Wilson wrote: > # HG changeset patch > # User Judson Wilson > # Date 1449296759 0 > # Sat Dec 05 06:25:59 2015 +0000 > # Node ID f41799d322f02c8998a800953d81e7274a9d3376 > # Parent cb31017e961b4a54e83c4fc1be46c18842696207 > SSL: shutdown cleanly when other endpoint starts shutdown > > Before this change, if the other endpoint sends an SSL close_notify, nginx > will kill the SSL connection without sending a close_notify in response. 
> This behavior does not follow RFC 5246 section 7.2.1:
>
>    Unless some other fatal alert has been transmitted, each party is
>    required to send a close_notify alert before closing the write side
>    of the connection.
>
> This change fixes this behavior in this specific situation, causing
> nginx to reply with a close_notify before shutting down the connection.
>
> diff -r cb31017e961b -r f41799d322f0 src/event/ngx_event_openssl.c
> --- a/src/event/ngx_event_openssl.c  Wed Dec 02 19:17:19 2015 -0800
> +++ b/src/event/ngx_event_openssl.c  Sat Dec 05 06:25:59 2015 +0000
> @@ -1472,7 +1472,6 @@
>      }
>
>      c->ssl->no_wait_shutdown = 1;
> -    c->ssl->no_send_shutdown = 1;
>
>      if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) {
>          ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0,
> @@ -1480,6 +1479,8 @@
>          return NGX_DONE;
>      }
>
> +    c->ssl->no_send_shutdown = 1;
> +
>      ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed");
>
>      return NGX_ERROR;

There is a problem with this change: in most cases in practice a close from a client means that the socket is no longer open, and trying to send a close_notify alert won't do anything good but will result in an RST instead. Common practice seems to be to not send close_notify alerts in such a case (e.g., I see Chrome doing the same), and section 7.2.1 mentions this as well.

Note well that the section 7.2.1 you refer to is about avoiding truncation attacks. On the other hand, the code path you are editing doesn't ensure that a close_notify alert was received from the other side. That is, sending a close_notify alert will make truncation attacks easier, not stop them.

If you still think that changes in this area are needed - please provide some more details on what you are trying to fix.
-- 
Maxim Dounin
http://nginx.org/

From wilson.judson at gmail.com Mon Dec 7 22:38:14 2015
From: wilson.judson at gmail.com (Judson Wilson)
Date: Mon, 7 Dec 2015 14:38:14 -0800
Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown
In-Reply-To: <20151207183117.GB74233@mdounin.ru>
References: <20151207183117.GB74233@mdounin.ru>
Message-ID: 

> There is a problem with this change: in most cases in practice
> close from a client means that the socket is no longer open, and
> trying to send a close_notify alert won't do anything good but
> will result in RST instead. Common practice seems to be don't
> send close_notify alerts in such a case (e.g., I see Chrome doing
> the same), and section 7.2.1 mentions this as well.

The behavior I am attempting to produce matches what I am observing in the Apache web server (assuming default configuration), so it's not unheard of for HTTPS.

> If you still think that changes in this area are needed - please
> provide some more details on what you are trying to fix.

I am researching ways of auditing TLS communications in a read-only way. One method of achieving this is by having the TLS client give the keyblock to an auditor after the client is sure that the server will not accept any more data protected by the keys (assume the software for the client and server are controlled by the same party). From my investigation of nginx and Apache, if a client receives close_notify, it can be assured that no more data will be accepted by the server under the key (both web server software packages immediately destroy the SSL object after calling SSL_shutdown).

Unfortunately, nginx has this case where a client would attempt to close a connection, but never get a close_notify in response. It seems like simply obeying the TLS standard would fix this. (close_notify IS sent on a "Connection: close" header or a keepalive timeout, as I would expect.)

> Note well that section 7.2.1 you refer to is about avoiding
> truncation attacks.
On the other hand, the code path you are > editing doesn't ensure that a close_notify alert was > received from the other side. That is, sending a close_notify > alert will make truncation attacks easier, not stop them. Yes, truncation attacks were not on my mind - that's a different problem. I'd be happy to look into this further if you think this work has a chance of being incorporated. Thanks, Judson On Mon, Dec 7, 2015 at 10:31 AM, Maxim Dounin wrote: > Hello! > > On Sat, Dec 05, 2015 at 07:17:54AM +0000, Judson Wilson wrote: > > > # HG changeset patch > > # User Judson Wilson > > # Date 1449296759 0 > > # Sat Dec 05 06:25:59 2015 +0000 > > # Node ID f41799d322f02c8998a800953d81e7274a9d3376 > > # Parent cb31017e961b4a54e83c4fc1be46c18842696207 > > SSL: shutdown cleanly when other endpoint starts shutdown > > > > Before this change, if the other endpoint sends an SSL close_notify, > nginx > > will kill the SSL connection without sending a close_notify in response. > > This behavior does not follow RFC 5246 section 7.2.1: > > > > Unless some other fatal alert has been transmitted, each party is > > required to send a close_notify alert before closing the write side > > of the connection. > > > > This change fixes this behavior in this specific situation, causing > > nginx to reply with a close_notify before shutting down the connection.
> > > > diff -r cb31017e961b -r f41799d322f0 src/event/ngx_event_openssl.c > > --- a/src/event/ngx_event_openssl.c Wed Dec 02 19:17:19 2015 -0800 > > +++ b/src/event/ngx_event_openssl.c Sat Dec 05 06:25:59 2015 +0000 > > @@ -1472,7 +1472,6 @@ > > } > > > > c->ssl->no_wait_shutdown = 1; > > - c->ssl->no_send_shutdown = 1; > > > > if (sslerr == SSL_ERROR_ZERO_RETURN || ERR_peek_error() == 0) { > > ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, > > @@ -1480,6 +1479,8 @@ > > return NGX_DONE; > > } > > > > + c->ssl->no_send_shutdown = 1; > > + > > ngx_ssl_connection_error(c, sslerr, err, "SSL_read() failed"); > > > > return NGX_ERROR; > > There is a problem with this change: in most cases in practice > close from a client means that the socket is no longer open, and > trying to send a close_notify alert won't do anything good but > will result in RST instead. Common practice seems to be don't > send close_notify alerts in such a case (e.g., I see Chrome doing > the same), and section 7.2.1 mentions this as well. > > Note well that section 7.2.1 you refer to is about avoiding > truncation attacks. On the other hand, the code path you are > editing doesn't ensure that a close_notify notify alert was > received from the other side. That is, sending a close_notify > alert will make truncation attacks easier, not stop them. > > If you still think that changes in this area are needed - please > provide some more details on what you are trying to fix. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From junpei.yoshino at gmail.com Tue Dec 8 00:35:32 2015 From: junpei.yoshino at gmail.com (junpei yoshino) Date: Tue, 8 Dec 2015 09:35:32 +0900 Subject: [PATCH]add proxy_protocol_port variable for rfc6302 In-Reply-To: References: <20151203190316.GG74233@mdounin.ru> <20151207025141.GT74233@mdounin.ru> Message-ID: Hello, I made merged patch. # HG changeset patch # User Junpei Yoshino # Date 1449499172 -32400 # Mon Dec 07 23:39:32 2015 +0900 # Node ID f4cd90a03eca5c330f51ac4ba2673e64348c622e # Parent 29f35e60840b8eed2927dd3495ef2d8e524862f7 Http: add proxy_protocol_port variable for rfc6302 diff -r 29f35e60840b -r f4cd90a03eca src/core/ngx_connection.h --- a/src/core/ngx_connection.h Mon Dec 07 16:30:48 2015 +0300 +++ b/src/core/ngx_connection.h Mon Dec 07 23:39:32 2015 +0900 @@ -146,6 +146,7 @@ ngx_str_t addr_text; ngx_str_t proxy_protocol_addr; + ngx_int_t proxy_protocol_port; #if (NGX_SSL) ngx_ssl_connection_t *ssl; diff -r 29f35e60840b -r f4cd90a03eca src/core/ngx_proxy_protocol.c --- a/src/core/ngx_proxy_protocol.c Mon Dec 07 16:30:48 2015 +0300 +++ b/src/core/ngx_proxy_protocol.c Mon Dec 07 23:39:32 2015 +0900 @@ -13,7 +13,7 @@ ngx_proxy_protocol_read(ngx_connection_t *c, u_char *buf, u_char *last) { size_t len; - u_char ch, *p, *addr; + u_char ch, *p, *addr, *port; p = buf; len = last - buf; @@ -71,8 +71,40 @@ ngx_memcpy(c->proxy_protocol_addr.data, addr, len); c->proxy_protocol_addr.len = len; - ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, - "PROXY protocol address: \"%V\"", &c->proxy_protocol_addr); + for ( ;; ) { + if (p == last) { + goto invalid; + } + + ch = *p++; + + if (ch == ' ') { + break; + } + } + port = p; + for ( ;; ) { + if (p == last) { + goto invalid; + } + + ch = *p++; + + if (ch == ' ') { + break; + } + + if (ch < '0' || ch > '9') + { + goto invalid; + } + } + len = p - port - 1; + c->proxy_protocol_port = ngx_atoi(port,len); + + ngx_log_debug2(NGX_LOG_DEBUG_CORE, c->log, 0, + "PROXY protocol address: \"%V\", PROXY protocol port: 
\"%d\"", + &c->proxy_protocol_addr, c->proxy_protocol_port); skip: diff -r 29f35e60840b -r f4cd90a03eca src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Mon Dec 07 16:30:48 2015 +0300 +++ b/src/http/ngx_http_variables.c Mon Dec 07 23:39:32 2015 +0900 @@ -58,6 +58,8 @@ ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_proxy_protocol_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); +static ngx_int_t ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data); static ngx_int_t ngx_http_variable_server_port(ngx_http_request_t *r, @@ -192,6 +194,9 @@ { ngx_string("proxy_protocol_addr"), NULL, ngx_http_variable_proxy_protocol_addr, 0, 0, 0 }, + { ngx_string("proxy_protocol_port"), NULL, + ngx_http_variable_proxy_protocol_port, 0, 0, 0 }, + { ngx_string("server_addr"), NULL, ngx_http_variable_server_addr, 0, 0, 0 }, { ngx_string("server_port"), NULL, ngx_http_variable_server_port, 0, 0, 0 }, @@ -1250,6 +1255,29 @@ static ngx_int_t +ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, + ngx_http_variable_value_t *v, uintptr_t data) +{ + ngx_int_t port = r->connection->proxy_protocol_port; + + v->len = 0; + v->valid = 1; + v->no_cacheable = 0; + v->not_found = 0; + v->data = ngx_pnalloc(r->pool, sizeof("65535") - 1); + + if (v->data == NULL) { + return NGX_ERROR; + } + if (port > 0 && port < 65536) { + v->len = ngx_sprintf(v->data, "%ui", port) - v->data; + } + + return NGX_OK; +} + + +static ngx_int_t ngx_http_variable_server_addr(ngx_http_request_t *r, ngx_http_variable_value_t *v, uintptr_t data) { On Tue, Dec 8, 2015 at 12:56 AM, junpei yoshino wrote: > Hello. > > I wrote additional patch. 
> > # HG changeset patch > # User Junpei Yoshino > # Date 1449499172 -32400 > # Mon Dec 07 23:39:32 2015 +0900 > # Node ID e2984af905ff8cf523b22860620a9f3ff22d555a > # Parent 59cadccedf402ec325b078cb72a284465639e0fe > Change definition of proxy_protocol_port > > diff -r 59cadccedf40 -r e2984af905ff src/core/ngx_connection.h > --- a/src/core/ngx_connection.h Thu Nov 05 20:36:47 2015 +0900 > +++ b/src/core/ngx_connection.h Mon Dec 07 23:39:32 2015 +0900 > @@ -146,7 +146,7 @@ > ngx_str_t addr_text; > > ngx_str_t proxy_protocol_addr; > - ngx_str_t proxy_protocol_port; > + ngx_int_t proxy_protocol_port; > > #if (NGX_SSL) > ngx_ssl_connection_t *ssl; > diff -r 59cadccedf40 -r e2984af905ff src/core/ngx_proxy_protocol.c > --- a/src/core/ngx_proxy_protocol.c Thu Nov 05 20:36:47 2015 +0900 > +++ b/src/core/ngx_proxy_protocol.c Mon Dec 07 23:39:32 2015 +0900 > @@ -81,14 +81,6 @@ > if (ch == ' ') { > break; > } > - > - if (ch != ':' && ch != '.' > - && (ch < 'a' || ch > 'f') > - && (ch < 'A' || ch > 'F') > - && (ch < '0' || ch > '9')) > - { > - goto invalid; > - } > } > port = p; > for ( ;; ) { > @@ -108,19 +100,11 @@ > } > } > len = p - port - 1; > - c->proxy_protocol_port.data = ngx_pnalloc(c->pool, len); > + c->proxy_protocol_port = ngx_atoi(port,len); > > - if (c->proxy_protocol_port.data == NULL) { > - return NULL; > - } > - > - ngx_memcpy(c->proxy_protocol_port.data, port, len); > - c->proxy_protocol_port.len = len; > - > - ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, > - "PROXY protocol address: \"%V\"", &c->proxy_protocol_addr); > - ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0, > - "PROXY protocol port: \"%V\"", &c->proxy_protocol_port); > + ngx_log_debug2(NGX_LOG_DEBUG_CORE, c->log, 0, > + "PROXY protocol address: \"%V\", PROXY protocol > port: \"%d\"", > + &c->proxy_protocol_addr, c->proxy_protocol_port); > > skip: > > diff -r 59cadccedf40 -r e2984af905ff src/http/ngx_http_variables.c > --- a/src/http/ngx_http_variables.c Thu Nov 05 20:36:47 2015 +0900 > +++ 
b/src/http/ngx_http_variables.c Mon Dec 07 23:39:32 2015 +0900 > @@ -1258,11 +1258,20 @@ > ngx_http_variable_proxy_protocol_port(ngx_http_request_t *r, > ngx_http_variable_value_t *v, uintptr_t data) > { > - v->len = r->connection->proxy_protocol_port.len; > + ngx_int_t port = r->connection->proxy_protocol_port; > + > + v->len = 0; > v->valid = 1; > v->no_cacheable = 0; > v->not_found = 0; > - v->data = r->connection->proxy_protocol_port.data; > + v->data = ngx_pnalloc(r->pool, sizeof("65535") - 1); > + > + if (v->data == NULL) { > + return NGX_ERROR; > + } > + if (port > 0 && port < 65536) { > + v->len = ngx_sprintf(v->data, "%ui", port) - v->data; > + } > > return NGX_OK; > } > > > On Mon, Dec 7, 2015 at 11:51 AM, Maxim Dounin wrote: >> Hello! >> >> On Mon, Dec 07, 2015 at 12:14:38AM +0900, junpei yoshino wrote: >> >>> > but we need someone to do the rest of the work. >>> >>> Could I contribute it? >>> At first, I will revise this patch along your review. >> >> It may be a bit too many for someone with small nginx coding >> experience, but you may try to. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > > > -- > junpei.yoshino at gmail.com -- junpei.yoshino at gmail.com From mdounin at mdounin.ru Tue Dec 8 13:15:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Dec 2015 16:15:49 +0300 Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown In-Reply-To: References: <20151207183117.GB74233@mdounin.ru> Message-ID: <20151208131548.GC74233@mdounin.ru> Hello! On Mon, Dec 07, 2015 at 02:38:14PM -0800, Judson Wilson wrote: > > There is a problem with this change: in most cases in practice > > close from a client means that the socket is no longer open, and > > trying to send a close_notify alert won't do anything good but > > will result in RST instead. 
Common practice seems to be don't > > send close_notify alerts in such a case (e.g., I see Chrome doing > > the same), and section 7.2.1 mentions this as well. > > The behavior I am attempting to produce matches what I am > observing in the Apache web server (assuming default > configuration), so it's not unheard of for HTTPS. For sure it's not something unheard. The question is mostly about balance of positive and negative results in a particular code path. In this particular case it looks like nginx behaviour is good enough. > > If you still think that changes in this area are needed - please > > provide some more details on what you are trying to fix. > > I am researching ways of auditing TLS communications > in a read-only way. One method of achieving this is by having the TLS > client give the keyblock to an auditor after the client is sure that the > server > will not accept any more data protected by the keys (assume the software > for the client and server are controlled by the same party). From my > investigation of nginx and apache, if a client receives close_notify, > they can be assured that no more data will be accepted by the server > under the key (both web server software packages immediately > destroy the SSL object after calling SSL_shutdown). > > Unfortunately, nginx has this case where a client would attempt to > close a connection, but never get close_notify in response. > It seems like simply obeying the TLS standard would fix this. > (close_notify IS sent on "connection: close" header or keepalive > timeout as I would expect.) As far as I understand, just looking for TCP FIN should be good enough for this task. > > Note well that section 7.2.1 you refer to is about avoiding > > truncation attacks. On the other hand, the code path you are > > editing doesn't ensure that a close_notify notify alert was > > received from the other side. That is, sending a close_notify > > alert will make truncation attacks easier, not stop them. 
> > Yes truncation attacks were not on my mind - that's a different > problem. > > I'd be happy to look into this further if you think this work has > a chance of being incorporated. I don't think that this change is needed, especially given the reasons for the change, and the potential security implications of the proposed patch. -- Maxim Dounin http://nginx.org/ From pluknet at nginx.com Tue Dec 8 14:02:17 2015 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 08 Dec 2015 14:02:17 +0000 Subject: [nginx] SSL: fixed possible segfault on renegotiation (ticket #8... Message-ID: details: http://hg.nginx.org/nginx/rev/a6902a941279 branches: changeset: 6320:a6902a941279 user: Sergey Kandaurov date: Tue Dec 08 16:59:43 2015 +0300 description: SSL: fixed possible segfault on renegotiation (ticket #845). Skip SSL_CTX_set_tlsext_servername_callback in case of renegotiation. Do nothing in SNI callback as in this case it will be supplied with request in c->data which isn't expected and doesn't work this way. This was broken by b40af2fd1c16 (1.9.6) with OpenSSL master branch and LibreSSL. diffstat: src/http/ngx_http_request.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r fe0ace132a25 -r a6902a941279 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Mon Dec 07 20:09:34 2015 +0300 +++ b/src/http/ngx_http_request.c Tue Dec 08 16:59:43 2015 +0300 @@ -837,6 +837,10 @@ ngx_http_ssl_servername(ngx_ssl_conn_t * c = ngx_ssl_get_connection(ssl_conn); + if (c->ssl->renegotiation) { + return SSL_TLSEXT_ERR_NOACK; + } + ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "SSL server name: \"%s\"", servername); From arut at nginx.com Tue Dec 8 14:41:33 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 08 Dec 2015 14:41:33 +0000 Subject: [nginx] Slice filter: never run subrequests when main request is...
Message-ID: details: http://hg.nginx.org/nginx/rev/bc9ea464e354 branches: changeset: 6321:bc9ea464e354 user: Roman Arutyunyan date: Tue Dec 08 17:39:56 2015 +0300 description: Slice filter: never run subrequests when main request is buffered. With main request buffered, it's possible, that a slice subrequest will send output before it. For example, while main request is waiting for aio read to complete, a slice subrequest can start an aio operation as well. The order in which aio callbacks are called is undetermined. diffstat: src/http/modules/ngx_http_slice_filter_module.c | 4 ++++ 1 files changed, 4 insertions(+), 0 deletions(-) diffs (14 lines): diff -r a6902a941279 -r bc9ea464e354 src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c Tue Dec 08 16:59:43 2015 +0300 +++ b/src/http/modules/ngx_http_slice_filter_module.c Tue Dec 08 17:39:56 2015 +0300 @@ -239,6 +239,10 @@ ngx_http_slice_body_filter(ngx_http_requ return rc; } + if (r->buffered) { + return rc; + } + if (ngx_http_subrequest(r, &r->uri, &r->args, &sr, NULL, 0) != NGX_OK) { return NGX_ERROR; } From arut at nginx.com Tue Dec 8 14:41:35 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 08 Dec 2015 14:41:35 +0000 Subject: [nginx] Slice filter: terminate first slice with last_in_chain f... Message-ID: details: http://hg.nginx.org/nginx/rev/4f0f4f02c98f branches: changeset: 6322:4f0f4f02c98f user: Roman Arutyunyan date: Tue Dec 08 17:39:56 2015 +0300 description: Slice filter: terminate first slice with last_in_chain flag. This flag makes sub filter flush buffered data and optimizes allocation in copy filter. 
diffstat: src/http/modules/ngx_http_slice_filter_module.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff -r bc9ea464e354 -r 4f0f4f02c98f src/http/modules/ngx_http_slice_filter_module.c --- a/src/http/modules/ngx_http_slice_filter_module.c Tue Dec 08 17:39:56 2015 +0300 +++ b/src/http/modules/ngx_http_slice_filter_module.c Tue Dec 08 17:39:56 2015 +0300 @@ -222,6 +222,7 @@ ngx_http_slice_body_filter(ngx_http_requ for (cl = in; cl; cl = cl->next) { if (cl->buf->last_buf) { cl->buf->last_buf = 0; + cl->buf->last_in_chain = 1; cl->buf->sync = 1; ctx->last = 1; } From mdounin at mdounin.ru Tue Dec 8 15:19:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 08 Dec 2015 15:19:47 +0000 Subject: [nginx] nginx-1.9.8-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/1bdc497c8160 branches: changeset: 6323:1bdc497c8160 user: Maxim Dounin date: Tue Dec 08 18:16:51 2015 +0300 description: nginx-1.9.8-RELEASE diffstat: docs/xml/nginx/changes.xml | 54 ++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 54 insertions(+), 0 deletions(-) diffs (64 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,60 @@ + + + + +поддержка pwritev(). + + +pwritev() support. + + + + + +директива include в блоке upstream. + + +the "include" directive inside the "upstream" block. + + + + + +модуль ngx_http_slice_module. + + +the ngx_http_slice_module. + + + + + +при использовании LibreSSL +в рабочем процессе мог произойти segmentation fault; +ошибка появилась в 1.9.6. + + +a segmentation fault might occur in a worker process +when using LibreSSL; +the bug had appeared in 1.9.6. + + + + + +nginx мог не собираться на OS X. + + +nginx could not be built on OS X in some cases.
+ + + + + + From mdounin at mdounin.ru Tue Dec 8 15:19:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 08 Dec 2015 15:19:49 +0000 Subject: [nginx] release-1.9.8 tag Message-ID: details: http://hg.nginx.org/nginx/rev/df31fbbd6f7f branches: changeset: 6324:df31fbbd6f7f user: Maxim Dounin date: Tue Dec 08 18:16:52 2015 +0300 description: release-1.9.8 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -390,3 +390,4 @@ 5cb7e2eed2031e32d2e5422caf9402758c38a6ad 942475e10cb47654205ede7ccbe7d568698e665b release-1.9.5 b78018cfaa2f0ec20494fccb16252daa87c48a31 release-1.9.6 54117529e40b988590ea2d38aae909b0b191663f release-1.9.7 +1bdc497c81607d854e3edf8b9a3be324c3d136b6 release-1.9.8 From wilson.judson at gmail.com Tue Dec 8 21:21:41 2015 From: wilson.judson at gmail.com (Judson Wilson) Date: Tue, 8 Dec 2015 13:21:41 -0800 Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown In-Reply-To: <20151208131548.GC74233@mdounin.ru> References: <20151207183117.GB74233@mdounin.ru> <20151208131548.GC74233@mdounin.ru> Message-ID: > As far as I understand, just looking for TCP FIN should be good > enough for this task. TCP FIN can not be authenticated. A man in the middle can make one. On Tue, Dec 8, 2015 at 5:15 AM, Maxim Dounin wrote: > Hello! > > On Mon, Dec 07, 2015 at 02:38:14PM -0800, Judson Wilson wrote: > > > > There is a problem with this change: in most cases in practice > > > close from a client means that the socket is no longer open, and > > > trying to send a close_notify alert won't do anything good but > > > will result in RST instead. Common practice seems to be don't > > > send close_notify alerts in such a case (e.g., I see Chrome doing > > > the same), and section 7.2.1 mentions this as well. 
> > > > The behavior I am attempting to produce matches what I am > > observing in the Apache web server (assuming default > > configuration), so it's not unheard of for HTTPS. > > For sure it's not something unheard. The question is mostly about > balance of positive and negative results in a particular code > path. In this particular case it looks like nginx behaviour is > good enough. > > > > If you still think that changes in this area are needed - please > > > provide some more details on what you are trying to fix. > > > > I am researching ways of auditing TLS communications > > in a read-only way. One method of achieving this is by having the TLS > > client give the keyblock to an auditor after the client is sure that the > > server > > will not accept any more data protected by the keys (assume the software > > for the client and server are controlled by the same party). From my > > investigation of nginx and apache, if a client receives close_notify, > > they can be assured that no more data will be accepted by the server > > under the key (both web server software packages immediately > > destroy the SSL object after calling SSL_shutdown). > > > > Unfortunately, nginx has this case where a client would attempt to > > close a connection, but never get close_notify in response. > > It seems like simply obeying the TLS standard would fix this. > > (close_notify IS sent on "connection: close" header or keepalive > > timeout as I would expect.) > > As far as I understand, just looking for TCP FIN should be good > enough for this task. > > > > Note well that section 7.2.1 you refer to is about avoiding > > > truncation attacks. On the other hand, the code path you are > > > editing doesn't ensure that a close_notify notify alert was > > > received from the other side. That is, sending a close_notify > > > alert will make truncation attacks easier, not stop them. > > > > Yes truncation attacks were not on my mind - that's a different > > problem. 
> > > > I'd be happy to look into this a further if you think this work has > > a chance of being incorporated. > > I don't think that this change is needed, especially given the > reasons for the change, and potentical security implications of > the proposed patch. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Wed Dec 9 06:55:59 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 9 Dec 2015 09:55:59 +0300 Subject: [PATCH] Core: Configurable 'npoints' for ngx_http_upstream_hash In-Reply-To: References: Message-ID: Hello Koby, > On 06 Dec 2015, at 08:57, Koby Nachmany wrote: > > > Hello! > > I have a use case for an even, consistent balancing of a caching layer upstream cluster. I.e using the "Ketama algorithm" > Current consistent hashing implementation in ngx_http_upstream_hash is hard-coded to '160' vbuckets and real world results show a 20% variance in balancing, which is not acceptable in our case. Please describe the test case to see 20% variance. How many servers do you have in the upstream? > Following is a patch (Thanks to agentz) that will allow a configurable vbuckets configuration param. Default will remain the same = 160. > Please consider pushing this "upstream" . 
No pun intended ;) > > Koby N > > --- a/src/http/modules/ngx_http_upstream_hash_module.c 2015-07-15 00:46:06.000000000 +0800 > +++ b/src/http/modules/ngx_http_upstream_hash_module.c 2015-10-11 22:26:47.952670175 +0800 > @@ -23,6 +23,7 @@ typedef struct { > > > typedef struct { > + ngx_uint_t npoints; > ngx_http_complex_value_t key; > ngx_http_upstream_chash_points_t *points; > } ngx_http_upstream_hash_srv_conf_t; > @@ -66,7 +67,7 @@ static char *ngx_http_upstream_hash(ngx_ > static ngx_command_t ngx_http_upstream_hash_commands[] = { > > { ngx_string("hash"), > - NGX_HTTP_UPS_CONF|NGX_CONF_TAKE12, > + NGX_HTTP_UPS_CONF|NGX_CONF_TAKE123, > ngx_http_upstream_hash, > NGX_HTTP_SRV_CONF_OFFSET, > 0, > @@ -296,7 +297,10 @@ ngx_http_upstream_init_chash(ngx_conf_t > us->peer.init = ngx_http_upstream_init_chash_peer; > > peers = us->peer.data; > - npoints = peers->total_weight * 160; > + > + hcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_hash_module); > + > + npoints = peers->total_weight * hcf->npoints; > > size = sizeof(ngx_http_upstream_chash_points_t) > + sizeof(ngx_http_upstream_chash_point_t) * (npoints - 1); > @@ -355,7 +359,7 @@ ngx_http_upstream_init_chash(ngx_conf_t > ngx_crc32_update(&base_hash, port, port_len); > > prev_hash.value = 0; > - npoints = peer->weight * 160; > + npoints = peer->weight * hcf->npoints; > > for (j = 0; j < npoints; j++) { > hash = base_hash; > @@ -391,7 +395,6 @@ ngx_http_upstream_init_chash(ngx_conf_t > > points->number = i + 1; > > - hcf = ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_hash_module); > hcf->points = points; > > return NGX_OK; > @@ -657,6 +660,19 @@ ngx_http_upstream_hash(ngx_conf_t *cf, n > } else if (ngx_strcmp(value[2].data, "consistent") == 0) { > uscf->peer.init_upstream = ngx_http_upstream_init_chash; > > + if (cf->args->nelts > 3) { > + hcf->npoints = ngx_atoi(value[3].data, value[3].len); > + > + if (hcf->npoints == (ngx_uint_t) NGX_ERROR) { > + ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > + 
"invalid npoints parameter \"%V\"", &value[3]); > + return NGX_CONF_ERROR; > + } > + > + } else { > + hcf->npoints = 160; > + } > + > } else { > ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, > "invalid parameter \"%V\"", &value[2]); The patch looks more or less ok. However, I?d probably make a named parameter points=XXX instead. -- Roman Arutyunyan From ok at cargoserver.ch Wed Dec 9 10:37:02 2015 From: ok at cargoserver.ch (Oli Kessler) Date: Wed, 9 Dec 2015 11:37:02 +0100 Subject: mod_zip module - Freelancer to fix an issue Message-ID: <4816BC80-94B3-4F8B-B7FE-E69B34B65ADB@cargoserver.ch> Hi all We are looking for someone to fix an outstanding issue in mod_zip, a 3rd party module creating ZIP archives on the fly. The issue seems to be related to buffering/timing and SSL, see https://github.com/evanmiller/mod_zip/issues/44 for details. We are willing to put a bounty on it or pay someone directly to fix this issue and create a MR. Anyone on this list interested? Looking forward to hearing from you! Cheers, -ok From ru at nginx.com Wed Dec 9 13:28:01 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 09 Dec 2015 13:28:01 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/00079605a9b8 branches: changeset: 6325:00079605a9b8 user: Ruslan Ermilov date: Wed Dec 09 14:41:16 2015 +0300 description: Version bump. 
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff -r df31fbbd6f7f -r 00079605a9b8 src/core/nginx.h --- a/src/core/nginx.h Tue Dec 08 18:16:52 2015 +0300 +++ b/src/core/nginx.h Wed Dec 09 14:41:16 2015 +0300 @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1009008 -#define NGINX_VERSION "1.9.8" +#define nginx_version 1009009 +#define NGINX_VERSION "1.9.9" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From ru at nginx.com Wed Dec 9 13:28:04 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Wed, 09 Dec 2015 13:28:04 +0000 Subject: [nginx] Fixed fastcgi_pass with UNIX socket and variables (ticke... Message-ID: details: http://hg.nginx.org/nginx/rev/705c356ce664 branches: changeset: 6326:705c356ce664 user: Ruslan Ermilov date: Wed Dec 09 16:26:59 2015 +0300 description: Fixed fastcgi_pass with UNIX socket and variables (ticket #855). This was broken in a93345ee8f52 (1.9.8). diffstat: src/http/ngx_http_upstream.c | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diffs (14 lines): diff -r 00079605a9b8 -r 705c356ce664 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Wed Dec 09 14:41:16 2015 +0300 +++ b/src/http/ngx_http_upstream.c Wed Dec 09 16:26:59 2015 +0300 @@ -642,7 +642,9 @@ ngx_http_upstream_init_request(ngx_http_ if (u->resolved->sockaddr) { - if (u->resolved->port == 0) { + if (u->resolved->port == 0 + && u->resolved->sockaddr->sa_family != AF_UNIX) + { ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "no port in upstream \"%V\"", host); ngx_http_upstream_finalize_request(r, u, From mdounin at mdounin.ru Wed Dec 9 13:34:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Dec 2015 16:34:20 +0300 Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown In-Reply-To: References: <20151207183117.GB74233@mdounin.ru> <20151208131548.GC74233@mdounin.ru> Message-ID: <20151209133420.GL74233@mdounin.ru> Hello! 
On Tue, Dec 08, 2015 at 01:21:41PM -0800, Judson Wilson wrote: > > As far as I understand, just looking for TCP FIN should be good > > enough for this task. > > TCP FIN can not be authenticated. A man in the middle can make one. The same is true for close_notify with your patch. Just keeping in mind that no close_notify means that the response may be truncated should work. Note well that if it's client who closes the connection it's likely that the response is truncated (or the HTTP layer has enough information to check that it wasn't). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Dec 9 14:58:09 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 09 Dec 2015 14:58:09 +0000 Subject: [nginx] nginx-1.9.9-RELEASE Message-ID: details: http://hg.nginx.org/nginx/rev/ef107f3ddc23 branches: changeset: 6327:ef107f3ddc23 user: Maxim Dounin date: Wed Dec 09 17:47:20 2015 +0300 description: nginx-1.9.9-RELEASE diffstat: docs/xml/nginx/changes.xml | 16 ++++++++++++++++ 1 files changed, 16 insertions(+), 0 deletions(-) diffs (26 lines): diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml +++ b/docs/xml/nginx/changes.xml @@ -5,6 +5,22 @@ + + + + +проксирование в unix domain сокеты не работало при использовании переменных; +ошибка появилась в 1.9.8. + + +proxying to unix domain sockets did not work when using variables; +the bug had appeared in 1.9.8.
+ + + + + + From mdounin at mdounin.ru Wed Dec 9 14:58:12 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 09 Dec 2015 14:58:12 +0000 Subject: [nginx] release-1.9.9 tag Message-ID: details: http://hg.nginx.org/nginx/rev/dfe68c41f34f branches: changeset: 6328:dfe68c41f34f user: Maxim Dounin date: Wed Dec 09 17:47:21 2015 +0300 description: release-1.9.9 tag diffstat: .hgtags | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (8 lines): diff --git a/.hgtags b/.hgtags --- a/.hgtags +++ b/.hgtags @@ -391,3 +391,4 @@ 942475e10cb47654205ede7ccbe7d568698e665b b78018cfaa2f0ec20494fccb16252daa87c48a31 release-1.9.6 54117529e40b988590ea2d38aae909b0b191663f release-1.9.7 1bdc497c81607d854e3edf8b9a3be324c3d136b6 release-1.9.8 +ef107f3ddc237a3007e2769ec04adde0dcf627fa release-1.9.9 From wilson.judson at gmail.com Wed Dec 9 21:41:55 2015 From: wilson.judson at gmail.com (Judson Wilson) Date: Wed, 9 Dec 2015 13:41:55 -0800 Subject: [PATCH] SSL: shutdown cleanly when other endpoint starts shutdown In-Reply-To: <20151209133420.GL74233@mdounin.ru> References: <20151207183117.GB74233@mdounin.ru> <20151208131548.GC74233@mdounin.ru> <20151209133420.GL74233@mdounin.ru> Message-ID: > > > As far as I understand, just looking for TCP FIN should be good > > > enough for this task. > > > > TCP FIN can not be authenticated. A man in the middle can make one. > > The same is true for close_notify with your patch. close_notify is encrypted using the current session state. A MITM cannot spoof a close_notify if it does not have the keys. > Just keeping in mind that no close_notify means that the > response may be truncated should work. Note well that > if it's client who closes the connection it's likely that the > response is truncated (or the HTTP layer has enough > information to check that it wasn't). Sorry if this is incorrect, but isn't it easy to tell if a response is truncated in HTTP/1.1 at the HTTP protocol layer? 
Again I want to reiterate that truncation is NOT my concern (although I agree it IS important). My setting involves releasing keys to a trusted monitor with read-only privileges, on the client's behalf (such as an exfiltration detector). The client needs to protect the integrity of its session, and must ensure that the key can not be used to masquerade as the client to the server. On Wed, Dec 9, 2015 at 5:34 AM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 08, 2015 at 01:21:41PM -0800, Judson Wilson wrote: > > > > As far as I understand, just looking for TCP FIN should be good > > > enough for this task. > > > > TCP FIN can not be authenticated. A man in the middle can make one. > > The same is true for close_notify with your patch. Just keeping > in mind that no close_notify means that the response may be > truncated should work. Note well that if it's client who closes > the connection it's likely that the response is truncated (or the > HTTP layer has enough information to check that it wasn't). > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wandenberg at gmail.com Thu Dec 10 16:55:32 2015 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Thu, 10 Dec 2015 14:55:32 -0200 Subject: Should ngx_atoi and ngx_atof functions change their signature?! Message-ID: Hi, today I realized a possible problem on the ngx_atoi and ngx_atof functions (may be on all ngx_ato* functions). There is no way to distinguish between an error and a valid "-1" string. 
For instance, ngx_str_t some_string = ngx_string("-1"); ngx_int_t x = ngx_atoi(some_string.data, some_string.len); if (x == NGX_ERROR) { ngx_log_debug(NGX_LOG_DEBUG, ngx_cycle->log, 0, "ERROR"); } else { ngx_log_debug(NGX_LOG_DEBUG, ngx_cycle->log, 0, "SUCCESS %d", x); } this code will produce "ERROR" as output instead of "SUCCESS -1" - the same result as if some_string had a value like "xyz". I know it is unlikely that you will have a string with "-1", but it is possible. What do you think about changing these functions' signatures, or adding a new one, to receive the output as a parameter, something like ngx_int_t ngx_atoi(u_char *line, size_t n, ngx_int_t *result) using the "result" parameter to store the parsed value and returning NGX_OK or NGX_ERROR. Kind regards, Wandenberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Dec 10 17:01:53 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 10 Dec 2015 20:01:53 +0300 Subject: Should ngx_atoi and ngx_atof functions change their signature?! In-Reply-To: References: Message-ID: <2542003.uaeL90X10l@vbart-workstation> On Thursday 10 December 2015 14:55:32 Wandenberg Peixoto wrote: > Hi, > > today I realized a possible problem on the ngx_atoi and ngx_atof functions > (may be on all ngx_ato* functions). > > There is no way to distinguish between an error and a valid "-1" string. > > For instance, > > ngx_str_t some_string = ngx_string("-1"); > ngx_int_t x = ngx_atoi(some_string.data, some_string.len); > > if (x == NGX_ERROR) { > ngx_log_debug(NGX_LOG_DEBUG, ngx_cycle->log, 0, "ERROR"); > } else { > ngx_log_debug(NGX_LOG_DEBUG, ngx_cycle->log, 0, "SUCCESS %d", x); > } > > this code will produce an "ERROR" as output instead of "SUCCESS -1". > The same result as if the some_string had a value like "xyz". > > I know that this is unlikely to happen that you have a string with "-1", > but it is possible.
> > What do you think about change these function signature, or add a new one, > to receive the output as a parameter something like > > ngx_int_t > ngx_atoi(u_char *line, size_t n, ngx_int_t *result) > > using the "result" parameter to store the parsed value and return NGX_OK or > NGX_ERROR. > What makes you think that these functions are able to parse negative numbers? wbr, Valentin V. Bartenev From wandenberg at gmail.com Thu Dec 10 17:15:04 2015 From: wandenberg at gmail.com (Wandenberg Peixoto) Date: Thu, 10 Dec 2015 15:15:04 -0200 Subject: Should ngx_atoi and ngx_atof functions change their signature?! In-Reply-To: <2542003.uaeL90X10l@vbart-workstation> References: <2542003.uaeL90X10l@vbart-workstation> Message-ID: Oops. Sorry, I didn't realize that it only deals with positive numbers. The return type of ngx_int_t tricked me, and I didn't check the "-2" use case. Sorry again. On Thu, Dec 10, 2015 at 3:01 PM, Valentin V. Bartenev wrote: > On Thursday 10 December 2015 14:55:32 Wandenberg Peixoto wrote: > > Hi, > > > > today I realized a possible problem on the ngx_atoi and ngx_atof > functions > > (may be on all ngx_ato* functions). > > > > There is no way to distinguish between an error and a valid "-1" string. > > > > For instance, > > > > ngx_str_t some_string = ngx_string("-1"); > > ngx_int_t x = ngx_atoi(some_string.data, some_string.len); > > > > if (x == NGX_ERROR) { > > ngx_log_debug(NGX_LOG_DEBUG, ngx_cycle->log, 0, "ERROR"); > > } else { > > ngx_log_debug(NGX_LOG_DEBUG, ngx_cycle->log, 0, "SUCCESS %d", x); > > } > > > > this code will produce an "ERROR" as output instead of "SUCCESS -1". > > The same result as if the some_string had a value like "xyz". > > > > I know that this is unlikely to happen that you have a string with "-1", > > but it is possible.
> > > > What do you think about change these function signature, or add a new > one, > > to receive the output as a parameter something like > > > > ngx_int_t > > ngx_atoi(u_char *line, size_t n, ngx_int_t *result) > > > > using the "result" parameter to store the parsed value and return NGX_OK > or > > NGX_ERROR. > > > > What makes you think that these functions are able to parse negative > numbers? > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pskhanapur at gmail.com Fri Dec 11 11:52:23 2015 From: pskhanapur at gmail.com (Prasanna Khanapur) Date: Fri, 11 Dec 2015 11:52:23 +0000 Subject: Nginx Proxy redirect to clients when upstreams are busy Message-ID: Hi, I have built a custom load balancer "my_loadbalancer" which load balances request from end users. -Ignore any syntax errors, if any- upstream myservers { my_loadbalancer; server abc123; server pqr123; } location XYZ { proxy_pass http://myservers ; } I have basically followed Emiller's Guide to get all this working. "my_loadbalancer" loadbalances upstreams based on some conditions. Right now, loadbalancer's "get_peer" function returns (1) NGX_OK when it successfully finds upstream (2) NGX_BUSY when it fails to find upstream. In above case (1) works fine and client gets response and case (2) works fine, loadbalancer sends the client a 502 Bad Gateway. Now I want to do something more where I need help. In case (2), instead of sending a 502, I would like loadbalancer to send a redirect (either 301 or 302, not decided yet) so that client gets a redirect and connects to completely different Loadbalancer. I'm looking at ngx_http_upstream_connect() in ngx_http_upstream.c where return from loadbalancer is handled. I dont see a simple way to make it generate redirect response. 
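The flow being described can be reduced to a toy model (plain C; nothing here is the nginx upstream API, and in nginx terms the 502-to-redirect mapping is what an error_page fallback location would provide):

```c
/* Toy model of the load balancer's decision, not nginx code.
 * get_peer() stands in for the module's peer-selection callback;
 * status_for() shows the status the client ends up seeing: 502 when
 * a busy/no-peer result is returned and nothing intercepts it, or a
 * 302 redirect when the 502 is routed through a fallback location. */
enum peer_rc { PEER_OK, PEER_BUSY };

static enum peer_rc
get_peer(int healthy_backends)
{
    return healthy_backends > 0 ? PEER_OK : PEER_BUSY;
}

static int
status_for(enum peer_rc rc, int have_fallback_redirect)
{
    if (rc == PEER_OK) {
        return 200;   /* request proxied normally */
    }

    return have_fallback_redirect ? 302 : 502;
}
```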
I want to do this programmatically because the different loadbalancer instance that I want the client to connect to (indicated through the redirect) is decided on the fly. Does anyone have any input/suggestions? Thanks! Regards Prasanna -- Best Regards Prasanna Khanapur Oslo, Norway Mobile: +4795417774 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Dec 11 13:36:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Dec 2015 16:36:45 +0300 Subject: Nginx Proxy redirect to clients when upstreams are busy In-Reply-To: References: Message-ID: <20151211133645.GI74233@mdounin.ru> Hello! On Fri, Dec 11, 2015 at 11:52:23AM +0000, Prasanna Khanapur wrote: > Hi, > > I have built a custom load balancer "my_loadbalancer" which load balances > request from end users. > > -Ignore any syntax errors, if any- > > upstream myservers { > my_loadbalancer; > server abc123; > server pqr123; > } > > location XYZ { > proxy_pass http://myservers ; > } > > I have basically followed Emiller's Guide to get all this working. > "my_loadbalancer" loadbalances upstreams based on some conditions. Right > now, loadbalancer's "get_peer" function returns > (1) NGX_OK when it successfully finds upstream > (2) NGX_BUSY when it fails to find upstream. > > > In above case (1) works fine and client gets response and case (2) works > fine, loadbalancer sends the client a 502 Bad Gateway. > > Now I want to do something more where I need help. > In case (2), instead of sending a 502, I would like loadbalancer to send a > redirect (either 301 or 302, not decided yet) so that client gets a > redirect and connects to completely different Loadbalancer. > > I'm looking at ngx_http_upstream_connect() in ngx_http_upstream.c where > return from loadbalancer is handled. I dont see a simple way to make it > generate redirect response.
> > > I want to do this programtically because the different loadbalancer > instance which I want the client should connect to(indicated through > redirect) is decided on the fly. > > > Anyone has any inputs/suggestions ? Thanks! This is something you can easily do with configuration, using the error_page directive (http://nginx.org/r/error_page), like this: error_page 502 = @fallback; location @fallback { return 302 $some_other_uri; } If you want to provide URI to redirect to from your module, you can export it using a variable ($some_other_uri in the example). -- Maxim Dounin http://nginx.org/ From pskhanapur at gmail.com Fri Dec 11 21:34:41 2015 From: pskhanapur at gmail.com (Prasanna Khanapur) Date: Fri, 11 Dec 2015 21:34:41 +0000 Subject: Nginx Proxy redirect to clients when upstreams are busy Message-ID: Hi maxim, Thanks a lot. Could give some inputs on how to export variable from inside my module? I'm kind of new to nginx variable stuff. Do I need it to use other modules like lua,resty? I would prefer to avoid them if possible. Regards -- Best Regards Prasanna Khanapur Oslo, Norway Mobile: +4795417774 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Sat Dec 12 07:39:42 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Sat, 12 Dec 2015 07:39:42 +0000 Subject: [nginx] Fixed a typo. Message-ID: details: http://hg.nginx.org/nginx/rev/def9c9c9ae05 branches: changeset: 6329:def9c9c9ae05 user: Ruslan Ermilov date: Sat Dec 12 10:32:58 2015 +0300 description: Fixed a typo. diffstat: docs/xml/nginx/changes.xml | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diffs (12 lines): diff -r dfe68c41f34f -r def9c9c9ae05 docs/xml/nginx/changes.xml --- a/docs/xml/nginx/changes.xml Wed Dec 09 17:47:21 2015 +0300 +++ b/docs/xml/nginx/changes.xml Sat Dec 12 10:32:58 2015 +0300 @@ -465,7 +465,7 @@ connection limiting in the stream module -??????????? ???????? ? ?????? stream. +??????????? ???????? ? ?????? stream. 
data rate limiting in the stream module. From mdounin at mdounin.ru Mon Dec 14 15:06:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Dec 2015 18:06:45 +0300 Subject: Nginx Proxy redirect to clients when upstreams are busy In-Reply-To: References: Message-ID: <20151214150645.GO74233@mdounin.ru> Hello! On Fri, Dec 11, 2015 at 09:34:41PM +0000, Prasanna Khanapur wrote: > Hi maxim, > Thanks a lot. Could give some inputs on how to export variable from inside > my module? I'm kind of new to nginx variable stuff. Do I need it to use > other modules like lua,resty? I would prefer to avoid them if possible. Variables can be exported by any module using the ngx_http_add_variable() function. Take a look, e.g., on the secure link module (src/http/modules/ngx_http_secure_link_module.c), it contains a mostly trivial example on how to export variables with predefined names. -- Maxim Dounin http://nginx.org/ From thorvaldur.thorvaldsson at gmail.com Mon Dec 14 17:17:21 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Mon, 14 Dec 2015 18:17:21 +0100 Subject: [PATCH] Upstream: Cache "immediately stale" responses if revalidate is on. Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450109082 -3600 # Mon Dec 14 17:04:42 2015 +0100 # Node ID e017db1282d4b8c4d541416e42fe7df8abc73302 # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb Upstream: Cache "immediately stale" responses if revalidate is on. Previously, the proxy cache would never store responses if max-age=0, even when "proxy_cache_revalidate" was "on" and the response included an ETag. This came as a surprise. Now, a header like "Cache-Control: max-age=0, must-revalidate" can be used to make nginx cache responses that always require revalidation, like, when authorization is required (and cheap). 
diff -r def9c9c9ae05 -r e017db1282d4 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_file_cache.c Mon Dec 14 17:04:42 2015 +0100 @@ -628,7 +628,7 @@ now = ngx_time(); - if (c->valid_sec < now) { + if (c->valid_sec <= now) { ngx_shmtx_lock(&cache->shpool->mutex); @@ -831,7 +831,7 @@ if (fcn->error) { - if (fcn->valid_sec < ngx_time()) { + if (fcn->valid_sec <= ngx_time()) { goto renew; } diff -r def9c9c9ae05 -r e017db1282d4 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_upstream.c Mon Dec 14 17:04:42 2015 +0100 @@ -2819,7 +2819,7 @@ } } - if (valid) { + if (valid || r->upstream->conf->cache_revalidate) { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4272,7 +4272,7 @@ return NGX_OK; } - if (n == 0) { + if (n == 0 && !r->upstream->conf->cache_revalidate) { u->cacheable = 0; return NGX_OK; } @@ -4312,7 +4312,9 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); - if (expires == NGX_ERROR || expires < ngx_time()) { + if (expires == NGX_ERROR + || (expires < ngx_time() && !r->upstream->conf->cache_revalidate)) + { u->cacheable = 0; return NGX_OK; } @@ -4356,7 +4358,9 @@ switch (n) { case 0: - u->cacheable = 0; + if (!r->upstream->conf->cache_revalidate) { + u->cacheable = 0; + } /* fall through */ case NGX_ERROR: -------------- next part -------------- An HTML attachment was scrubbed... URL: From joel.cunningham at me.com Mon Dec 14 23:23:26 2015 From: joel.cunningham at me.com (Joel Cunningham) Date: Mon, 14 Dec 2015 23:23:26 +0000 (GMT) Subject: autoindex module error handling Message-ID: <6ff68053-4931-4e86-84d7-db8e3e944031@me.com> An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Dec 15 01:53:57 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2015 04:53:57 +0300 Subject: autoindex module error handling In-Reply-To: <6ff68053-4931-4e86-84d7-db8e3e944031@me.com> References: <6ff68053-4931-4e86-84d7-db8e3e944031@me.com> Message-ID: <20151215015357.GV74233@mdounin.ru> Hello! On Mon, Dec 14, 2015 at 11:23:26PM +0000, Joel Cunningham wrote: Just a side note: sending html email with an empty text/plain alternative isn't the best way to ensure the message will be seen. > Hi, > > I'm seeing an issue with the autoindex module (version 1.9.5) where > when an error is encountered while generating the index, NGINX does not > send any response back to the client. > > I ran through the debugging and what I found is that the header is > sent, but doesn't actually make it to the socket buffer because in > ngx_http_write_filter returns early because size < > clcf->postpone_output. This seemed normal to me. > > Then during generation of the index, an error is encountered and > ngx_http_autoindex_error() is called. Since r->header_sent is 1, > NGX_ERROR is returned and the socket is closed without sending the > header or any additional error response > > Chrome and Firefox end up displaying error pages about "no response" or > "the connection was reset" when TCP connection was closed without a > response. > > Should NGINX be doing something better? Since the response is chunk > encoded, I understand the header is already sent, but is there a better > way to report errors while generating a chunked response? (although > autoindex doesn't seem to actually generate multiple chunks). As long as the header is sent, it's already too late to do anything with the response or otherwise indicate an error to the client. So we have two basic options to handle errors which happen after the header is sent: 1) generate the response, ignoring/handling errors somehow; 2) terminate the connection.
Depending on the particular error, approach (1) may or may not be possible. E.g., the SSI module handles cases when subrequests to other resources fail. But if memory allocation fails, likely it won't be possible to "handle" it somehow. The default behaviour is (2), as it's always possible. And that's what happens in this particular case. If you think that (1) is possible for the particular error you've seen, and it's worth the effort - feel free to provide patches. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Dec 15 02:21:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2015 05:21:29 +0300 Subject: [PATCH] Upstream: Cache "immediately stale" responses if revalidate is on. In-Reply-To: References: Message-ID: <20151215022129.GW74233@mdounin.ru> Hello! On Mon, Dec 14, 2015 at 06:17:21PM +0100, Thorvaldur Thorvaldsson wrote: > # HG changeset patch > # User Thorvaldur Thorvaldsson > # Date 1450109082 -3600 > # Mon Dec 14 17:04:42 2015 +0100 > # Node ID e017db1282d4b8c4d541416e42fe7df8abc73302 > # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb > Upstream: Cache "immediately stale" responses if revalidate is on. > > Previously, the proxy cache would never store responses if max-age=0, > even when "proxy_cache_revalidate" was "on" and the response included an > ETag. This came as a surprise. > > Now, a header like "Cache-Control: max-age=0, must-revalidate" can be > used to make nginx cache responses that always require revalidation, > like, when authorization is required (and cheap). [...]
> diff -r def9c9c9ae05 -r e017db1282d4 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_upstream.c Mon Dec 14 17:04:42 2015 +0100 > @@ -2819,7 +2819,7 @@ > } > } > > - if (valid) { > + if (valid || r->upstream->conf->cache_revalidate) { > r->cache->date = now; > r->cache->body_start = (u_short) (u->buffer.pos - > u->buffer.start); > At least this part looks wrong, as it will enable caching of all responses, even ones without Cache-Control/Expires/X-Accel-Expires and proxy_cache_valid configured. Note well that revalidation only make sense if there is a validator, i.e., Last-Modified or ETag. It's probably wrong to assume they are always present. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Dec 15 02:35:29 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2015 05:35:29 +0300 Subject: [BUG] Gunzip module may cause requests to fail In-Reply-To: References: <2571972.HTVaeomj3b@vbart-workstation> <20151130173717.GJ74233@mdounin.ru> <20151201132836.GN74233@mdounin.ru> <20151202182532.GX74233@mdounin.ru> Message-ID: <20151215023529.GX74233@mdounin.ru> Hello! On Sun, Dec 06, 2015 at 12:00:02PM +0000, Aviram Cohen wrote: > Thank you, Maxim. I think we pretty much agree. > The following is the suggested patch. Didn't include a patch for a chunked response, as you are right, that can evolve from an application error. > Feel free to change the error message. 
> > diff -r cebc9a2c2144 src/http/modules/ngx_http_gunzip_filter_module.c > --- a/src/http/modules/ngx_http_gunzip_filter_module.c Tue Apr 21 17:11:58 2015 +0300 > +++ b/src/http/modules/ngx_http_gunzip_filter_module.c Fri Dec 04 01:54:14 2015 +0200 > @@ -140,6 +140,12 @@ > > r->gzip_vary = 1; > > + if (r->headers_out.content_length_n == 0) { > + ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, > + "gunzip filter: zero length data to decompress"); > + return ngx_http_next_header_filter(r); > + } > + - There are no reasons for r->gzip_vary to be set in case of error. - As minimal gzip file size is 20 bytes, and checking only specific case of 0 bytes is suboptimal. [...] -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Dec 15 10:06:00 2015 From: vbart at nginx.com (=?utf-8?B?0JLQsNC70LXQvdGC0LjQvSDQkdCw0YDRgtC10L3QtdCy?=) Date: Tue, 15 Dec 2015 13:06 +0300 Subject: HTTP Response Splitting via $uri In-Reply-To: <209C75083180374A8FBD86F0FDB361442D892B30@EXCHANGE-MB4.msk.rian> References: <209C75083180374A8FBD86F0FDB361442D892B30@EXCHANGE-MB4.msk.rian> Message-ID: <2042659.d6pPWxUDTr@vbart-workstation> ? ????? ???????? ??? ???????????? On Tuesday 15 December 2015 07:58:50 r.golovachev at rian.ru wrote: > ????????????, ??? ?????? ???????????? ????? ??????????, (???????? ????), ?? ????? ???????????, ? ???????? ??? ???: > > ??? ????? ????? ??????? > > port_in_redirect off; > if ($request_uri !~ "\?"){ > rewrite ^(?!(load|loadfm)/)([^.]*[^/])$ $uri/ permanent; > } > [..] > ??? ????, ???? ? ??????? ???????? uri ?? request_uri - ?? ??? ????????? > > if ($request_uri !~ "\?"){ > rewrite ^(?!(load|loadfm)/)([^.]*[^/])$ $uri/ permanent; > } > [..] ?? ???????? ??? ?????????? ??????? ???????????? (? ????????? ?? ????????? port_in_redirect), ??????? ?? ????? ??????? ??? ? ??? ???? ????????. ??? ???? ???????? ? ?????????? Set-Cookie ?? ???? ????????? ????? ?? nginx, ? ??? ??????. -- ???????? ???????? 
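Most of the Russian text in the exchange above was lost to an encoding problem; what survives is the configuration. The distinction the thread turns on is documented variable behaviour: $uri holds the normalized, percent-decoded URI (without the query string) and can change during request processing, while $request_uri is the original request line as sent by the client, query string included and still encoded. In terms of the config above (request shown as a comment; values are what the variables would contain):

```nginx
# For a request line "GET /load%20test/?x=1 HTTP/1.1":
#
#   $uri         = /load test/        (normalized and decoded, no query string)
#   $request_uri = /load%20test/?x=1  (raw, exactly as sent)
#
# which is why the rewrite above tests $request_uri for a "?" while the
# replacement is built from the normalized $uri.
```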
From thorvaldur.thorvaldsson at gmail.com Tue Dec 15 14:40:53 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Tue, 15 Dec 2015 15:40:53 +0100 Subject: [PATCH] Upstream: Cache "immediately stale" responses if revalidate is on. In-Reply-To: References: Message-ID: Hello! Thanks for your reply, Maxim. My (old) subscription settings prevent me from replying directly to your message so I'll just continue here. You're correct: my original patch was too aggressive in that it would simply cache all stale (but otherwise cacheable) responses so long as "proxy_cache_revalidate" was "on". I was trying to cover the https://trac.nginx.org/nginx/ticket/778 ticket as well, even if it was not needed for my use case. I've prepared another patch that I'll submit in a moment. Best regards, Thorvaldur -------------- next part -------------- An HTML attachment was scrubbed... URL: From thorvaldur.thorvaldsson at gmail.com Tue Dec 15 14:45:03 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Tue, 15 Dec 2015 15:45:03 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450190015 -3600 # Tue Dec 15 15:33:35 2015 +0100 # Node ID 4a1914481e2bd3eecdc5d23e1386c01e4fb08414 # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb Upstream: Cache stale responses if they may be revalidated. Previously, the proxy cache would never store stale responses, e.g., when the "Cache-Control" header contained "max-age=0", even if the "proxy_cache_revalidate" directive was "on" and the response included both an "ETag" and a "Last-Modified" header. This came as a surprise. Now, a header like "Cache-Control: max-age=0, must-revalidate" can be used to make nginx cache responses that always require revalidation, e.g., when authorization is required (and cheap).
diff -r def9c9c9ae05 -r 4a1914481e2b src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Dec 15 15:33:35 2015 +0100 @@ -628,7 +628,7 @@ now = ngx_time(); - if (c->valid_sec < now) { + if (c->valid_sec <= now) { ngx_shmtx_lock(&cache->shpool->mutex); @@ -831,7 +831,7 @@ if (fcn->error) { - if (fcn->valid_sec < ngx_time()) { + if (fcn->valid_sec <= ngx_time()) { goto renew; } diff -r def9c9c9ae05 -r 4a1914481e2b src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 @@ -2815,11 +2815,15 @@ valid = ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n); if (valid) { - r->cache->valid_sec = now + valid; + valid += now; + r->cache->valid_sec = valid; } } - if (valid) { + if (valid > now + || (r->upstream->conf->cache_revalidate + && (u->headers_in.etag || u->headers_in.last_modified_time))) + { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4272,11 +4276,6 @@ return NGX_OK; } - if (n == 0) { - u->cacheable = 0; - return NGX_OK; - } - r->cache->valid_sec = ngx_time() + n; } #endif @@ -4312,7 +4311,7 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); - if (expires == NGX_ERROR || expires < ngx_time()) { + if (expires == NGX_ERROR) { u->cacheable = 0; return NGX_OK; } @@ -4355,10 +4354,6 @@ n = ngx_atoi(p, len); switch (n) { - case 0: - u->cacheable = 0; - /* fall through */ - case NGX_ERROR: return NGX_OK; -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joel.cunningham at me.com Tue Dec 15 15:11:19 2015 From: joel.cunningham at me.com (Joel Cunningham) Date: Tue, 15 Dec 2015 09:11:19 -0600 Subject: autoindex module error handling Message-ID: <5E74C5F8-8082-4174-BE66-B2AEA0FE566E@me.com> Maxim, See responses in-line Joel On Dec 14, 2015, at 07:54 PM, Maxim Dounin wrote: > Hello! > > On Mon, Dec 14, 2015 at 11:23:26PM +0000, Joel Cunningham wrote: > > Just a side note: sending html email with an empty text/plain > alternative isn't a best way to ensure the message will be > seen. I checked the archive and see what you mean, are HTML formatted emails not supported or preferred for this mailing list? I simply just sent an email from the iCloud mail client > > >> Hi, >> >> I'm seeing an issue with the autoindex module (version 1.9.5) where >> when an error is encountered while generating the index, NGINX does not >> send any response back to the client. >> >> I ran through the debugging and what I found is that the header is >> sent, but doesn't actually make it to the socket buffer because in >> ngx_http_write_filter returns early because size < >> clcf->postpone_output. This seemed normal to me. >> >> Then during generation of the index, an error is encountered and >> ngx_http_autoindex_error() is called. Since r->header_sent is 1, >> NGX_ERROR is returned and the socket is closed without sending the >> header or any additional error response >> >> Chrome and Firefox end up displaying error pages about "no response" or >> "the connection was reset" when TCP connection was closed without a >> response. >> >> Should NGINX be doing something better? Since the response is chunk >> encoded, I understand the header is already sent, but is there a better >> way to report errors while generating a chunked response? (although >> autoindex doesn't seem to actually generate multiple chunks). 
> As long as the header is sent, it's already too late to do > anything with the response or otherwise indicate an error to the > client. So we have two basic options to handle errors which > happen after the header is send: > > 1) generate the response, ignoring/handling errors somehow; > > 2) terminate the connection. > > Depending on a particular error approach (1) may or may not be > possible. E.g., SSI module handles cases when subrequests to > other resources fail. But if memory allocation fails, likely it > won't be possible to "handle" it somehow. The default behaviour > is (2), as it's always possible. And that's what happens in this > particular case. If you think that (1) is possible for the > particular error you've seen, and it worth the effort - feel free > to provide patches. > This makes sense, thanks for explaining the error handling. I'm still fairly new to the NGINX source and this is helpful to know! The only other thought I had was that we could send an empty chunked body when an error is encountered since the auto index module only sends a single chunk. I think the value of this is questionable though since the client learns of the error anyway through the connection closing without a response. > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Tue Dec 15 15:48:43 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Dec 2015 18:48:43 +0300 Subject: autoindex module error handling In-Reply-To: <5E74C5F8-8082-4174-BE66-B2AEA0FE566E@me.com> References: <5E74C5F8-8082-4174-BE66-B2AEA0FE566E@me.com> Message-ID: <20151215154843.GA74233@mdounin.ru> Hello!
On Tue, Dec 15, 2015 at 09:11:19AM -0600, Joel Cunningham wrote: > On Dec 14, 2015, at 07:54 PM, Maxim Dounin wrote: > > > On Mon, Dec 14, 2015 at 11:23:26PM +0000, Joel Cunningham wrote: > > > > Just a side note: sending html email with an empty text/plain > > alternative isn't a best way to ensure the message will be > > seen. > > I checked the archive and see what you mean, are HTML formatted > emails not supported or preferred for this mailing list? I > simply just sent an email from the iCloud mail client HTML emails aren't welcome in technical mailing lists in general, and in this one in particular. The message you've sent was HTML and it also contained an empty text/plain version of the message, thus preventing any client which is set to prefer text/plain from showing anything. No idea if the iCloud mail client can be tuned to behave better. [...] > > Depending on a particular error approach (1) may or may not be > > possible. E.g., SSI module handles cases when subrequests to > > other resources fail. But if memory allocation fails, likely it > > won't be possible to "handle" it somehow. The default behaviour > > is (2), as it's always possible. And that's what happens in this > > particular case. If you think that (1) is possible for the > > particular error you've seen, and it worth the effort - feel free > > to provide patches. > > > > This makes sense, thanks for explaining the error handling. I'm > still fairly new to the NGINX source and this is helpful to > know! The only other thought I had was that we could send an > empty chunked body when an error is encountered since the auto > index module only sends a single chunk. I think the value of > this is questionable though since the client learns of the error > anyways though the connection closing without a response. Sending an empty body is a bad idea, as the client won't know that there was an error.
-- Maxim Dounin http://nginx.org/ From thorvaldur.thorvaldsson at gmail.com Tue Dec 15 15:50:27 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Tue, 15 Dec 2015 16:50:27 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: Message-ID: Hmmm, last_modified_time should have been just last_modified there. I should resend a patch with this change. Regards, Thorvaldur On Tue, Dec 15, 2015 at 3:45 PM, Thorvaldur Thorvaldsson < thorvaldur.thorvaldsson at gmail.com> wrote: > # HG changeset patch > # User Thorvaldur Thorvaldsson > # Date 1450190015 -3600 > # Tue Dec 15 15:33:35 2015 +0100 > # Node ID 4a1914481e2bd3eecdc5d23e1386c01e4fb08414 > # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb > Upstream: Cache stale responses if they may be revalidated. > > Previously, the proxy cache would never store stale responses, e.g., > when the "Cache-Control" header contained "max-age=0", even if the > "proxy_cache_revalidate" directive was "on" and the response included > both an "ETag" and a "Last-Modified" header. This came as a surprise. > > Now, a header like "Cache-Control: max-age=0, must-revalidate" can be > used to make nginx cache responses that always require revalidation, > e.g., when authorization is required (and cheap).
> > diff -r def9c9c9ae05 -r 4a1914481e2b src/http/ngx_http_file_cache.c > --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_file_cache.c Tue Dec 15 15:33:35 2015 +0100 > @@ -628,7 +628,7 @@ > > now = ngx_time(); > > - if (c->valid_sec < now) { > + if (c->valid_sec <= now) { > > ngx_shmtx_lock(&cache->shpool->mutex); > > @@ -831,7 +831,7 @@ > > if (fcn->error) { > > - if (fcn->valid_sec < ngx_time()) { > + if (fcn->valid_sec <= ngx_time()) { > goto renew; > } > > diff -r def9c9c9ae05 -r 4a1914481e2b src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 > @@ -2815,11 +2815,15 @@ > valid = ngx_http_file_cache_valid(u->conf->cache_valid, > u->headers_in.status_n); > if (valid) { > - r->cache->valid_sec = now + valid; > + valid += now; > + r->cache->valid_sec = valid; > } > } > > - if (valid) { > + if (valid > now > + || (r->upstream->conf->cache_revalidate > + && (u->headers_in.etag || > u->headers_in.last_modified_time))) > + { > r->cache->date = now; > r->cache->body_start = (u_short) (u->buffer.pos - > u->buffer.start); > > @@ -4272,11 +4276,6 @@ > return NGX_OK; > } > > - if (n == 0) { > - u->cacheable = 0; > - return NGX_OK; > - } > - > r->cache->valid_sec = ngx_time() + n; > } > #endif > @@ -4312,7 +4311,7 @@ > > expires = ngx_parse_http_time(h->value.data, h->value.len); > > - if (expires == NGX_ERROR || expires < ngx_time()) { > + if (expires == NGX_ERROR) { > u->cacheable = 0; > return NGX_OK; > } > @@ -4355,10 +4354,6 @@ > n = ngx_atoi(p, len); > > switch (n) { > - case 0: > - u->cacheable = 0; > - /* fall through */ > - > case NGX_ERROR: > return NGX_OK; > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thorvaldur.thorvaldsson at gmail.com Tue Dec 15 15:58:30 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Tue, 15 Dec 2015 16:58:30 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450190015 -3600 # Tue Dec 15 15:33:35 2015 +0100 # Node ID 2990e56d509f5a5ad503babdca888d5890251579 # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb Upstream: Cache stale responses if they may be revalidated. Previously, the proxy cache would never store stale responses, e.g., when the "Cache-Control" header contained "max-age=0", even if the "proxy_cache_revalidate" directive was "on" and the response included both an "ETag" and a "Last-Modified" header. This came as a surprise. Now, a header like "Cache-Control: max-age=0, must-revalidate" can be used to make nginx cache responses that always require revalidation, e.g., when authorization is required (and cheap). 
diff -r def9c9c9ae05 -r 2990e56d509f src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Dec 15 15:33:35 2015 +0100 @@ -628,7 +628,7 @@ now = ngx_time(); - if (c->valid_sec < now) { + if (c->valid_sec <= now) { ngx_shmtx_lock(&cache->shpool->mutex); @@ -831,7 +831,7 @@ if (fcn->error) { - if (fcn->valid_sec < ngx_time()) { + if (fcn->valid_sec <= ngx_time()) { goto renew; } diff -r def9c9c9ae05 -r 2990e56d509f src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 @@ -2815,11 +2815,15 @@ valid = ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n); if (valid) { - r->cache->valid_sec = now + valid; + valid += now; + r->cache->valid_sec = valid; } } - if (valid) { + if (valid > now + || (r->upstream->conf->cache_revalidate + && (u->headers_in.etag || u->headers_in.last_modified))) + { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4272,11 +4276,6 @@ return NGX_OK; } - if (n == 0) { - u->cacheable = 0; - return NGX_OK; - } - r->cache->valid_sec = ngx_time() + n; } #endif @@ -4312,7 +4311,7 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); - if (expires == NGX_ERROR || expires < ngx_time()) { + if (expires == NGX_ERROR) { u->cacheable = 0; return NGX_OK; } @@ -4355,10 +4354,6 @@ n = ngx_atoi(p, len); switch (n) { - case 0: - u->cacheable = 0; - /* fall through */ - case NGX_ERROR: return NGX_OK; From danny at saru.moe Wed Dec 16 03:59:29 2015 From: danny at saru.moe (DannyAAM) Date: Wed, 16 Dec 2015 11:59:29 +0800 Subject: [PATCH] Fix ptr resolving with cname Message-ID: <9d8c7332b7300908414e.1450238369@AAMStorage.lan> # HG changeset patch # User DannyAAM # Date 1449696194 -28800 # Thu Dec 10 05:23:14 2015 +0800 # Branch fix-ptr-cname # Node ID 9d8c7332b7300908414e3bec78a90d9d14b30af8 # Parent 
dfe68c41f34f865bc7b45cbe6b7d0f639de283fc Fix ptr resolving with cname Make ptr process aware of cname & follow it. (This depends on resolver's recursive answer.) diff -r dfe68c41f34f -r 9d8c7332b730 src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c Wed Dec 09 17:47:21 2015 +0300 +++ b/src/core/ngx_resolver.c Thu Dec 10 05:23:14 2015 +0800 @@ -2032,7 +2032,7 @@ int32_t ttl; ngx_int_t octet; ngx_str_t name; - ngx_uint_t i, mask, qident, class; + ngx_uint_t i, mask, qident, type, class; ngx_queue_t *expire_queue; ngx_rbtree_t *tree; ngx_resolver_an_t *an; @@ -2196,9 +2196,14 @@ goto invalid; } + + an = (ngx_resolver_an_t *) &buf[i + 2]; +cname_continue: + class = (an->class_hi << 8) + an->class_lo; + type = (an->type_hi << 8) + an->type_lo; len = (an->len_hi << 8) + an->len_lo; ttl = (an->ttl[0] << 24) + (an->ttl[1] << 16) + (an->ttl[2] << 8) + (an->ttl[3]); @@ -2213,6 +2218,34 @@ ttl = 0; } + /* CNAME processing */ + if (type == NGX_RESOLVE_CNAME) { + do { + if (buf[i] == 0xc0) { + i += 2; + break; + } else { + i += 1 + buf[i]; + } + } while (buf[i] != 0); + an = (ngx_resolver_an_t *) &buf[i]; + len = (an->len_hi << 8) + an->len_lo; + i += sizeof(ngx_resolver_an_t) + len; + + ngx_uint_t nameidx = i; + do { + if (buf[nameidx] == 0xc0) { + nameidx += 2; + break; + } else { + nameidx += 1 + buf[nameidx]; + } + } while (buf[nameidx] != 0); + an = (ngx_resolver_an_t *) &buf[nameidx]; + + goto cname_continue; + } + ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, "resolver qt:%ui cl:%ui len:%uz", (an->type_hi << 8) + an->type_lo, From thorvaldur.thorvaldsson at gmail.com Wed Dec 16 14:30:59 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Wed, 16 Dec 2015 15:30:59 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: Message-ID: Hello! I should improve two things here: 1) Check for a valid Last-Modified header instead of just existence. 
2) After removing the fall-through case in the switch statement, I should rewrite it as an if/else statement. I'll send an updated patch in a moment. I've also written some test cases to accompany this patch. I'll post them separately. Best regards, Thorvaldur On Tue, Dec 15, 2015 at 3:45 PM, Thorvaldur Thorvaldsson wrote: > # HG changeset patch > # User Thorvaldur Thorvaldsson > # Date 1450190015 -3600 > # Tue Dec 15 15:33:35 2015 +0100 > # Node ID 4a1914481e2bd3eecdc5d23e1386c01e4fb08414 > # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb > Upstream: Cache stale responses if they may be revalidated. > > Previously, the proxy cache would never store stale responses, e.g., > when the "Cache-Control" header contained "max-age=0", even if the > "proxy_cache_revalidate" directive was "on" and the response included > both an "ETag" and a "Last-Modified" header. This came as a surprise. > > Now, a header like "Cache-Control: max-age=0, must-revalidate" can be > used to make nginx cache responses that always require revalidation, > e.g., when authorization is required (and cheap). 
> > diff -r def9c9c9ae05 -r 4a1914481e2b src/http/ngx_http_file_cache.c > --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_file_cache.c Tue Dec 15 15:33:35 2015 +0100 > @@ -628,7 +628,7 @@ > > now = ngx_time(); > > - if (c->valid_sec < now) { > + if (c->valid_sec <= now) { > > ngx_shmtx_lock(&cache->shpool->mutex); > > @@ -831,7 +831,7 @@ > > if (fcn->error) { > > - if (fcn->valid_sec < ngx_time()) { > + if (fcn->valid_sec <= ngx_time()) { > goto renew; > } > > diff -r def9c9c9ae05 -r 4a1914481e2b src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 > @@ -2815,11 +2815,15 @@ > valid = ngx_http_file_cache_valid(u->conf->cache_valid, > u->headers_in.status_n); > if (valid) { > - r->cache->valid_sec = now + valid; > + valid += now; > + r->cache->valid_sec = valid; > } > } > > - if (valid) { > + if (valid > now > + || (r->upstream->conf->cache_revalidate > + && (u->headers_in.etag || > u->headers_in.last_modified_time))) > + { > r->cache->date = now; > r->cache->body_start = (u_short) (u->buffer.pos - > u->buffer.start); > > @@ -4272,11 +4276,6 @@ > return NGX_OK; > } > > - if (n == 0) { > - u->cacheable = 0; > - return NGX_OK; > - } > - > r->cache->valid_sec = ngx_time() + n; > } > #endif > @@ -4312,7 +4311,7 @@ > > expires = ngx_parse_http_time(h->value.data, h->value.len); > > - if (expires == NGX_ERROR || expires < ngx_time()) { > + if (expires == NGX_ERROR) { > u->cacheable = 0; > return NGX_OK; > } > @@ -4355,10 +4354,6 @@ > n = ngx_atoi(p, len); > > switch (n) { > - case 0: > - u->cacheable = 0; > - /* fall through */ > - > case NGX_ERROR: > return NGX_OK; > > From thorvaldur.thorvaldsson at gmail.com Wed Dec 16 14:37:27 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Wed, 16 Dec 2015 15:37:27 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. 
In-Reply-To: References: Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450190015 -3600 # Tue Dec 15 15:33:35 2015 +0100 # Node ID 2c1f00c7f857c12587f0ac47323f04c6a881843a # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb Upstream: Cache stale responses if they may be revalidated. Previously, the proxy cache would never store stale responses, e.g., when the "Cache-Control" header contained "max-age=0", even if the "proxy_cache_revalidate" directive was "on" and the response included both an "ETag" and a "Last-Modified" header. This came as a surprise. Now, a header like "Cache-Control: max-age=0, must-revalidate" can be used along with an ETag/Last-Modified header to make nginx cache responses that always require revalidation, e.g., when authorization is required (and cheap). diff -r def9c9c9ae05 -r 2c1f00c7f857 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Dec 15 15:33:35 2015 +0100 @@ -628,7 +628,7 @@ now = ngx_time(); - if (c->valid_sec < now) { + if (c->valid_sec <= now) { ngx_shmtx_lock(&cache->shpool->mutex); @@ -831,7 +831,7 @@ if (fcn->error) { - if (fcn->valid_sec < ngx_time()) { + if (fcn->valid_sec <= ngx_time()) { goto renew; } diff -r def9c9c9ae05 -r 2c1f00c7f857 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 @@ -2815,11 +2815,16 @@ valid = ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n); if (valid) { - r->cache->valid_sec = now + valid; + valid += now; + r->cache->valid_sec = valid; } } - if (valid) { + if (valid > now + || (r->upstream->conf->cache_revalidate + && (u->headers_in.etag + || u->headers_in.last_modified_time != -1))) + { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4272,11 +4277,6 @@ return NGX_OK; } - if (n == 0) { - u->cacheable = 0; - 
return NGX_OK; - } - r->cache->valid_sec = ngx_time() + n; } #endif @@ -4312,7 +4312,7 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); - if (expires == NGX_ERROR || expires < ngx_time()) { + if (expires == NGX_ERROR) { u->cacheable = 0; return NGX_OK; } @@ -4354,15 +4354,9 @@ if (p[0] != '@') { n = ngx_atoi(p, len); - switch (n) { - case 0: - u->cacheable = 0; - /* fall through */ - - case NGX_ERROR: + if (n == NGX_ERROR) { return NGX_OK; - - default: + } else { r->cache->valid_sec = ngx_time() + n; return NGX_OK; } From thorvaldur.thorvaldsson at gmail.com Wed Dec 16 14:39:12 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Wed, 16 Dec 2015 15:39:12 +0100 Subject: [PATCH] Tests for stale responses from upstream that may be cached. Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450275691 -3600 # Wed Dec 16 15:21:31 2015 +0100 # Node ID 2843074e6e5f320ecae750cb3995b0ab4540dcad # Parent 5540ee8a12ce6e86f15f7cce616b231fb0fcaf4c Tests for stale responses from upstream that may be cached. The tests correspond to a patch that's been submitted to nginx-devel. The test cases verify the caching and revalidation behaviour when: 1) "proxy_cache_revalidate" directive is "on"; 2) the upstream response includes a "Cache-Control: max-age=0" header along with various combinations of ETag/Last-Modified headers. 
diff -r 5540ee8a12ce -r 2843074e6e5f proxy_cache_revalidate.t --- a/proxy_cache_revalidate.t Wed Dec 16 15:27:49 2015 +0300 +++ b/proxy_cache_revalidate.t Wed Dec 16 15:21:31 2015 +0100 @@ -21,7 +21,7 @@ select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/http proxy cache rewrite shmem/)->plan(23) +my $t = Test::Nginx->new()->has(qw/http proxy cache rewrite shmem/)->plan(31) ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -68,6 +68,29 @@ add_header X-If-Modified-Since $http_if_modified_since; return 201; } + location /stale-etag/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header Last-Modified; + add_header Cache-Control "max-age=0"; + } + location /stale-last-modified/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header ETag; + add_header Cache-Control "max-age=0"; + } + location /stale-cannot-revalidate/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header ETag; + proxy_hide_header Last-Modified; + add_header Cache-Control "max-age=0"; + } + location /stale-invalid-last-modified/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header ETag; + proxy_hide_header Last-Modified; + add_header Cache-Control "max-age=0"; + add_header Last-Modified "invalid"; + } } } @@ -94,6 +117,26 @@ like(http_get('/etag/t'), qr/X-Cache-Status: MISS.*SEE/ms, 'etag'); like(http_get('/etag/t'), qr/X-Cache-Status: HIT.*SEE/ms, 'etag cached'); +my $CACHE_MISS = qr/X-Cache-Status: MISS.*?SEE/ms; +my $CACHE_REVALIDATED = qr/X-Cache-Status: REVALIDATED.*?SEE/ms; + +like(http_get('/stale-etag/t'), $CACHE_MISS, 'stale etag'); +like(http_get('/stale-etag/t'), $CACHE_REVALIDATED, 'stale etag revalidated'); + +like(http_get('/stale-last-modified/t'), $CACHE_MISS, 'stale last-modified'); +like(http_get('/stale-last-modified/t'), $CACHE_REVALIDATED, + 'stale last-modified revalidated'); + +like(http_get('/stale-cannot-revalidate/t'), $CACHE_MISS, + 'stale cannot revalidate'); +like(http_get('/stale-cannot-revalidate/t'), 
$CACHE_MISS, + 'stale cannot revalidate not cached'); + +like(http_get('/stale-invalid-last-modified/t'), $CACHE_MISS, + 'stale invalid last-modified'); +like(http_get('/stale-invalid-last-modified/t'), $CACHE_MISS, + 'stale invalid last-modified not cached'); + like(http_get('/etag/t2'), qr/X-Cache-Status: MISS.*SEE/ms, 'etag2'); like(http_get('/etag/t2'), qr/X-Cache-Status: HIT.*SEE/ms, 'etag2 cached'); From vlad at cloudflare.com Wed Dec 16 14:47:42 2015 From: vlad at cloudflare.com (Vlad Krasnov) Date: Wed, 16 Dec 2015 14:47:42 +0000 Subject: [PATCH] HTTP/2: HPACK Huffman encoding References: <4E4C3D17-D5C0-48CD-AA0F-60A0F293AFD6@cloudflare.com> Message-ID: -------------- next part -------------- A non-text attachment was scrubbed... Name: HPACK_huffman_encode.patch Type: application/octet-stream Size: 22453 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Dec 16 15:52:49 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Dec 2015 18:52:49 +0300 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: Message-ID: <20151216155249.GU74233@mdounin.ru> Hello! On Wed, Dec 16, 2015 at 03:37:27PM +0100, Thorvaldur Thorvaldsson wrote: > # HG changeset patch > # User Thorvaldur Thorvaldsson > # Date 1450190015 -3600 > # Tue Dec 15 15:33:35 2015 +0100 > # Node ID 2c1f00c7f857c12587f0ac47323f04c6a881843a > # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb > Upstream: Cache stale responses if they may be revalidated. > > Previously, the proxy cache would never store stale responses, e.g., > when the "Cache-Control" header contained "max-age=0", even if the > "proxy_cache_revalidate" directive was "on" and the response included > both an "ETag" and a "Last-Modified" header. This came as a surprise. 
> > Now, a header like "Cache-Control: max-age=0, must-revalidate" can be > used along with an ETag/Last-Modified header to make nginx cache > responses that always require revalidation, e.g., when authorization is > required (and cheap). [...] > diff -r def9c9c9ae05 -r 2c1f00c7f857 src/http/ngx_http_upstream.c > --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 > @@ -2815,11 +2815,16 @@ > valid = ngx_http_file_cache_valid(u->conf->cache_valid, > u->headers_in.status_n); > if (valid) { > - r->cache->valid_sec = now + valid; > + valid += now; > + r->cache->valid_sec = valid; > } > } > > - if (valid) { > + if (valid > now > + || (r->upstream->conf->cache_revalidate > + && (u->headers_in.etag > + || u->headers_in.last_modified_time != -1))) > + { > r->cache->date = now; > r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); As far as I see, this still allows caching of all responses, even ones without Cache-Control/Expires/X-Accel-Expires and proxy_cache_valid configured. This is not something that should happen because of revalidation enabled. You may want to rethink how the patch is expected to work. [...] -- Maxim Dounin http://nginx.org/ From thorvaldur.thorvaldsson at gmail.com Wed Dec 16 16:41:45 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Wed, 16 Dec 2015 17:41:45 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: <20151216155249.GU74233@mdounin.ru> References: <20151216155249.GU74233@mdounin.ru> Message-ID: Hi, Thanks again for your time, Maxim. And you're right, I was assuming max-age=0 in all its forms to be equivalent to no such header at all. I have a simple fix which I'll post in a moment and then go ahead with adding a relevant test case. Just let me know if I'm completely on a wrong track here. 
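For reference, a minimal configuration sketch of the behaviour the patch is after (addresses, paths, and zone names invented for illustration): with "proxy_cache_revalidate" on and a backend sending validators plus "Cache-Control: max-age=0, must-revalidate", the intent is that nginx stores the response and conditionally revalidates it on every request.

```nginx
# Illustrative only -- not part of the patch; names and addresses invented.
proxy_cache_path /var/cache/nginx keys_zone=reval:10m;

server {
    listen 8080;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_cache reval;

        # Ask the upstream "has it changed?" with If-None-Match /
        # If-Modified-Since instead of re-fetching the full body.
        proxy_cache_revalidate on;
    }
}
```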
Best regards, Thorvaldur On Wed, Dec 16, 2015 at 4:52 PM, Maxim Dounin wrote: > Hello! > > On Wed, Dec 16, 2015 at 03:37:27PM +0100, Thorvaldur Thorvaldsson wrote: > >> # HG changeset patch >> # User Thorvaldur Thorvaldsson >> # Date 1450190015 -3600 >> # Tue Dec 15 15:33:35 2015 +0100 >> # Node ID 2c1f00c7f857c12587f0ac47323f04c6a881843a >> # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb >> Upstream: Cache stale responses if they may be revalidated. >> >> Previously, the proxy cache would never store stale responses, e.g., >> when the "Cache-Control" header contained "max-age=0", even if the >> "proxy_cache_revalidate" directive was "on" and the response included >> both an "ETag" and a "Last-Modified" header. This came as a surprise. >> >> Now, a header like "Cache-Control: max-age=0, must-revalidate" can be >> used along with an ETag/Last-Modified header to make nginx cache >> responses that always require revalidation, e.g., when authorization is >> required (and cheap). > > [...] > >> diff -r def9c9c9ae05 -r 2c1f00c7f857 src/http/ngx_http_upstream.c >> --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 >> +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 >> @@ -2815,11 +2815,16 @@ >> valid = ngx_http_file_cache_valid(u->conf->cache_valid, >> u->headers_in.status_n); >> if (valid) { >> - r->cache->valid_sec = now + valid; >> + valid += now; >> + r->cache->valid_sec = valid; >> } >> } >> >> - if (valid) { >> + if (valid > now >> + || (r->upstream->conf->cache_revalidate >> + && (u->headers_in.etag >> + || u->headers_in.last_modified_time != -1))) >> + { >> r->cache->date = now; >> r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); > > As far as I see, this still allows caching of all > responses, even ones without Cache-Control/Expires/X-Accel-Expires > and proxy_cache_valid configured. This is not something that > should happen because of revalidation enabled. 
> > You may want to rethink how the patch is expected to work. > > [...] > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From thorvaldur.thorvaldsson at gmail.com Wed Dec 16 16:42:57 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Wed, 16 Dec 2015 17:42:57 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: <20151216155249.GU74233@mdounin.ru> Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450190015 -3600 # Tue Dec 15 15:33:35 2015 +0100 # Node ID b0d5d1f8fb0822973bf160934fcf40c3b5e87f02 # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb Upstream: Cache stale responses if they may be revalidated. Previously, the proxy cache would never store stale responses, e.g., when the "Cache-Control" header contained "max-age=0", even if the "proxy_cache_revalidate" directive was "on" and the response included both an "ETag" and a "Last-Modified" header. This came as a surprise. Now, a header like "Cache-Control: max-age=0, must-revalidate" can be used along with an ETag/Last-Modified header to make nginx cache responses that always require revalidation, e.g., when authorization is required (and cheap). 
diff -r def9c9c9ae05 -r b0d5d1f8fb08 src/http/ngx_http_file_cache.c --- a/src/http/ngx_http_file_cache.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_file_cache.c Tue Dec 15 15:33:35 2015 +0100 @@ -628,7 +628,7 @@ now = ngx_time(); - if (c->valid_sec < now) { + if (c->valid_sec <= now) { ngx_shmtx_lock(&cache->shpool->mutex); @@ -831,7 +831,7 @@ if (fcn->error) { - if (fcn->valid_sec < ngx_time()) { + if (fcn->valid_sec <= ngx_time()) { goto renew; } diff -r def9c9c9ae05 -r b0d5d1f8fb08 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 @@ -2815,11 +2815,16 @@ valid = ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n); if (valid) { - r->cache->valid_sec = now + valid; + valid += now; + r->cache->valid_sec = valid; } } - if (valid) { + if (valid > now + || (valid && r->upstream->conf->cache_revalidate + && (u->headers_in.etag + || u->headers_in.last_modified_time != -1))) + { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4272,11 +4277,6 @@ return NGX_OK; } - if (n == 0) { - u->cacheable = 0; - return NGX_OK; - } - r->cache->valid_sec = ngx_time() + n; } #endif @@ -4312,7 +4312,7 @@ expires = ngx_parse_http_time(h->value.data, h->value.len); - if (expires == NGX_ERROR || expires < ngx_time()) { + if (expires == NGX_ERROR) { u->cacheable = 0; return NGX_OK; } @@ -4354,15 +4354,9 @@ if (p[0] != '@') { n = ngx_atoi(p, len); - switch (n) { - case 0: - u->cacheable = 0; - /* fall through */ - - case NGX_ERROR: + if (n == NGX_ERROR) { return NGX_OK; - - default: + } else { r->cache->valid_sec = ngx_time() + n; return NGX_OK; } On Wed, Dec 16, 2015 at 5:41 PM, Thorvaldur Thorvaldsson wrote: > Hi, > > Thanks again for your time, Maxim. And you're right, > I was assuming max-age=0 in all its forms to be equivalent > to no such header at all. 
I have a simple fix which I'll > post in a moment and then go ahead with adding a > relevant test case. Just let me know if I'm completely > on a wrong track here. > > Best regards, > Thorvaldur > > On Wed, Dec 16, 2015 at 4:52 PM, Maxim Dounin wrote: >> Hello! >> >> On Wed, Dec 16, 2015 at 03:37:27PM +0100, Thorvaldur Thorvaldsson wrote: >> >>> # HG changeset patch >>> # User Thorvaldur Thorvaldsson >>> # Date 1450190015 -3600 >>> # Tue Dec 15 15:33:35 2015 +0100 >>> # Node ID 2c1f00c7f857c12587f0ac47323f04c6a881843a >>> # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb >>> Upstream: Cache stale responses if they may be revalidated. >>> >>> Previously, the proxy cache would never store stale responses, e.g., >>> when the "Cache-Control" header contained "max-age=0", even if the >>> "proxy_cache_revalidate" directive was "on" and the response included >>> both an "ETag" and a "Last-Modified" header. This came as a surprise. >>> >>> Now, a header like "Cache-Control: max-age=0, must-revalidate" can be >>> used along with an ETag/Last-Modified header to make nginx cache >>> responses that always require revalidation, e.g., when authorization is >>> required (and cheap). >> >> [...] 
>> >>> diff -r def9c9c9ae05 -r 2c1f00c7f857 src/http/ngx_http_upstream.c >>> --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 >>> +++ b/src/http/ngx_http_upstream.c Tue Dec 15 15:33:35 2015 +0100 >>> @@ -2815,11 +2815,16 @@ >>> valid = ngx_http_file_cache_valid(u->conf->cache_valid, >>> u->headers_in.status_n); >>> if (valid) { >>> - r->cache->valid_sec = now + valid; >>> + valid += now; >>> + r->cache->valid_sec = valid; >>> } >>> } >>> >>> - if (valid) { >>> + if (valid > now >>> + || (r->upstream->conf->cache_revalidate >>> + && (u->headers_in.etag >>> + || u->headers_in.last_modified_time != -1))) >>> + { >>> r->cache->date = now; >>> r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); >> >> As far as I see, this still allows caching of all >> responses, even ones without Cache-Control/Expires/X-Accel-Expires >> and proxy_cache_valid configured. This is not something that >> should happen because of revalidation enabled. >> >> You may want to rethink how the patch is expected to work. >> >> [...] >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel From dk at syse.no Wed Dec 16 23:13:16 2015 From: dk at syse.no (Daniel K.) Date: Wed, 16 Dec 2015 23:13:16 +0000 Subject: [PATCH] http_geo_module: warn when using a variable as the value Message-ID: <5671F00C.8070908@syse.no> Something like this would have saved me a lot of time, hopefully it will be of help to others as well. But hey, I got an excuse to dive into the nginx code. Sorry about the non-'hg export'-ness of the patch. I guess Thunderbird will mangle the whitespace, and if you insist I may eventually install hg and do it properly, but here goes... Regards, Daniel K. 
Warn when using a variable as the data value in geo blocks Since the geo module does not expand variables, warn when a variable is used as the data value; as in the default entry of: geo $geo { default $foobar; 192.168.2.1 ''; } --- nginx-1.9.9.orig/src/http/modules/ngx_http_geo_module.c +++ nginx-1.9.9/src/http/modules/ngx_http_geo_module.c @@ -621,6 +621,13 @@ ngx_http_geo(ngx_conf_t *cf, ngx_command goto done; } + if (value[1].data[0] == '$') { + ngx_conf_log_error(NGX_LOG_WARN, cf, 0, + "suspect value \"%V\", " + "the geo module does not expand variables", + &value[1]); + } + if (ctx->ranges) { rv = ngx_http_geo_range(cf, ctx, value); From vlad at cloudflare.com Thu Dec 17 09:36:07 2015 From: vlad at cloudflare.com (Vlad Krasnov) Date: Thu, 17 Dec 2015 01:36:07 -0800 Subject: [PATCH] HTTP/2: HPACK Huffman encoding Message-ID: # HG changeset patch # User Vlad Krasnov # Date 1450274269 28800 # Wed Dec 16 05:57:49 2015 -0800 # Node ID d2e16044797ef7f7e0583e7c6dfdae5402c70d5c # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb HTTP/2: HPACK Huffman encoding Implement HPACK Huffman encoding for HTTP/2. This reduces the size of headers by over 30% on average. 
diff -r def9c9c9ae05 -r d2e16044797e src/http/v2/ngx_http_v2.h --- a/src/http/v2/ngx_http_v2.h Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/v2/ngx_http_v2.h Wed Dec 16 05:57:49 2015 -0800 @@ -51,6 +51,9 @@ #define NGX_HTTP_V2_PRIORITY_FLAG 0x20 +#define NGX_HTTP_V2_ENCODE_RAW 0 +#define NGX_HTTP_V2_ENCODE_HUFF 0x80 + typedef struct ngx_http_v2_connection_s ngx_http_v2_connection_t; typedef struct ngx_http_v2_node_s ngx_http_v2_node_t; typedef struct ngx_http_v2_out_frame_s ngx_http_v2_out_frame_t; @@ -255,6 +258,28 @@ } +static ngx_inline u_char * +ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix, ngx_uint_t value) +{ + if (value < prefix) { + *pos++ |= value; + return pos; + } + + *pos++ |= prefix; + value -= prefix; + + while (value >= 128) { + *pos++ = value % 128 + 128; + value /= 128; + } + + *pos++ = (u_char) value; + + return pos; +} + + void ngx_http_v2_init(ngx_event_t *rev); void ngx_http_v2_request_headers_init(void); @@ -275,7 +300,8 @@ ngx_int_t ngx_http_v2_huff_decode(u_char *state, u_char *src, size_t len, u_char **dst, ngx_uint_t last, ngx_log_t *log); - +u_char *ngx_http_v2_string_encode(u_char *src, size_t len, u_char *dst, + u_char *tmp, ngx_log_t *log, ngx_flag_t lower); #define ngx_http_v2_prefix(bits) ((1 << (bits)) - 1) diff -r def9c9c9ae05 -r d2e16044797e src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/v2/ngx_http_v2_filter_module.c Wed Dec 16 05:57:49 2015 -0800 @@ -25,9 +25,6 @@ #define ngx_http_v2_indexed(i) (128 + (i)) #define ngx_http_v2_inc_indexed(i) (64 + (i)) -#define NGX_HTTP_V2_ENCODE_RAW 0 -#define NGX_HTTP_V2_ENCODE_HUFF 0x80 - #define NGX_HTTP_V2_STATUS_INDEX 8 #define NGX_HTTP_V2_STATUS_200_INDEX 8 #define NGX_HTTP_V2_STATUS_204_INDEX 9 @@ -46,8 +43,9 @@ #define NGX_HTTP_V2_VARY_INDEX 59 -static u_char *ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix, - ngx_uint_t value); +#define ACCEPT_ENCODING 
"\x84\x84\x2d\x69\x5b\x05\x44\x3c\x86\xaa\x6f" + + static ngx_http_v2_out_frame_t *ngx_http_v2_create_headers_frame( ngx_http_request_t *r, u_char *pos, u_char *end); @@ -119,8 +117,8 @@ static ngx_int_t ngx_http_v2_header_filter(ngx_http_request_t *r) { - u_char status, *pos, *start, *p; - size_t len; + u_char status, *pos, *start, *p, *huff; + size_t len, hlen, nlen; ngx_str_t host, location; ngx_uint_t i, port; ngx_list_part_t *part; @@ -343,7 +341,7 @@ #if (NGX_HTTP_GZIP) if (r->gzip_vary) { if (clcf->gzip_vary) { - len += 1 + ngx_http_v2_literal_size("Accept-Encoding"); + len += 1 + ngx_http_v2_literal_size(ACCEPT_ENCODING); } else { r->gzip_vary = 0; @@ -354,6 +352,8 @@ part = &r->headers_out.headers.part; header = part->elts; + hlen = len; + for (i = 0; /* void */; i++) { if (i >= part->nelts) { @@ -384,11 +384,14 @@ return NGX_ERROR; } - len += 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len + nlen = 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len + NGX_HTTP_V2_INT_OCTETS + header[i].value.len; + len += nlen; + hlen = nlen > hlen ? 
nlen : hlen; } pos = ngx_palloc(r->pool, len); + huff = ngx_palloc(r->pool, hlen); if (pos == NULL) { return NGX_ERROR; } @@ -408,12 +411,15 @@ *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_SERVER_INDEX); if (clcf->server_tokens) { - *pos++ = NGX_HTTP_V2_ENCODE_RAW | (sizeof(NGINX_VER) - 1); - pos = ngx_cpymem(pos, NGINX_VER, sizeof(NGINX_VER) - 1); + pos = ngx_http_v2_string_encode((u_char*)NGINX_VER, + sizeof(NGINX_VER) - 1, pos, huff, + r->connection->log, 0); } else { - *pos++ = NGX_HTTP_V2_ENCODE_RAW | (sizeof("nginx") - 1); - pos = ngx_cpymem(pos, "nginx", sizeof("nginx") - 1); + pos = ngx_http_v2_string_encode((u_char*)"nginx", + sizeof("nginx") - 1, pos, huff, + r->connection->log, 0); + } } @@ -453,11 +459,9 @@ r->headers_out.content_type.data = p; } else { - *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - r->headers_out.content_type.len); - pos = ngx_cpymem(pos, r->headers_out.content_type.data, - r->headers_out.content_type.len); + pos = ngx_http_v2_string_encode(r->headers_out.content_type.data, + r->headers_out.content_type.len, + pos, huff, r->connection->log, 0); } } @@ -476,26 +480,27 @@ { *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_LAST_MODIFIED_INDEX); - *pos++ = NGX_HTTP_V2_ENCODE_RAW - | (sizeof("Wed, 31 Dec 1986 18:00:00 GMT") - 1); - pos = ngx_http_time(pos, r->headers_out.last_modified_time); + nlen = sizeof("Wed, 31 Dec 1986 18:00:00 GMT") - 1; + ngx_http_time(pos + 1, r->headers_out.last_modified_time); + + pos = ngx_http_v2_string_encode(pos + 1, nlen, pos, huff, + r->connection->log, 0); } if (r->headers_out.location && r->headers_out.location->value.len) { *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_LOCATION_INDEX); *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - r->headers_out.location->value.len); - pos = ngx_cpymem(pos, r->headers_out.location->value.data, - r->headers_out.location->value.len); + pos = 
ngx_http_v2_string_encode(r->headers_out.location->value.data, + r->headers_out.location->value.len, + pos, huff, r->connection->log, 0); } #if (NGX_HTTP_GZIP) if (r->gzip_vary) { *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_VARY_INDEX); - *pos++ = NGX_HTTP_V2_ENCODE_RAW | (sizeof("Accept-Encoding") - 1); - pos = ngx_cpymem(pos, "Accept-Encoding", sizeof("Accept-Encoding") - 1); + *pos++ = NGX_HTTP_V2_ENCODE_HUFF | (sizeof(ACCEPT_ENCODING) - 1); + pos = ngx_cpymem(pos, ACCEPT_ENCODING, sizeof(ACCEPT_ENCODING) - 1); } #endif @@ -520,16 +525,14 @@ *pos++ = 0; - *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - header[i].key.len); - ngx_strlow(pos, header[i].key.data, header[i].key.len); - pos += header[i].key.len; + pos = ngx_http_v2_string_encode(header[i].key.data, + header[i].key.len, + pos, huff, r->connection->log, 1); - *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - header[i].value.len); - pos = ngx_cpymem(pos, header[i].value.data, header[i].value.len); + pos = ngx_http_v2_string_encode(header[i].value.data, + header[i].value.len, + pos, huff, r->connection->log, 0); + } frame = ngx_http_v2_create_headers_frame(r, start, pos); @@ -556,28 +559,6 @@ } -static u_char * -ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix, ngx_uint_t value) -{ - if (value < prefix) { - *pos++ |= value; - return pos; - } - - *pos++ |= prefix; - value -= prefix; - - while (value >= 128) { - *pos++ = value % 128 + 128; - value /= 128; - } - - *pos++ = (u_char) value; - - return pos; -} - - static ngx_http_v2_out_frame_t * ngx_http_v2_create_headers_frame(ngx_http_request_t *r, u_char *pos, u_char *end) diff -r def9c9c9ae05 -r d2e16044797e src/http/v2/ngx_http_v2_huff_encode.c --- a/src/http/v2/ngx_http_v2_huff_encode.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/v2/ngx_http_v2_huff_encode.c Wed Dec 16 05:57:49 2015 -0800 @@ -1,10 +1,278 @@ /* * Copyright (C) Nginx, Inc. 
- * Copyright (C) Valentin V. Bartenev + * Copyright (C) Vlad Krasnov */ #include #include #include +#include + + +typedef struct { + ngx_uint_t code; + ngx_uint_t len; +} ngx_http_v2_huff_encode_code_t; + + +static ngx_http_v2_huff_encode_code_t ngx_http_v2_huff_encode_codes[256] = +{ + {0x00001ff8, 13}, {0x007fffd8, 23}, {0x0fffffe2, 28}, {0x0fffffe3, 28}, + {0x0fffffe4, 28}, {0x0fffffe5, 28}, {0x0fffffe6, 28}, {0x0fffffe7, 28}, + {0x0fffffe8, 28}, {0x00ffffea, 24}, {0x3ffffffc, 30}, {0x0fffffe9, 28}, + {0x0fffffea, 28}, {0x3ffffffd, 30}, {0x0fffffeb, 28}, {0x0fffffec, 28}, + {0x0fffffed, 28}, {0x0fffffee, 28}, {0x0fffffef, 28}, {0x0ffffff0, 28}, + {0x0ffffff1, 28}, {0x0ffffff2, 28}, {0x3ffffffe, 30}, {0x0ffffff3, 28}, + {0x0ffffff4, 28}, {0x0ffffff5, 28}, {0x0ffffff6, 28}, {0x0ffffff7, 28}, + {0x0ffffff8, 28}, {0x0ffffff9, 28}, {0x0ffffffa, 28}, {0x0ffffffb, 28}, + {0x00000014, 6}, {0x000003f8, 10}, {0x000003f9, 10}, {0x00000ffa, 12}, + {0x00001ff9, 13}, {0x00000015, 6}, {0x000000f8, 8}, {0x000007fa, 11}, + {0x000003fa, 10}, {0x000003fb, 10}, {0x000000f9, 8}, {0x000007fb, 11}, + {0x000000fa, 8}, {0x00000016, 6}, {0x00000017, 6}, {0x00000018, 6}, + {0x00000000, 5}, {0x00000001, 5}, {0x00000002, 5}, {0x00000019, 6}, + {0x0000001a, 6}, {0x0000001b, 6}, {0x0000001c, 6}, {0x0000001d, 6}, + {0x0000001e, 6}, {0x0000001f, 6}, {0x0000005c, 7}, {0x000000fb, 8}, + {0x00007ffc, 15}, {0x00000020, 6}, {0x00000ffb, 12}, {0x000003fc, 10}, + {0x00001ffa, 13}, {0x00000021, 6}, {0x0000005d, 7}, {0x0000005e, 7}, + {0x0000005f, 7}, {0x00000060, 7}, {0x00000061, 7}, {0x00000062, 7}, + {0x00000063, 7}, {0x00000064, 7}, {0x00000065, 7}, {0x00000066, 7}, + {0x00000067, 7}, {0x00000068, 7}, {0x00000069, 7}, {0x0000006a, 7}, + {0x0000006b, 7}, {0x0000006c, 7}, {0x0000006d, 7}, {0x0000006e, 7}, + {0x0000006f, 7}, {0x00000070, 7}, {0x00000071, 7}, {0x00000072, 7}, + {0x000000fc, 8}, {0x00000073, 7}, {0x000000fd, 8}, {0x00001ffb, 13}, + {0x0007fff0, 19}, {0x00001ffc, 13}, {0x00003ffc, 14}, 
{0x00000022, 6}, + {0x00007ffd, 15}, {0x00000003, 5}, {0x00000023, 6}, {0x00000004, 5}, + {0x00000024, 6}, {0x00000005, 5}, {0x00000025, 6}, {0x00000026, 6}, + {0x00000027, 6}, {0x00000006, 5}, {0x00000074, 7}, {0x00000075, 7}, + {0x00000028, 6}, {0x00000029, 6}, {0x0000002a, 6}, {0x00000007, 5}, + {0x0000002b, 6}, {0x00000076, 7}, {0x0000002c, 6}, {0x00000008, 5}, + {0x00000009, 5}, {0x0000002d, 6}, {0x00000077, 7}, {0x00000078, 7}, + {0x00000079, 7}, {0x0000007a, 7}, {0x0000007b, 7}, {0x00007ffe, 15}, + {0x000007fc, 11}, {0x00003ffd, 14}, {0x00001ffd, 13}, {0x0ffffffc, 28}, + {0x000fffe6, 20}, {0x003fffd2, 22}, {0x000fffe7, 20}, {0x000fffe8, 20}, + {0x003fffd3, 22}, {0x003fffd4, 22}, {0x003fffd5, 22}, {0x007fffd9, 23}, + {0x003fffd6, 22}, {0x007fffda, 23}, {0x007fffdb, 23}, {0x007fffdc, 23}, + {0x007fffdd, 23}, {0x007fffde, 23}, {0x00ffffeb, 24}, {0x007fffdf, 23}, + {0x00ffffec, 24}, {0x00ffffed, 24}, {0x003fffd7, 22}, {0x007fffe0, 23}, + {0x00ffffee, 24}, {0x007fffe1, 23}, {0x007fffe2, 23}, {0x007fffe3, 23}, + {0x007fffe4, 23}, {0x001fffdc, 21}, {0x003fffd8, 22}, {0x007fffe5, 23}, + {0x003fffd9, 22}, {0x007fffe6, 23}, {0x007fffe7, 23}, {0x00ffffef, 24}, + {0x003fffda, 22}, {0x001fffdd, 21}, {0x000fffe9, 20}, {0x003fffdb, 22}, + {0x003fffdc, 22}, {0x007fffe8, 23}, {0x007fffe9, 23}, {0x001fffde, 21}, + {0x007fffea, 23}, {0x003fffdd, 22}, {0x003fffde, 22}, {0x00fffff0, 24}, + {0x001fffdf, 21}, {0x003fffdf, 22}, {0x007fffeb, 23}, {0x007fffec, 23}, + {0x001fffe0, 21}, {0x001fffe1, 21}, {0x003fffe0, 22}, {0x001fffe2, 21}, + {0x007fffed, 23}, {0x003fffe1, 22}, {0x007fffee, 23}, {0x007fffef, 23}, + {0x000fffea, 20}, {0x003fffe2, 22}, {0x003fffe3, 22}, {0x003fffe4, 22}, + {0x007ffff0, 23}, {0x003fffe5, 22}, {0x003fffe6, 22}, {0x007ffff1, 23}, + {0x03ffffe0, 26}, {0x03ffffe1, 26}, {0x000fffeb, 20}, {0x0007fff1, 19}, + {0x003fffe7, 22}, {0x007ffff2, 23}, {0x003fffe8, 22}, {0x01ffffec, 25}, + {0x03ffffe2, 26}, {0x03ffffe3, 26}, {0x03ffffe4, 26}, {0x07ffffde, 27}, + 
{0x07ffffdf, 27}, {0x03ffffe5, 26}, {0x00fffff1, 24}, {0x01ffffed, 25}, + {0x0007fff2, 19}, {0x001fffe3, 21}, {0x03ffffe6, 26}, {0x07ffffe0, 27}, + {0x07ffffe1, 27}, {0x03ffffe7, 26}, {0x07ffffe2, 27}, {0x00fffff2, 24}, + {0x001fffe4, 21}, {0x001fffe5, 21}, {0x03ffffe8, 26}, {0x03ffffe9, 26}, + {0x0ffffffd, 28}, {0x07ffffe3, 27}, {0x07ffffe4, 27}, {0x07ffffe5, 27}, + {0x000fffec, 20}, {0x00fffff3, 24}, {0x000fffed, 20}, {0x001fffe6, 21}, + {0x003fffe9, 22}, {0x001fffe7, 21}, {0x001fffe8, 21}, {0x007ffff3, 23}, + {0x003fffea, 22}, {0x003fffeb, 22}, {0x01ffffee, 25}, {0x01ffffef, 25}, + {0x00fffff4, 24}, {0x00fffff5, 24}, {0x03ffffea, 26}, {0x007ffff4, 23}, + {0x03ffffeb, 26}, {0x07ffffe6, 27}, {0x03ffffec, 26}, {0x03ffffed, 26}, + {0x07ffffe7, 27}, {0x07ffffe8, 27}, {0x07ffffe9, 27}, {0x07ffffea, 27}, + {0x07ffffeb, 27}, {0x0ffffffe, 28}, {0x07ffffec, 27}, {0x07ffffed, 27}, + {0x07ffffee, 27}, {0x07ffffef, 27}, {0x07fffff0, 27}, {0x03ffffee, 26} +}; + + +/* Same as above, but embedes to lower case transformations */ +static ngx_http_v2_huff_encode_code_t ngx_http_v2_huff_encode_codes_low[256] = +{ + {0x00001ff8, 13}, {0x007fffd8, 23}, {0x0fffffe2, 28}, {0x0fffffe3, 28}, + {0x0fffffe4, 28}, {0x0fffffe5, 28}, {0x0fffffe6, 28}, {0x0fffffe7, 28}, + {0x0fffffe8, 28}, {0x00ffffea, 24}, {0x3ffffffc, 30}, {0x0fffffe9, 28}, + {0x0fffffea, 28}, {0x3ffffffd, 30}, {0x0fffffeb, 28}, {0x0fffffec, 28}, + {0x0fffffed, 28}, {0x0fffffee, 28}, {0x0fffffef, 28}, {0x0ffffff0, 28}, + {0x0ffffff1, 28}, {0x0ffffff2, 28}, {0x3ffffffe, 30}, {0x0ffffff3, 28}, + {0x0ffffff4, 28}, {0x0ffffff5, 28}, {0x0ffffff6, 28}, {0x0ffffff7, 28}, + {0x0ffffff8, 28}, {0x0ffffff9, 28}, {0x0ffffffa, 28}, {0x0ffffffb, 28}, + {0x00000014, 6}, {0x000003f8, 10}, {0x000003f9, 10}, {0x00000ffa, 12}, + {0x00001ff9, 13}, {0x00000015, 6}, {0x000000f8, 8}, {0x000007fa, 11}, + {0x000003fa, 10}, {0x000003fb, 10}, {0x000000f9, 8}, {0x000007fb, 11}, + {0x000000fa, 8}, {0x00000016, 6}, {0x00000017, 6}, {0x00000018, 6}, + 
{0x00000000, 5}, {0x00000001, 5}, {0x00000002, 5}, {0x00000019, 6}, + {0x0000001a, 6}, {0x0000001b, 6}, {0x0000001c, 6}, {0x0000001d, 6}, + {0x0000001e, 6}, {0x0000001f, 6}, {0x0000005c, 7}, {0x000000fb, 8}, + {0x00007ffc, 15}, {0x00000020, 6}, {0x00000ffb, 12}, {0x000003fc, 10}, + {0x00001ffa, 13}, {0x00000003, 5}, {0x00000023, 6}, {0x00000004, 5}, + {0x00000024, 6}, {0x00000005, 5}, {0x00000025, 6}, {0x00000026, 6}, + {0x00000027, 6}, {0x00000006, 5}, {0x00000074, 7}, {0x00000075, 7}, + {0x00000028, 6}, {0x00000029, 6}, {0x0000002a, 6}, {0x00000007, 5}, + {0x0000002b, 6}, {0x00000076, 7}, {0x0000002c, 6}, {0x00000008, 5}, + {0x00000009, 5}, {0x0000002d, 6}, {0x00000077, 7}, {0x00000078, 7}, + {0x00000079, 7}, {0x0000007a, 7}, {0x0000007b, 7}, {0x00001ffb, 13}, + {0x0007fff0, 19}, {0x00001ffc, 13}, {0x00003ffc, 14}, {0x00000022, 6}, + {0x00007ffd, 15}, {0x00000003, 5}, {0x00000023, 6}, {0x00000004, 5}, + {0x00000024, 6}, {0x00000005, 5}, {0x00000025, 6}, {0x00000026, 6}, + {0x00000027, 6}, {0x00000006, 5}, {0x00000074, 7}, {0x00000075, 7}, + {0x00000028, 6}, {0x00000029, 6}, {0x0000002a, 6}, {0x00000007, 5}, + {0x0000002b, 6}, {0x00000076, 7}, {0x0000002c, 6}, {0x00000008, 5}, + {0x00000009, 5}, {0x0000002d, 6}, {0x00000077, 7}, {0x00000078, 7}, + {0x00000079, 7}, {0x0000007a, 7}, {0x0000007b, 7}, {0x00007ffe, 15}, + {0x000007fc, 11}, {0x00003ffd, 14}, {0x00001ffd, 13}, {0x0ffffffc, 28}, + {0x000fffe6, 20}, {0x003fffd2, 22}, {0x000fffe7, 20}, {0x000fffe8, 20}, + {0x003fffd3, 22}, {0x003fffd4, 22}, {0x003fffd5, 22}, {0x007fffd9, 23}, + {0x003fffd6, 22}, {0x007fffda, 23}, {0x007fffdb, 23}, {0x007fffdc, 23}, + {0x007fffdd, 23}, {0x007fffde, 23}, {0x00ffffeb, 24}, {0x007fffdf, 23}, + {0x00ffffec, 24}, {0x00ffffed, 24}, {0x003fffd7, 22}, {0x007fffe0, 23}, + {0x00ffffee, 24}, {0x007fffe1, 23}, {0x007fffe2, 23}, {0x007fffe3, 23}, + {0x007fffe4, 23}, {0x001fffdc, 21}, {0x003fffd8, 22}, {0x007fffe5, 23}, + {0x003fffd9, 22}, {0x007fffe6, 23}, {0x007fffe7, 23}, {0x00ffffef, 
24}, + {0x003fffda, 22}, {0x001fffdd, 21}, {0x000fffe9, 20}, {0x003fffdb, 22}, + {0x003fffdc, 22}, {0x007fffe8, 23}, {0x007fffe9, 23}, {0x001fffde, 21}, + {0x007fffea, 23}, {0x003fffdd, 22}, {0x003fffde, 22}, {0x00fffff0, 24}, + {0x001fffdf, 21}, {0x003fffdf, 22}, {0x007fffeb, 23}, {0x007fffec, 23}, + {0x001fffe0, 21}, {0x001fffe1, 21}, {0x003fffe0, 22}, {0x001fffe2, 21}, + {0x007fffed, 23}, {0x003fffe1, 22}, {0x007fffee, 23}, {0x007fffef, 23}, + {0x000fffea, 20}, {0x003fffe2, 22}, {0x003fffe3, 22}, {0x003fffe4, 22}, + {0x007ffff0, 23}, {0x003fffe5, 22}, {0x003fffe6, 22}, {0x007ffff1, 23}, + {0x03ffffe0, 26}, {0x03ffffe1, 26}, {0x000fffeb, 20}, {0x0007fff1, 19}, + {0x003fffe7, 22}, {0x007ffff2, 23}, {0x003fffe8, 22}, {0x01ffffec, 25}, + {0x03ffffe2, 26}, {0x03ffffe3, 26}, {0x03ffffe4, 26}, {0x07ffffde, 27}, + {0x07ffffdf, 27}, {0x03ffffe5, 26}, {0x00fffff1, 24}, {0x01ffffed, 25}, + {0x0007fff2, 19}, {0x001fffe3, 21}, {0x03ffffe6, 26}, {0x07ffffe0, 27}, + {0x07ffffe1, 27}, {0x03ffffe7, 26}, {0x07ffffe2, 27}, {0x00fffff2, 24}, + {0x001fffe4, 21}, {0x001fffe5, 21}, {0x03ffffe8, 26}, {0x03ffffe9, 26}, + {0x0ffffffd, 28}, {0x07ffffe3, 27}, {0x07ffffe4, 27}, {0x07ffffe5, 27}, + {0x000fffec, 20}, {0x00fffff3, 24}, {0x000fffed, 20}, {0x001fffe6, 21}, + {0x003fffe9, 22}, {0x001fffe7, 21}, {0x001fffe8, 21}, {0x007ffff3, 23}, + {0x003fffea, 22}, {0x003fffeb, 22}, {0x01ffffee, 25}, {0x01ffffef, 25}, + {0x00fffff4, 24}, {0x00fffff5, 24}, {0x03ffffea, 26}, {0x007ffff4, 23}, + {0x03ffffeb, 26}, {0x07ffffe6, 27}, {0x03ffffec, 26}, {0x03ffffed, 26}, + {0x07ffffe7, 27}, {0x07ffffe8, 27}, {0x07ffffe9, 27}, {0x07ffffea, 27}, + {0x07ffffeb, 27}, {0x0ffffffe, 28}, {0x07ffffec, 27}, {0x07ffffed, 27}, + {0x07ffffee, 27}, {0x07ffffef, 27}, {0x07fffff0, 27}, {0x03ffffee, 26} +}; + + +#if (NGX_HAVE_LITTLE_ENDIAN) +#if (__GNUC__) +static void +put64(u_char *dst, uint64_t val) { + *(uint64_t*)dst = __bswap_64(val); +} +#else /* !__GNUC__ */ + +static void +put64(u_char *dst, uint64_t val) { + 
dst[0] = val >> 56; + dst[1] = val >> 48; + dst[2] = val >> 40; + dst[3] = val >> 32; + dst[4] = val >> 24; + dst[5] = val >> 16; + dst[6] = val >> 8; + dst[7] = val; +} +#endif /* __GNUC__ */ + +#else /* !NGX_HAVE_LITTLE_ENDIAN */ +static void +put64(u_char *dst, uint64_t val) { + *(uint64_t*)dst = val; +} +#endif + + +static ngx_int_t +ngx_http_v2_huff_encode(u_char *src, size_t len, u_char *dst, ngx_log_t *log, + ngx_flag_t lower) +{ + ngx_uint_t inp = 0, outp = 0; + ngx_int_t pending = 0; + ngx_http_v2_huff_encode_code_t next; + uint64_t buf = 0; + ngx_http_v2_huff_encode_code_t *table; + + if (!lower) { + table = ngx_http_v2_huff_encode_codes; + + } else { + table = ngx_http_v2_huff_encode_codes_low; + } + + while (inp < len) { + next = table[src[inp]]; + /* Accumulate 64 bits */ + if ((next.len + pending) < 64) { + buf ^= (uint64_t)next.code << (64 - pending - next.len); + pending += next.len; + + } else { + /* If compressed result is longer than source, no point in using it */ + if ((outp + 8) >= len) { + return NGX_ERROR; + } + + buf ^= (uint64_t)next.code >> (next.len - (64 - pending)); + put64(&dst[outp], buf); + outp += 8; + pending = (next.len - (64 - pending)); + + if (pending) { + buf = (uint64_t)next.code << (64 - pending); + + } else { + buf = 0; + } + } + inp++; + } + + buf ^= 0xffffffffffffffffUL >> pending; + + while (pending > 0) { + dst[outp] = buf >> 56; + buf <<= 8; + pending -= 8; + outp++; + + if (outp >= len) { + return NGX_ERROR; + } + } + + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, log, 0, + "http2 huffman encoding successful; " + "input len: %d, output len %d", len, outp); + return outp; +} + + +u_char * +ngx_http_v2_string_encode(u_char *src, size_t len, u_char *dst, u_char *tmp, + ngx_log_t *log, ngx_flag_t lower) +{ + ngx_int_t hlen = ngx_http_v2_huff_encode(src, len, tmp, log, lower); + + if (hlen != NGX_ERROR) { + *dst = NGX_HTTP_V2_ENCODE_HUFF; + dst = ngx_http_v2_write_int(dst, ngx_http_v2_prefix(7), hlen); + dst = ngx_cpymem(dst, tmp, 
hlen); + + } else { + *dst = NGX_HTTP_V2_ENCODE_RAW; + dst = ngx_http_v2_write_int(dst, ngx_http_v2_prefix(7), len); + + if (lower) { + ngx_strlow(dst, src, len); + dst += len; + + } else { + dst = ngx_cpymem(dst, src, len); + } + } + + return dst; +} From vbart at nginx.com Thu Dec 17 11:06:42 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 17 Dec 2015 14:06:42 +0300 Subject: [PATCH] http_geo_module: warn when using a variable as the value In-Reply-To: <5671F00C.8070908@syse.no> References: <5671F00C.8070908@syse.no> Message-ID: <13069249.Q9Q3cs6eDT@vbart-workstation> On Wednesday 16 December 2015 23:13:16 Daniel K. wrote: > Something like this would have saved me a lot of time, hopefully it will > be of help to others as well. > > But hey, I got an excuse to dive into the nginx code. > > Sorry about the non-'hg export'-ness of the patch. > I guess Thunderbird will mangle the whitespace, and if you insist I may > eventually install hg and do it properly, but here goes... > [..] There are a lot of directives that don't support variables. If a directive supports variables, that is explicitly mentioned in the documentation. What is so special about the geo directive? A patch that adds variable support would have a better chance of being approved than one that adds a warning. wbr, Valentin V. Bartenev From vbart at nginx.com Thu Dec 17 11:30:41 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 17 Dec 2015 14:30:41 +0300 Subject: [PATCH] HTTP/2: HPACK Huffman encoding In-Reply-To: References: Message-ID: <1784789.dEHjQklqMX@vbart-workstation> On Thursday 17 December 2015 01:36:07 Vlad Krasnov wrote: > # HG changeset patch > # User Vlad Krasnov > # Date 1450274269 28800 > # Wed Dec 16 05:57:49 2015 -0800 > # Node ID d2e16044797ef7f7e0583e7c6dfdae5402c70d5c > # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb > HTTP/2: HPACK Huffman encoding > > Implement HPACK Huffman encoding for HTTP/2.
> This reduces the size of headers by over 30% on average. > [..] Thank you for the patch. I'll look at it as time permits. From a brief glance I've spotted a number of style issues. The most obvious is mixed tabs and spaces in indentation. Also I think we shouldn't even try to compress values less than 4 bytes (or even more... like 8-16, IMO the probability of saving 1-4 bytes isn't worth it). wbr, Valentin V. Bartenev From vlad at cloudflare.com Thu Dec 17 11:40:14 2015 From: vlad at cloudflare.com (Vlad Krasnov) Date: Thu, 17 Dec 2015 11:40:14 +0000 Subject: [PATCH] HTTP/2: HPACK Huffman encoding In-Reply-To: <1784789.dEHjQklqMX@vbart-workstation> References: <1784789.dEHjQklqMX@vbart-workstation> Message-ID: <50B36859-37FF-4B81-8EA0-1BEA6E56C87A@cloudflare.com> I will remove the tabs; I'm not sure how they got there. Regarding the minimal compression length: encoding 4-byte values yields real benefits. Values like "gzip" or "link" get one byte shaved off, which makes them 25% smaller, and it adds up. Less than 4 bytes is probably useless, but then again those are not common, and shouldn't be a performance issue. > On 17 Dec 2015, at 11:30, Valentin V. Bartenev wrote: > > On Thursday 17 December 2015 01:36:07 Vlad Krasnov wrote: >> # HG changeset patch >> # User Vlad Krasnov >> # Date 1450274269 28800 >> # Wed Dec 16 05:57:49 2015 -0800 >> # Node ID d2e16044797ef7f7e0583e7c6dfdae5402c70d5c >> # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb >> HTTP/2: HPACK Huffman encoding >> >> Implement HPACK Huffman encoding for HTTP/2. >> This reduces the size of headers by over 30% on average. >> > [..] > > Thank you for the patch. I'll look at it as time permits. > > From a brief glance I've spotted a number of style issues. > The most obvious is mixed tabs and spaces in indentation. > > Also I think we shouldn't even try to compress values less > than 4 bytes (or even more... like 8-16, IMO the probability > of saving 1-4 bytes isn't worth it). > > wbr, Valentin V.
Bartenev > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From vlad at cloudflare.com Thu Dec 17 11:49:32 2015 From: vlad at cloudflare.com (Vlad Krasnov) Date: Thu, 17 Dec 2015 03:49:32 -0800 Subject: [PATCH] HTTP/2: HPACK Huffman encoding Message-ID: # HG changeset patch # User Vlad Krasnov # Date 1450274269 28800 # Wed Dec 16 05:57:49 2015 -0800 # Node ID d672e9c2b814710986683e050491536355c90f0d # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb HTTP/2: HPACK Huffman encoding Implement HPACK Huffman encoding for HTTP/2. This reduces the size of headers by over 30% on average. diff -r def9c9c9ae05 -r d672e9c2b814 src/http/v2/ngx_http_v2.h --- a/src/http/v2/ngx_http_v2.h Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/v2/ngx_http_v2.h Wed Dec 16 05:57:49 2015 -0800 @@ -51,6 +51,9 @@ #define NGX_HTTP_V2_PRIORITY_FLAG 0x20 +#define NGX_HTTP_V2_ENCODE_RAW 0 +#define NGX_HTTP_V2_ENCODE_HUFF 0x80 + typedef struct ngx_http_v2_connection_s ngx_http_v2_connection_t; typedef struct ngx_http_v2_node_s ngx_http_v2_node_t; typedef struct ngx_http_v2_out_frame_s ngx_http_v2_out_frame_t; @@ -255,6 +258,28 @@ } +static ngx_inline u_char * +ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix, ngx_uint_t value) +{ + if (value < prefix) { + *pos++ |= value; + return pos; + } + + *pos++ |= prefix; + value -= prefix; + + while (value >= 128) { + *pos++ = value % 128 + 128; + value /= 128; + } + + *pos++ = (u_char) value; + + return pos; +} + + void ngx_http_v2_init(ngx_event_t *rev); void ngx_http_v2_request_headers_init(void); @@ -275,7 +300,8 @@ ngx_int_t ngx_http_v2_huff_decode(u_char *state, u_char *src, size_t len, u_char **dst, ngx_uint_t last, ngx_log_t *log); - +u_char *ngx_http_v2_string_encode(u_char *src, size_t len, u_char *dst, + u_char *tmp, ngx_log_t *log, ngx_flag_t lower); #define ngx_http_v2_prefix(bits) ((1 << (bits)) - 1) diff -r def9c9c9ae05 -r 
d672e9c2b814 src/http/v2/ngx_http_v2_filter_module.c --- a/src/http/v2/ngx_http_v2_filter_module.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/v2/ngx_http_v2_filter_module.c Wed Dec 16 05:57:49 2015 -0800 @@ -25,9 +25,6 @@ #define ngx_http_v2_indexed(i) (128 + (i)) #define ngx_http_v2_inc_indexed(i) (64 + (i)) -#define NGX_HTTP_V2_ENCODE_RAW 0 -#define NGX_HTTP_V2_ENCODE_HUFF 0x80 - #define NGX_HTTP_V2_STATUS_INDEX 8 #define NGX_HTTP_V2_STATUS_200_INDEX 8 #define NGX_HTTP_V2_STATUS_204_INDEX 9 @@ -46,8 +43,9 @@ #define NGX_HTTP_V2_VARY_INDEX 59 -static u_char *ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix, - ngx_uint_t value); +#define ACCEPT_ENCODING "\x84\x84\x2d\x69\x5b\x05\x44\x3c\x86\xaa\x6f" + + static ngx_http_v2_out_frame_t *ngx_http_v2_create_headers_frame( ngx_http_request_t *r, u_char *pos, u_char *end); @@ -119,8 +117,8 @@ static ngx_int_t ngx_http_v2_header_filter(ngx_http_request_t *r) { - u_char status, *pos, *start, *p; - size_t len; + u_char status, *pos, *start, *p, *huff; + size_t len, hlen, nlen; ngx_str_t host, location; ngx_uint_t i, port; ngx_list_part_t *part; @@ -343,7 +341,7 @@ #if (NGX_HTTP_GZIP) if (r->gzip_vary) { if (clcf->gzip_vary) { - len += 1 + ngx_http_v2_literal_size("Accept-Encoding"); + len += 1 + ngx_http_v2_literal_size(ACCEPT_ENCODING); } else { r->gzip_vary = 0; @@ -354,6 +352,8 @@ part = &r->headers_out.headers.part; header = part->elts; + hlen = len; + for (i = 0; /* void */; i++) { if (i >= part->nelts) { @@ -384,11 +384,14 @@ return NGX_ERROR; } - len += 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len + nlen = 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len + NGX_HTTP_V2_INT_OCTETS + header[i].value.len; + len += nlen; + hlen = nlen > hlen ? 
nlen : hlen; } pos = ngx_palloc(r->pool, len); + huff = ngx_palloc(r->pool, hlen); if (pos == NULL) { return NGX_ERROR; } @@ -408,12 +411,15 @@ *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_SERVER_INDEX); if (clcf->server_tokens) { - *pos++ = NGX_HTTP_V2_ENCODE_RAW | (sizeof(NGINX_VER) - 1); - pos = ngx_cpymem(pos, NGINX_VER, sizeof(NGINX_VER) - 1); + pos = ngx_http_v2_string_encode((u_char*)NGINX_VER, + sizeof(NGINX_VER) - 1, pos, huff, + r->connection->log, 0); } else { - *pos++ = NGX_HTTP_V2_ENCODE_RAW | (sizeof("nginx") - 1); - pos = ngx_cpymem(pos, "nginx", sizeof("nginx") - 1); + pos = ngx_http_v2_string_encode((u_char*)"nginx", + sizeof("nginx") - 1, pos, huff, + r->connection->log, 0); + } } @@ -453,11 +459,9 @@ r->headers_out.content_type.data = p; } else { - *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - r->headers_out.content_type.len); - pos = ngx_cpymem(pos, r->headers_out.content_type.data, - r->headers_out.content_type.len); + pos = ngx_http_v2_string_encode(r->headers_out.content_type.data, + r->headers_out.content_type.len, + pos, huff, r->connection->log, 0); } } @@ -476,26 +480,27 @@ { *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_LAST_MODIFIED_INDEX); - *pos++ = NGX_HTTP_V2_ENCODE_RAW - | (sizeof("Wed, 31 Dec 1986 18:00:00 GMT") - 1); - pos = ngx_http_time(pos, r->headers_out.last_modified_time); + nlen = sizeof("Wed, 31 Dec 1986 18:00:00 GMT") - 1; + ngx_http_time(pos + 1, r->headers_out.last_modified_time); + + pos = ngx_http_v2_string_encode(pos + 1, nlen, pos, huff, + r->connection->log, 0); } if (r->headers_out.location && r->headers_out.location->value.len) { *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_LOCATION_INDEX); *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - r->headers_out.location->value.len); - pos = ngx_cpymem(pos, r->headers_out.location->value.data, - r->headers_out.location->value.len); + pos = 
ngx_http_v2_string_encode(r->headers_out.location->value.data, + r->headers_out.location->value.len, + pos, huff, r->connection->log, 0); } #if (NGX_HTTP_GZIP) if (r->gzip_vary) { *pos++ = ngx_http_v2_inc_indexed(NGX_HTTP_V2_VARY_INDEX); - *pos++ = NGX_HTTP_V2_ENCODE_RAW | (sizeof("Accept-Encoding") - 1); - pos = ngx_cpymem(pos, "Accept-Encoding", sizeof("Accept-Encoding") - 1); + *pos++ = NGX_HTTP_V2_ENCODE_HUFF | (sizeof(ACCEPT_ENCODING) - 1); + pos = ngx_cpymem(pos, ACCEPT_ENCODING, sizeof(ACCEPT_ENCODING) - 1); } #endif @@ -520,16 +525,14 @@ *pos++ = 0; - *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - header[i].key.len); - ngx_strlow(pos, header[i].key.data, header[i].key.len); - pos += header[i].key.len; + pos = ngx_http_v2_string_encode(header[i].key.data, + header[i].key.len, + pos, huff, r->connection->log, 1); - *pos = NGX_HTTP_V2_ENCODE_RAW; - pos = ngx_http_v2_write_int(pos, ngx_http_v2_prefix(7), - header[i].value.len); - pos = ngx_cpymem(pos, header[i].value.data, header[i].value.len); + pos = ngx_http_v2_string_encode(header[i].value.data, + header[i].value.len, + pos, huff, r->connection->log, 0); + } frame = ngx_http_v2_create_headers_frame(r, start, pos); @@ -556,28 +559,6 @@ } -static u_char * -ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix, ngx_uint_t value) -{ - if (value < prefix) { - *pos++ |= value; - return pos; - } - - *pos++ |= prefix; - value -= prefix; - - while (value >= 128) { - *pos++ = value % 128 + 128; - value /= 128; - } - - *pos++ = (u_char) value; - - return pos; -} - - static ngx_http_v2_out_frame_t * ngx_http_v2_create_headers_frame(ngx_http_request_t *r, u_char *pos, u_char *end) diff -r def9c9c9ae05 -r d672e9c2b814 src/http/v2/ngx_http_v2_huff_encode.c --- a/src/http/v2/ngx_http_v2_huff_encode.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/v2/ngx_http_v2_huff_encode.c Wed Dec 16 05:57:49 2015 -0800 @@ -1,10 +1,296 @@ /* * Copyright (C) Nginx, Inc. 
- * Copyright (C) Valentin V. Bartenev + * Copyright (C) Vlad Krasnov */ #include #include #include +#if (__GNUC__) +#include +#endif + + +typedef struct { + ngx_uint_t code; + ngx_uint_t len; +} ngx_http_v2_huff_encode_code_t; + + +static ngx_http_v2_huff_encode_code_t ngx_http_v2_huff_encode_codes[256] = +{ + {0x00001ff8, 13}, {0x007fffd8, 23}, {0x0fffffe2, 28}, {0x0fffffe3, 28}, + {0x0fffffe4, 28}, {0x0fffffe5, 28}, {0x0fffffe6, 28}, {0x0fffffe7, 28}, + {0x0fffffe8, 28}, {0x00ffffea, 24}, {0x3ffffffc, 30}, {0x0fffffe9, 28}, + {0x0fffffea, 28}, {0x3ffffffd, 30}, {0x0fffffeb, 28}, {0x0fffffec, 28}, + {0x0fffffed, 28}, {0x0fffffee, 28}, {0x0fffffef, 28}, {0x0ffffff0, 28}, + {0x0ffffff1, 28}, {0x0ffffff2, 28}, {0x3ffffffe, 30}, {0x0ffffff3, 28}, + {0x0ffffff4, 28}, {0x0ffffff5, 28}, {0x0ffffff6, 28}, {0x0ffffff7, 28}, + {0x0ffffff8, 28}, {0x0ffffff9, 28}, {0x0ffffffa, 28}, {0x0ffffffb, 28}, + {0x00000014, 6}, {0x000003f8, 10}, {0x000003f9, 10}, {0x00000ffa, 12}, + {0x00001ff9, 13}, {0x00000015, 6}, {0x000000f8, 8}, {0x000007fa, 11}, + {0x000003fa, 10}, {0x000003fb, 10}, {0x000000f9, 8}, {0x000007fb, 11}, + {0x000000fa, 8}, {0x00000016, 6}, {0x00000017, 6}, {0x00000018, 6}, + {0x00000000, 5}, {0x00000001, 5}, {0x00000002, 5}, {0x00000019, 6}, + {0x0000001a, 6}, {0x0000001b, 6}, {0x0000001c, 6}, {0x0000001d, 6}, + {0x0000001e, 6}, {0x0000001f, 6}, {0x0000005c, 7}, {0x000000fb, 8}, + {0x00007ffc, 15}, {0x00000020, 6}, {0x00000ffb, 12}, {0x000003fc, 10}, + {0x00001ffa, 13}, {0x00000021, 6}, {0x0000005d, 7}, {0x0000005e, 7}, + {0x0000005f, 7}, {0x00000060, 7}, {0x00000061, 7}, {0x00000062, 7}, + {0x00000063, 7}, {0x00000064, 7}, {0x00000065, 7}, {0x00000066, 7}, + {0x00000067, 7}, {0x00000068, 7}, {0x00000069, 7}, {0x0000006a, 7}, + {0x0000006b, 7}, {0x0000006c, 7}, {0x0000006d, 7}, {0x0000006e, 7}, + {0x0000006f, 7}, {0x00000070, 7}, {0x00000071, 7}, {0x00000072, 7}, + {0x000000fc, 8}, {0x00000073, 7}, {0x000000fd, 8}, {0x00001ffb, 13}, + {0x0007fff0, 19}, {0x00001ffc, 
13}, {0x00003ffc, 14}, {0x00000022, 6}, + {0x00007ffd, 15}, {0x00000003, 5}, {0x00000023, 6}, {0x00000004, 5}, + {0x00000024, 6}, {0x00000005, 5}, {0x00000025, 6}, {0x00000026, 6}, + {0x00000027, 6}, {0x00000006, 5}, {0x00000074, 7}, {0x00000075, 7}, + {0x00000028, 6}, {0x00000029, 6}, {0x0000002a, 6}, {0x00000007, 5}, + {0x0000002b, 6}, {0x00000076, 7}, {0x0000002c, 6}, {0x00000008, 5}, + {0x00000009, 5}, {0x0000002d, 6}, {0x00000077, 7}, {0x00000078, 7}, + {0x00000079, 7}, {0x0000007a, 7}, {0x0000007b, 7}, {0x00007ffe, 15}, + {0x000007fc, 11}, {0x00003ffd, 14}, {0x00001ffd, 13}, {0x0ffffffc, 28}, + {0x000fffe6, 20}, {0x003fffd2, 22}, {0x000fffe7, 20}, {0x000fffe8, 20}, + {0x003fffd3, 22}, {0x003fffd4, 22}, {0x003fffd5, 22}, {0x007fffd9, 23}, + {0x003fffd6, 22}, {0x007fffda, 23}, {0x007fffdb, 23}, {0x007fffdc, 23}, + {0x007fffdd, 23}, {0x007fffde, 23}, {0x00ffffeb, 24}, {0x007fffdf, 23}, + {0x00ffffec, 24}, {0x00ffffed, 24}, {0x003fffd7, 22}, {0x007fffe0, 23}, + {0x00ffffee, 24}, {0x007fffe1, 23}, {0x007fffe2, 23}, {0x007fffe3, 23}, + {0x007fffe4, 23}, {0x001fffdc, 21}, {0x003fffd8, 22}, {0x007fffe5, 23}, + {0x003fffd9, 22}, {0x007fffe6, 23}, {0x007fffe7, 23}, {0x00ffffef, 24}, + {0x003fffda, 22}, {0x001fffdd, 21}, {0x000fffe9, 20}, {0x003fffdb, 22}, + {0x003fffdc, 22}, {0x007fffe8, 23}, {0x007fffe9, 23}, {0x001fffde, 21}, + {0x007fffea, 23}, {0x003fffdd, 22}, {0x003fffde, 22}, {0x00fffff0, 24}, + {0x001fffdf, 21}, {0x003fffdf, 22}, {0x007fffeb, 23}, {0x007fffec, 23}, + {0x001fffe0, 21}, {0x001fffe1, 21}, {0x003fffe0, 22}, {0x001fffe2, 21}, + {0x007fffed, 23}, {0x003fffe1, 22}, {0x007fffee, 23}, {0x007fffef, 23}, + {0x000fffea, 20}, {0x003fffe2, 22}, {0x003fffe3, 22}, {0x003fffe4, 22}, + {0x007ffff0, 23}, {0x003fffe5, 22}, {0x003fffe6, 22}, {0x007ffff1, 23}, + {0x03ffffe0, 26}, {0x03ffffe1, 26}, {0x000fffeb, 20}, {0x0007fff1, 19}, + {0x003fffe7, 22}, {0x007ffff2, 23}, {0x003fffe8, 22}, {0x01ffffec, 25}, + {0x03ffffe2, 26}, {0x03ffffe3, 26}, {0x03ffffe4, 26}, 
{0x07ffffde, 27}, + {0x07ffffdf, 27}, {0x03ffffe5, 26}, {0x00fffff1, 24}, {0x01ffffed, 25}, + {0x0007fff2, 19}, {0x001fffe3, 21}, {0x03ffffe6, 26}, {0x07ffffe0, 27}, + {0x07ffffe1, 27}, {0x03ffffe7, 26}, {0x07ffffe2, 27}, {0x00fffff2, 24}, + {0x001fffe4, 21}, {0x001fffe5, 21}, {0x03ffffe8, 26}, {0x03ffffe9, 26}, + {0x0ffffffd, 28}, {0x07ffffe3, 27}, {0x07ffffe4, 27}, {0x07ffffe5, 27}, + {0x000fffec, 20}, {0x00fffff3, 24}, {0x000fffed, 20}, {0x001fffe6, 21}, + {0x003fffe9, 22}, {0x001fffe7, 21}, {0x001fffe8, 21}, {0x007ffff3, 23}, + {0x003fffea, 22}, {0x003fffeb, 22}, {0x01ffffee, 25}, {0x01ffffef, 25}, + {0x00fffff4, 24}, {0x00fffff5, 24}, {0x03ffffea, 26}, {0x007ffff4, 23}, + {0x03ffffeb, 26}, {0x07ffffe6, 27}, {0x03ffffec, 26}, {0x03ffffed, 26}, + {0x07ffffe7, 27}, {0x07ffffe8, 27}, {0x07ffffe9, 27}, {0x07ffffea, 27}, + {0x07ffffeb, 27}, {0x0ffffffe, 28}, {0x07ffffec, 27}, {0x07ffffed, 27}, + {0x07ffffee, 27}, {0x07ffffef, 27}, {0x07fffff0, 27}, {0x03ffffee, 26} +}; + + +/* Same as above, but embedes to lower case transformations */ +static ngx_http_v2_huff_encode_code_t ngx_http_v2_huff_encode_codes_low[256] = +{ + {0x00001ff8, 13}, {0x007fffd8, 23}, {0x0fffffe2, 28}, {0x0fffffe3, 28}, + {0x0fffffe4, 28}, {0x0fffffe5, 28}, {0x0fffffe6, 28}, {0x0fffffe7, 28}, + {0x0fffffe8, 28}, {0x00ffffea, 24}, {0x3ffffffc, 30}, {0x0fffffe9, 28}, + {0x0fffffea, 28}, {0x3ffffffd, 30}, {0x0fffffeb, 28}, {0x0fffffec, 28}, + {0x0fffffed, 28}, {0x0fffffee, 28}, {0x0fffffef, 28}, {0x0ffffff0, 28}, + {0x0ffffff1, 28}, {0x0ffffff2, 28}, {0x3ffffffe, 30}, {0x0ffffff3, 28}, + {0x0ffffff4, 28}, {0x0ffffff5, 28}, {0x0ffffff6, 28}, {0x0ffffff7, 28}, + {0x0ffffff8, 28}, {0x0ffffff9, 28}, {0x0ffffffa, 28}, {0x0ffffffb, 28}, + {0x00000014, 6}, {0x000003f8, 10}, {0x000003f9, 10}, {0x00000ffa, 12}, + {0x00001ff9, 13}, {0x00000015, 6}, {0x000000f8, 8}, {0x000007fa, 11}, + {0x000003fa, 10}, {0x000003fb, 10}, {0x000000f9, 8}, {0x000007fb, 11}, + {0x000000fa, 8}, {0x00000016, 6}, {0x00000017, 6}, 
{0x00000018, 6},
+    {0x00000000, 5}, {0x00000001, 5}, {0x00000002, 5}, {0x00000019, 6},
+    {0x0000001a, 6}, {0x0000001b, 6}, {0x0000001c, 6}, {0x0000001d, 6},
+    {0x0000001e, 6}, {0x0000001f, 6}, {0x0000005c, 7}, {0x000000fb, 8},
+    {0x00007ffc, 15}, {0x00000020, 6}, {0x00000ffb, 12}, {0x000003fc, 10},
+    {0x00001ffa, 13}, {0x00000003, 5}, {0x00000023, 6}, {0x00000004, 5},
+    {0x00000024, 6}, {0x00000005, 5}, {0x00000025, 6}, {0x00000026, 6},
+    {0x00000027, 6}, {0x00000006, 5}, {0x00000074, 7}, {0x00000075, 7},
+    {0x00000028, 6}, {0x00000029, 6}, {0x0000002a, 6}, {0x00000007, 5},
+    {0x0000002b, 6}, {0x00000076, 7}, {0x0000002c, 6}, {0x00000008, 5},
+    {0x00000009, 5}, {0x0000002d, 6}, {0x00000077, 7}, {0x00000078, 7},
+    {0x00000079, 7}, {0x0000007a, 7}, {0x0000007b, 7}, {0x00001ffb, 13},
+    {0x0007fff0, 19}, {0x00001ffc, 13}, {0x00003ffc, 14}, {0x00000022, 6},
+    {0x00007ffd, 15}, {0x00000003, 5}, {0x00000023, 6}, {0x00000004, 5},
+    {0x00000024, 6}, {0x00000005, 5}, {0x00000025, 6}, {0x00000026, 6},
+    {0x00000027, 6}, {0x00000006, 5}, {0x00000074, 7}, {0x00000075, 7},
+    {0x00000028, 6}, {0x00000029, 6}, {0x0000002a, 6}, {0x00000007, 5},
+    {0x0000002b, 6}, {0x00000076, 7}, {0x0000002c, 6}, {0x00000008, 5},
+    {0x00000009, 5}, {0x0000002d, 6}, {0x00000077, 7}, {0x00000078, 7},
+    {0x00000079, 7}, {0x0000007a, 7}, {0x0000007b, 7}, {0x00007ffe, 15},
+    {0x000007fc, 11}, {0x00003ffd, 14}, {0x00001ffd, 13}, {0x0ffffffc, 28},
+    {0x000fffe6, 20}, {0x003fffd2, 22}, {0x000fffe7, 20}, {0x000fffe8, 20},
+    {0x003fffd3, 22}, {0x003fffd4, 22}, {0x003fffd5, 22}, {0x007fffd9, 23},
+    {0x003fffd6, 22}, {0x007fffda, 23}, {0x007fffdb, 23}, {0x007fffdc, 23},
+    {0x007fffdd, 23}, {0x007fffde, 23}, {0x00ffffeb, 24}, {0x007fffdf, 23},
+    {0x00ffffec, 24}, {0x00ffffed, 24}, {0x003fffd7, 22}, {0x007fffe0, 23},
+    {0x00ffffee, 24}, {0x007fffe1, 23}, {0x007fffe2, 23}, {0x007fffe3, 23},
+    {0x007fffe4, 23}, {0x001fffdc, 21}, {0x003fffd8, 22}, {0x007fffe5, 23},
+    {0x003fffd9, 22}, {0x007fffe6, 23}, {0x007fffe7, 23}, {0x00ffffef, 24},
+    {0x003fffda, 22}, {0x001fffdd, 21}, {0x000fffe9, 20}, {0x003fffdb, 22},
+    {0x003fffdc, 22}, {0x007fffe8, 23}, {0x007fffe9, 23}, {0x001fffde, 21},
+    {0x007fffea, 23}, {0x003fffdd, 22}, {0x003fffde, 22}, {0x00fffff0, 24},
+    {0x001fffdf, 21}, {0x003fffdf, 22}, {0x007fffeb, 23}, {0x007fffec, 23},
+    {0x001fffe0, 21}, {0x001fffe1, 21}, {0x003fffe0, 22}, {0x001fffe2, 21},
+    {0x007fffed, 23}, {0x003fffe1, 22}, {0x007fffee, 23}, {0x007fffef, 23},
+    {0x000fffea, 20}, {0x003fffe2, 22}, {0x003fffe3, 22}, {0x003fffe4, 22},
+    {0x007ffff0, 23}, {0x003fffe5, 22}, {0x003fffe6, 22}, {0x007ffff1, 23},
+    {0x03ffffe0, 26}, {0x03ffffe1, 26}, {0x000fffeb, 20}, {0x0007fff1, 19},
+    {0x003fffe7, 22}, {0x007ffff2, 23}, {0x003fffe8, 22}, {0x01ffffec, 25},
+    {0x03ffffe2, 26}, {0x03ffffe3, 26}, {0x03ffffe4, 26}, {0x07ffffde, 27},
+    {0x07ffffdf, 27}, {0x03ffffe5, 26}, {0x00fffff1, 24}, {0x01ffffed, 25},
+    {0x0007fff2, 19}, {0x001fffe3, 21}, {0x03ffffe6, 26}, {0x07ffffe0, 27},
+    {0x07ffffe1, 27}, {0x03ffffe7, 26}, {0x07ffffe2, 27}, {0x00fffff2, 24},
+    {0x001fffe4, 21}, {0x001fffe5, 21}, {0x03ffffe8, 26}, {0x03ffffe9, 26},
+    {0x0ffffffd, 28}, {0x07ffffe3, 27}, {0x07ffffe4, 27}, {0x07ffffe5, 27},
+    {0x000fffec, 20}, {0x00fffff3, 24}, {0x000fffed, 20}, {0x001fffe6, 21},
+    {0x003fffe9, 22}, {0x001fffe7, 21}, {0x001fffe8, 21}, {0x007ffff3, 23},
+    {0x003fffea, 22}, {0x003fffeb, 22}, {0x01ffffee, 25}, {0x01ffffef, 25},
+    {0x00fffff4, 24}, {0x00fffff5, 24}, {0x03ffffea, 26}, {0x007ffff4, 23},
+    {0x03ffffeb, 26}, {0x07ffffe6, 27}, {0x03ffffec, 26}, {0x03ffffed, 26},
+    {0x07ffffe7, 27}, {0x07ffffe8, 27}, {0x07ffffe9, 27}, {0x07ffffea, 27},
+    {0x07ffffeb, 27}, {0x0ffffffe, 28}, {0x07ffffec, 27}, {0x07ffffed, 27},
+    {0x07ffffee, 27}, {0x07ffffef, 27}, {0x07fffff0, 27}, {0x03ffffee, 26}
+};
+
+
+#if (NGX_HAVE_LITTLE_ENDIAN)
+#if (__GNUC__)
+
+static void
+put64(u_char *dst, uint64_t val)
+{
+    *(uint64_t*)dst = __bswap_64(val);
+}
+
+#else /* !__GNUC__ */
+
+static void
+put64(u_char *dst, uint64_t val)
+{
+    dst[0] = val >> 56;
+    dst[1] = val >> 48;
+    dst[2] = val >> 40;
+    dst[3] = val >> 32;
+    dst[4] = val >> 24;
+    dst[5] = val >> 16;
+    dst[6] = val >> 8;
+    dst[7] = val;
+}
+#endif /* __GNUC__ */
+
+#else /* !NGX_HAVE_LITTLE_ENDIAN */
+
+static void
+put64(u_char *dst, uint64_t val)
+{
+    *(uint64_t*)dst = val;
+}
+
+#endif
+
+
+static ngx_int_t
+ngx_http_v2_huff_encode(u_char *src, size_t len, u_char *dst, ngx_log_t *log,
+    ngx_flag_t lower)
+{
+    ngx_uint_t                      inp, outp;
+    ngx_int_t                       pending;
+    uint64_t                        buf;
+    ngx_http_v2_huff_encode_code_t  next;
+    ngx_http_v2_huff_encode_code_t  *table;
+
+    if (!lower) {
+        table = ngx_http_v2_huff_encode_codes;
+
+    } else {
+        table = ngx_http_v2_huff_encode_codes_low;
+    }
+
+    inp = 0;
+    outp = 0;
+    pending = 0;
+    buf = 0;
+
+    while (inp < len) {
+        next = table[src[inp]];
+        /* Accumulate 64 bits */
+        if ((next.len + pending) < 64) {
+            buf ^= (uint64_t)next.code << (64 - pending - next.len);
+            pending += next.len;
+
+        } else {
+            /* If compressed result is longer than source, no point in using it */
+            if ((outp + 8) >= len) {
+                return NGX_ERROR;
+            }
+
+            buf ^= (uint64_t)next.code >> (next.len - (64 - pending));
+            put64(&dst[outp], buf);
+            outp += 8;
+            pending = (next.len - (64 - pending));
+
+            if (pending) {
+                buf = (uint64_t)next.code << (64 - pending);
+
+            } else {
+                buf = 0;
+            }
+        }
+
+        inp++;
+    }
+
+    buf ^= 0xffffffffffffffffUL >> pending;
+
+    while (pending > 0) {
+        dst[outp] = buf >> 56;
+        buf <<= 8;
+        pending -= 8;
+        outp++;
+
+        if (outp >= len) {
+            return NGX_ERROR;
+        }
+    }
+
+    ngx_log_debug2(NGX_LOG_DEBUG_HTTP, log, 0,
+                   "http2 huffman encoding successful; "
+                   "input len: %d, output len %d", len, outp);
+
+    return outp;
+}
+
+
+u_char *
+ngx_http_v2_string_encode(u_char *src, size_t len, u_char *dst, u_char *tmp,
+    ngx_log_t *log, ngx_flag_t lower)
+{
+    ngx_int_t  hlen;
+
+    hlen = ngx_http_v2_huff_encode(src, len, tmp, log, lower);
+
+    if (hlen != NGX_ERROR) {
+        *dst = NGX_HTTP_V2_ENCODE_HUFF;
+        dst = ngx_http_v2_write_int(dst, ngx_http_v2_prefix(7), hlen);
+        dst = ngx_cpymem(dst, tmp, hlen);
+
+    } else {
+        *dst = NGX_HTTP_V2_ENCODE_RAW;
+        dst = ngx_http_v2_write_int(dst, ngx_http_v2_prefix(7), len);
+
+        if (lower) {
+            ngx_strlow(dst, src, len);
+            dst += len;
+
+        } else {
+            dst = ngx_cpymem(dst, src, len);
+        }
+    }
+
+    return dst;
+}

From mdounin at mdounin.ru Thu Dec 17 12:44:42 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2015 15:44:42 +0300 Subject: [PATCH] http_geo_module: warn when using a variable as the value In-Reply-To: <5671F00C.8070908@syse.no> References: <5671F00C.8070908@syse.no> Message-ID: <20151217124441.GW74233@mdounin.ru> Hello! On Wed, Dec 16, 2015 at 11:13:16PM +0000, Daniel K. wrote: [...] > Warn when using a variable as the data value in geo blocks > > Since the geo module does not expand variables, warn when a variable is > used as the data value; as in the default entry of: > > geo $geo { > default $foobar; > 192.168.2.1 ''; > } As the geo module is currently probably the only way to insert plain '$' into a place which expands variables, I'm not sure if it's a good idea to add such a patch, at least before we'll resolve the '$' escaping issue. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Thu Dec 17 13:39:40 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 17 Dec 2015 16:39:40 +0300 Subject: [PATCH] HTTP/2: fixed premature connection closure during reload (ticket #626). In-Reply-To: <565ECA83.1060905@onapp.com> References: <565ECA83.1060905@onapp.com> Message-ID: <1763175.YnBMWZvfMS@vbart-workstation> On Wednesday 02 December 2015 18:40:03 Wai Keen Woon wrote: > # HG changeset patch > # User Wai Keen Woon > # Date 1449052722 -28800 > # Wed Dec 02 18:38:42 2015 +0800 > # Node ID 4b7ef34610ebe00eb6a6d52008a48f9864dadd33 > # Parent be3aed17689c0edd36c2025ff5c36fe493b68bd7 > HTTP/2: fixed premature connection closure during reload (ticket #626).
> > HTTP/2 transfers may be closed prematurely during nginx reload, which logs > "open socket #X left in connection Y" alerts. > > ngx_add_timer() isn't called when frames are sent faster than they can be > created. The worker process therefore exits because there are no more timers > scheduled, even though there are more data frames and finalization > forthcoming. > > diff -r be3aed17689c -r 4b7ef34610eb src/http/v2/ngx_http_v2.c > --- a/src/http/v2/ngx_http_v2.c Wed Dec 02 01:06:54 2015 +0300 > +++ b/src/http/v2/ngx_http_v2.c Wed Dec 02 18:38:42 2015 +0800 > @@ -535,7 +535,7 @@ > c->tcp_nodelay = NGX_TCP_NODELAY_SET; > } > > - if (cl) { > + if (cl || h2c->processing) { > ngx_add_timer(wev, clcf->send_timeout); > > } else { > [..] This is a completely wrong approach to solving the issue. For example, with your patch the send timeout will be triggered in cases where nginx is waiting for a backend, delaying a request with limit_req, or resolving an address. Please try the attached patch instead. wbr, Valentin V. Bartenev -------------- next part -------------- A non-text attachment was scrubbed... Name: timeouts.patch Type: text/x-patch Size: 17208 bytes Desc: not available URL: From dk at syse.no Thu Dec 17 13:43:22 2015 From: dk at syse.no (Daniel K.) Date: Thu, 17 Dec 2015 13:43:22 +0000 Subject: [PATCH] http_geo_module: warn when using a variable as the value In-Reply-To: <13069249.Q9Q3cs6eDT@vbart-workstation> References: <5671F00C.8070908@syse.no> <13069249.Q9Q3cs6eDT@vbart-workstation> Message-ID: <5672BBFA.8040400@syse.no> On 12/17/2015 11:06 AM, Valentin V. Bartenev wrote: > On Wednesday 16 December 2015 23:13:16 Daniel K. wrote: >> [A patch to warn when using variables with geo] > > There are a lot of directives that don't support variables. If the > directive supports variables, then it is explicitly mentioned in > the documentation. I know that now, after careful reading. No-one reads the docs that carefully on the first reading, though.
:) > What is so special about the geo directive? Nothing, except that this one bit me, and I wanted to test the waters to see if something like the patch I sent could be accepted. Thanks for clarifying what you'd like to see instead of the warning. Daniel K. From dk at syse.no Thu Dec 17 13:47:50 2015 From: dk at syse.no (Daniel K.) Date: Thu, 17 Dec 2015 13:47:50 +0000 Subject: [PATCH] http_geo_module: warn when using a variable as the value In-Reply-To: <20151217124441.GW74233@mdounin.ru> References: <5671F00C.8070908@syse.no> <20151217124441.GW74233@mdounin.ru> Message-ID: <5672BD06.8090508@syse.no> On 12/17/2015 12:44 PM, Maxim Dounin wrote: > On Wed, Dec 16, 2015 at 11:13:16PM +0000, Daniel K. wrote: >> Warn when using a variable as the data value in geo blocks >> >> Since the geo module does not expand variables, warn when a variable is >> used as the data value; > > As the geo module is currently probably the only way to insert > plain '$' into a place which expands variables, I'm not sure if > it's a good idea to add such a patch, at least before we'll > resolve the '$' escaping issue. It was just showing a warning to the possibly unsuspecting user, nothing else. Valentin suggested adding variable support. From your comment I take it that variable support is not wanted, at least not yet, until a mechanism to escape '$' is implemented? Daniel K. From mdounin at mdounin.ru Thu Dec 17 14:09:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2015 14:09:16 +0000 Subject: [nginx] Version bump. Message-ID: details: http://hg.nginx.org/nginx/rev/fef42206bae7 branches: changeset: 6330:fef42206bae7 user: Maxim Dounin date: Thu Dec 17 16:38:51 2015 +0300 description: Version bump.
diffstat: src/core/nginx.h | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-) diffs (14 lines): diff --git a/src/core/nginx.h b/src/core/nginx.h --- a/src/core/nginx.h +++ b/src/core/nginx.h @@ -9,8 +9,8 @@ #define _NGINX_H_INCLUDED_ -#define nginx_version 1009009 -#define NGINX_VERSION "1.9.9" +#define nginx_version 1009010 +#define NGINX_VERSION "1.9.10" #define NGINX_VER "nginx/" NGINX_VERSION #ifdef NGX_BUILD From mdounin at mdounin.ru Thu Dec 17 14:09:19 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2015 14:09:19 +0000 Subject: [nginx] Fixed PROXY protocol on IPv6 sockets (ticket #858). Message-ID: details: http://hg.nginx.org/nginx/rev/ceeb1edb3018 branches: changeset: 6331:ceeb1edb3018 user: Maxim Dounin date: Thu Dec 17 16:39:02 2015 +0300 description: Fixed PROXY protocol on IPv6 sockets (ticket #858). diffstat: src/http/ngx_http.c | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diffs (11 lines): diff --git a/src/http/ngx_http.c b/src/http/ngx_http.c --- a/src/http/ngx_http.c +++ b/src/http/ngx_http.c @@ -1927,6 +1927,7 @@ ngx_http_add_addrs6(ngx_conf_t *cf, ngx_ #if (NGX_HTTP_V2) addrs6[i].conf.http2 = addr[i].opt.http2; #endif + addrs6[i].conf.proxy_protocol = addr[i].opt.proxy_protocol; if (addr[i].hash.buckets == NULL && (addr[i].wc_head == NULL From mdounin at mdounin.ru Thu Dec 17 14:09:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2015 14:09:21 +0000 Subject: [nginx] Upstream: don't keep connections on early responses (tic... Message-ID: details: http://hg.nginx.org/nginx/rev/78b4e10b4367 branches: changeset: 6332:78b4e10b4367 user: Maxim Dounin date: Thu Dec 17 16:39:15 2015 +0300 description: Upstream: don't keep connections on early responses (ticket #669). 
diffstat: src/http/modules/ngx_http_upstream_keepalive_module.c | 4 ++++ src/http/ngx_http_upstream.c | 3 +++ src/http/ngx_http_upstream.h | 1 + 3 files changed, 8 insertions(+), 0 deletions(-) diffs (45 lines): diff --git a/src/http/modules/ngx_http_upstream_keepalive_module.c b/src/http/modules/ngx_http_upstream_keepalive_module.c --- a/src/http/modules/ngx_http_upstream_keepalive_module.c +++ b/src/http/modules/ngx_http_upstream_keepalive_module.c @@ -302,6 +302,10 @@ ngx_http_upstream_free_keepalive_peer(ng goto invalid; } + if (!u->request_body_sent) { + goto invalid; + } + if (ngx_terminate || ngx_exiting) { goto invalid; } diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -1441,6 +1441,7 @@ ngx_http_upstream_connect(ngx_http_reque } u->request_sent = 0; + u->request_body_sent = 0; if (rc == NGX_AGAIN) { ngx_add_timer(c->write, u->conf->connect_timeout); @@ -1825,6 +1826,8 @@ ngx_http_upstream_send_request(ngx_http_ /* rc == NGX_OK */ + u->request_body_sent = 1; + if (c->write->timer_set) { ngx_del_timer(c->write); } diff --git a/src/http/ngx_http_upstream.h b/src/http/ngx_http_upstream.h --- a/src/http/ngx_http_upstream.h +++ b/src/http/ngx_http_upstream.h @@ -370,6 +370,7 @@ struct ngx_http_upstream_s { unsigned upgrade:1; unsigned request_sent:1; + unsigned request_body_sent:1; unsigned header_sent:1; }; From mdounin at mdounin.ru Thu Dec 17 14:15:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 17 Dec 2015 17:15:44 +0300 Subject: [PATCH] http_geo_module: warn when using a variable as the value In-Reply-To: <5672BD06.8090508@syse.no> References: <5671F00C.8070908@syse.no> <20151217124441.GW74233@mdounin.ru> <5672BD06.8090508@syse.no> Message-ID: <20151217141544.GX74233@mdounin.ru> Hello! On Thu, Dec 17, 2015 at 01:47:50PM +0000, Daniel K. wrote: > On 12/17/2015 12:44 PM, Maxim Dounin wrote: > > On Wed, Dec 16, 2015 at 11:13:16PM +0000, Daniel K. 
wrote: > >> Warn when using a variable as the data value in geo blocks > >> > >> Since the geo module does not expand variables, warn when a variable is > >> used as the data value; > > > > As the geo module is currently probably the only way to insert > > plain '$' into a place which expands variables, I'm not sure if > > it's a good idea to add such a patch, at least before we'll > > resolve the '$' escaping issue. > > It was just showing a warning to the possibly unsuspecting user, nothing > else. > > Valentin suggested adding variable support. From your comment I take it > that variable support is not wanted, at least not yet, until a mechanism > to escape '$' is implemented? Yes, we'll have to introduce '$' escaping before (or at least at the same time) adding variables support to geo. -- Maxim Dounin http://nginx.org/ From thorvaldur.thorvaldsson at gmail.com Thu Dec 17 14:22:00 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Thu, 17 Dec 2015 15:22:00 +0100 Subject: [PATCH] Tests for stale responses from upstream that may be cached. In-Reply-To: References: Message-ID: Hi there, I mentioned in another thread that I'd add a test case for the most recent revision of the "cache stale from upstream" patch. I'll post it here in a moment. By the way, I'm sure there's a better way to set up the tests, so I don't expect this patch to be accepted as-is. It's more about showing the effect of the patch. Best regards, Thorvaldur From thorvaldur.thorvaldsson at gmail.com Thu Dec 17 14:22:30 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Thu, 17 Dec 2015 15:22:30 +0100 Subject: [PATCH] Tests for stale responses from upstream that may be cached.
In-Reply-To: References: Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450275691 -3600 # Wed Dec 16 15:21:31 2015 +0100 # Node ID 692075b59d386ffcbf1aff8e560488fd110d04f7 # Parent dba758c045edbe010de380176235bea6b700367e Tests for stale responses from upstream that may be cached. The tests correspond to a patch that's been submitted to nginx-devel. The test cases verify the caching and revalidation behaviour when: 1) the "proxy_cache_revalidate" directive is "on"; 2) the upstream response includes a "Cache-Control: max-age=0" header along with various combinations of ETag/Last-Modified headers. There's also a test for the case where there's no "proxy_cache_valid" directive and also no "Cache-Control: max-age=0", X-Accel-Expires, or Expires header, but still where "proxy_cache_revalidate" is on and the response includes an ETag header. diff -r dba758c045ed -r 692075b59d38 proxy_cache_revalidate.t --- a/proxy_cache_revalidate.t Thu Dec 17 17:18:07 2015 +0300 +++ b/proxy_cache_revalidate.t Wed Dec 16 15:21:31 2015 +0100 @@ -21,7 +21,7 @@ select STDERR; $| = 1; select STDOUT; $| = 1; -my $t = Test::Nginx->new()->has(qw/http proxy cache rewrite shmem/)->plan(23) +my $t = Test::Nginx->new()->has(qw/http proxy cache rewrite shmem/)->plan(33) ->write_file_expand('nginx.conf', <<'EOF'); %%TEST_GLOBALS%% @@ -51,6 +51,11 @@ add_header X-Cache-Status $upstream_cache_status; } + location /no-valid-directive/ { + proxy_pass http://127.0.0.1:8081/etag-no-max-age/; + proxy_cache one; + add_header X-Cache-Status $upstream_cache_status; + } } server { @@ -68,6 +73,33 @@ add_header X-If-Modified-Since $http_if_modified_since; return 201; } + location /stale-etag/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header Last-Modified; + add_header Cache-Control "max-age=0"; + } + location /etag-no-max-age/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header Last-Modified; + } + location /stale-last-modified/ { + proxy_pass
http://127.0.0.1:8081/; + proxy_hide_header ETag; + add_header Cache-Control "max-age=0"; + } + location /stale-cannot-revalidate/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header ETag; + proxy_hide_header Last-Modified; + add_header Cache-Control "max-age=0"; + } + location /stale-invalid-last-modified/ { + proxy_pass http://127.0.0.1:8081/; + proxy_hide_header ETag; + proxy_hide_header Last-Modified; + add_header Cache-Control "max-age=0"; + add_header Last-Modified "invalid"; + } } } @@ -94,6 +126,31 @@ like(http_get('/etag/t'), qr/X-Cache-Status: MISS.*SEE/ms, 'etag'); like(http_get('/etag/t'), qr/X-Cache-Status: HIT.*SEE/ms, 'etag cached'); +my $CACHE_MISS = qr/X-Cache-Status: MISS.*?SEE/ms; +my $CACHE_REVALIDATED = qr/X-Cache-Status: REVALIDATED.*?SEE/ms; + +like(http_get('/stale-etag/t'), $CACHE_MISS, 'stale etag'); +like(http_get('/stale-etag/t'), $CACHE_REVALIDATED, 'stale etag revalidated'); + +like(http_get('/stale-last-modified/t'), $CACHE_MISS, 'stale last-modified'); +like(http_get('/stale-last-modified/t'), $CACHE_REVALIDATED, + 'stale last-modified revalidated'); + +like(http_get('/stale-cannot-revalidate/t'), $CACHE_MISS, + 'stale cannot revalidate'); +like(http_get('/stale-cannot-revalidate/t'), $CACHE_MISS, + 'stale cannot revalidate not cached'); + +like(http_get('/stale-invalid-last-modified/t'), $CACHE_MISS, + 'stale invalid last-modified'); +like(http_get('/stale-invalid-last-modified/t'), $CACHE_MISS, + 'stale invalid last-modified not cached'); + +like(http_get('/no-valid-directive/etag-no-max-age/t'), $CACHE_MISS, + 'no cache-valid no max-age'); +like(http_get('/no-valid-directive/etag-no-max-age/t'), $CACHE_MISS, + 'no cache-valid no max-age not cached'); + like(http_get('/etag/t2'), qr/X-Cache-Status: MISS.*SEE/ms, 'etag2'); like(http_get('/etag/t2'), qr/X-Cache-Status: HIT.*SEE/ms, 'etag2 cached'); From ru at nginx.com Thu Dec 17 14:43:54 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 17 Dec 2015 17:43:54 +0300 
Subject: [PATCH] Fix ptr resolving with cname In-Reply-To: <9d8c7332b7300908414e.1450238369@AAMStorage.lan> References: <9d8c7332b7300908414e.1450238369@AAMStorage.lan> Message-ID: <20151217144354.GB60752@lo0.su> On Wed, Dec 16, 2015 at 11:59:29AM +0800, DannyAAM wrote: > # HG changeset patch > # User DannyAAM > # Date 1449696194 -28800 > # Thu Dec 10 05:23:14 2015 +0800 > # Branch fix-ptr-cname > # Node ID 9d8c7332b7300908414e3bec78a90d9d14b30af8 > # Parent dfe68c41f34f865bc7b45cbe6b7d0f639de283fc > Fix ptr resolving with cname > > Make ptr process aware of cname & follow it. > (This depends on resolver's recursive answer.) Thanks for trying. Unfortunately, your patch has many issues, mainly caused by not checking for malformed responses. I have a better patch in the works that doesn't have complete CNAME support in PTR responses, but otherwise works in the described situation, plus it doesn't require an RR to be compressed, as is found in responses generated by some SOHO routers. Please let me know if you'd like to try my patch.
> diff -r dfe68c41f34f -r 9d8c7332b730 src/core/ngx_resolver.c > --- a/src/core/ngx_resolver.c Wed Dec 09 17:47:21 2015 +0300 > +++ b/src/core/ngx_resolver.c Thu Dec 10 05:23:14 2015 +0800 > @@ -2032,7 +2032,7 @@ > int32_t ttl; > ngx_int_t octet; > ngx_str_t name; > - ngx_uint_t i, mask, qident, class; > + ngx_uint_t i, mask, qident, type, class; > ngx_queue_t *expire_queue; > ngx_rbtree_t *tree; > ngx_resolver_an_t *an; > @@ -2196,9 +2196,14 @@ > goto invalid; > } > > + > + > an = (ngx_resolver_an_t *) &buf[i + 2]; > > +cname_continue: > + > class = (an->class_hi << 8) + an->class_lo; > + type = (an->type_hi << 8) + an->type_lo; > len = (an->len_hi << 8) + an->len_lo; > ttl = (an->ttl[0] << 24) + (an->ttl[1] << 16) > + (an->ttl[2] << 8) + (an->ttl[3]); > @@ -2213,6 +2218,34 @@ > ttl = 0; > } > > + /* CNAME processing */ > + if (type == NGX_RESOLVE_CNAME) { > + do { > + if (buf[i] == 0xc0) { > + i += 2; > + break; > + } else { > + i += 1 + buf[i]; > + } > + } while (buf[i] != 0); > + an = (ngx_resolver_an_t *) &buf[i]; > + len = (an->len_hi << 8) + an->len_lo; > + i += sizeof(ngx_resolver_an_t) + len; > + > + ngx_uint_t nameidx = i; > + do { > + if (buf[nameidx] == 0xc0) { > + nameidx += 2; > + break; > + } else { > + nameidx += 1 + buf[nameidx]; > + } > + } while (buf[nameidx] != 0); > + an = (ngx_resolver_an_t *) &buf[nameidx]; > + > + goto cname_continue; > + } > + > ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, > "resolver qt:%ui cl:%ui len:%uz", > (an->type_hi << 8) + an->type_lo, From thorvaldur.thorvaldsson at gmail.com Sat Dec 19 01:40:33 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Sat, 19 Dec 2015 02:40:33 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: <20151216155249.GU74233@mdounin.ru> Message-ID: Hello again, I'd just like to make one final revision of the patch. 
Two simplifications can be made without sacrificing my main objective, i.e., to make it possible for nginx to cache responses from upstream that must always be revalidated for authorization. The simplifications are: 1) Don't change the handling of the X-Accel-Expires or Expires headers, i.e., change only the handling of max-age=0 (and s-maxage=0) in Cache-Control. 2) Don't lower the valid time of all cached responses by one second, i.e., just make sure that a cached response is "immediately stale" when max-age=0. I've attached the test cases. It's a new file this time and not a patch like the one I posted on another thread. I'll post the new and simplified changeset patch in a moment. Best regards, Thorvaldur -------------- next part -------------- A non-text attachment was scrubbed... Name: proxy_cache_revalidate_always.t Type: application/x-troff Size: 4207 bytes Desc: not available URL: From thorvaldur.thorvaldsson at gmail.com Sat Dec 19 01:41:25 2015 From: thorvaldur.thorvaldsson at gmail.com (Thorvaldur Thorvaldsson) Date: Sat, 19 Dec 2015 02:41:25 +0100 Subject: [PATCH] Upstream: Cache stale responses if they may be revalidated. In-Reply-To: References: <20151216155249.GU74233@mdounin.ru> Message-ID: # HG changeset patch # User Thorvaldur Thorvaldsson # Date 1450486750 -3600 # Sat Dec 19 01:59:10 2015 +0100 # Node ID d5f8a24ee96d47f056949f3a103fd53a9dd56282 # Parent def9c9c9ae05cfa7467b0ec96e76afa180c23dfb Upstream: Cache response if max-age=0 and it may be revalidated. Previously, the proxy cache would never store responses with "max-age=0" in the Cache-Control header. Now it will, but only if "proxy_cache_revalidate" is "on" and the response includes an ETag or a valid Last-Modified header. This opens up the possibility to make nginx cache responses that must always be revalidated, e.g., when authorization is required (and cheap).
diff -r def9c9c9ae05 -r d5f8a24ee96d src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_upstream.c Sat Dec 19 01:59:10 2015 +0100 @@ -2815,11 +2815,16 @@ valid = ngx_http_file_cache_valid(u->conf->cache_valid, u->headers_in.status_n); if (valid) { - r->cache->valid_sec = now + valid; + valid = now + valid; + r->cache->valid_sec = valid; } } - if (valid) { + if (valid > now + || (valid && r->upstream->conf->cache_revalidate + && (u->headers_in.etag + || u->headers_in.last_modified_time != -1))) + { r->cache->date = now; r->cache->body_start = (u_short) (u->buffer.pos - u->buffer.start); @@ -4272,12 +4277,7 @@ return NGX_OK; } - if (n == 0) { - u->cacheable = 0; - return NGX_OK; - } - - r->cache->valid_sec = ngx_time() + n; + r->cache->valid_sec = ngx_time() + (n ? n : -1); } #endif From savetherbtz at gmail.com Sat Dec 19 11:43:28 2015 From: savetherbtz at gmail.com (Alexey Ivanov) Date: Sat, 19 Dec 2015 03:43:28 -0800 Subject: [PATCH] Variables: added $tcpinfo_retrans Message-ID: # HG changeset patch # User Alexey Ivanov # Date 1450520577 28800 # Sat Dec 19 02:22:57 2015 -0800 # Branch tcpi_retrans # Node ID 89e3d2427e669a060f23f70adbdd301f8916d11c # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 Variables: added $tcpinfo_retrans This one is useful for debugging poor network conditions. 
diff -r 78b4e10b4367 -r 89e3d2427e66 src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Thu Dec 17 16:39:15 2015 +0300 +++ b/src/http/ngx_http_variables.c Sat Dec 19 02:22:57 2015 -0800 @@ -343,6 +343,9 @@ { ngx_string("tcpinfo_rcv_space"), NULL, ngx_http_variable_tcpinfo, 3, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + + { ngx_string("tcpinfo_retrans"), NULL, ngx_http_variable_tcpinfo, + 4, NGX_HTTP_VAR_NOCACHEABLE, 0 }, #endif { ngx_null_string, NULL, NULL, 0, 0, 0 } @@ -1053,6 +1056,10 @@ value = ti.tcpi_rcv_space; break; + case 4: + value = ti.tcpi_retrans; + break; + /* suppress warning */ default: value = 0; -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pskhanapur at gmail.com Sun Dec 20 22:08:56 2015 From: pskhanapur at gmail.com (Prasanna Khanapur) Date: Sun, 20 Dec 2015 22:08:56 +0000 Subject: Create a file in the server using nginx Message-ID: Hi, I have Nginx running on a server named ServerS. I have some backend (upstream) servers B1, B2, etc., which ServerS is proxying (load balancing). That is just to give you a complete picture; it is maybe not relevant to the question I'm asking. I want to test ServerS by sending some commands (in the form of a file) to the ServerS system through Nginx. I'm planning to do it by, for example, curl doing a POST to Nginx to create a file which I'm sending through the curl POST. Nginx basically saves this file in some directory path; my application reads this file periodically and performs some action based on the content of the file. Probably I could use a REST library etc., but I don't want to burden myself with that, as I can achieve the same by creating a file on the server using Nginx. In summary, curl should POST/PUT a file and Nginx should save it in a directory. Is this possible? Any suggestions? Thanks!
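One common way to get the behaviour asked about above, sketched here as untested configuration rather than a definitive answer: nginx's optional dav module can write a PUT body straight to disk, so `curl -T commands.txt http://serverS/drop/commands.txt` would leave the file under the mapped directory. The location name and paths below are invented for illustration:

```nginx
location /drop/ {
    root /var/spool/commands;        # files land under this tree

    dav_methods PUT;                 # requires --with-http_dav_module
    create_full_put_path on;
    dav_access user:rw group:r;

    client_max_body_size 1m;         # keep the command files small

    limit_except PUT { deny all; }   # accept nothing but uploads here
}
```

Anything that turns an HTTP request into a file on disk deserves authentication and tight directory permissions, and the worker user must be able to write to the target tree.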
-- Best Regards Prasanna Khanapur Oslo, Norway Mobile: +4795417774 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hellkvist at gmail.com Mon Dec 21 10:00:09 2015 From: hellkvist at gmail.com (Stefan Hellkvist) Date: Mon, 21 Dec 2015 11:00:09 +0100 Subject: Is the limit_rate per tcp session or per HTTP request? Message-ID: Hi, From reading the code and the docs I have gotten the impression that limit_rate (and limit_rate_after) is per ngx_connection which (I think) means that it is per HTTP request and not per socket. Am I right in this conclusion, or is the limit actually per socket/TCP connection? What we are observing is that the limit we configure only kicks in for requests to files that are larger than the limit_rate_after when the request is done in one GET request, but not when the request is done in chunks using byte offset parameters (that is, using many GET requests for the file). So clients can easily avoid the limitations by downloading the file chunk by chunk rather than in one request. If our conclusion is right - that the limit is per HTTP request and not per socket, so that a chunked download would not be limited - does anyone have any suggestion on how we would go about introducing a limit also on the socket level? I don't mind hacking away at the code, but perhaps someone out there has already looked into this? /Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Mon Dec 21 11:26:15 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 21 Dec 2015 14:26:15 +0300 Subject: Is the limit_rate per tcp session or per HTTP request?
In-Reply-To: References: Message-ID: <20151221112615.GG60752@lo0.su> On Mon, Dec 21, 2015 at 11:00:09AM +0100, Stefan Hellkvist wrote: > Hi, > > From reading the code and the docs I have gotten the impression that > limit_rate (and limit_rate_after) is per ngx_connection which (I think) > means that it is per HTTP request and not per socket. Am I right in this > conclusion or is the limit actually per socket/TCP connection? The docs at http://nginx.org/r/limit_rate clearly say that the limit is set per request, and describe one of the possible ways this limit can be "avoided" by the client. The limit is implemented on the ngx_connection_t level, which is usually mapped 1:1 to a physical connection. > What we are observing is that the limit we configure only kicks in for > requests to files that are larger than the limit_rate_after when the > request is done in one GET request but not when the request is done in > chunks using byte offset parameters (that is, using many GET requests for > the file). So clients can easily avoid the limitations by downloading the > file chunk by chunk rather than in one request. Opening several connections, or using SPDY/HTTP2, is another way to jump over the limit. > If our conclusion is right - that the limit is per HTTP request and not > per socket so that a chunked download would not be limited - does anyone > have any suggestion on how we would go about introducing a limit also on > the socket level? I don't mind hacking away at the code, but perhaps someone > out there has already looked into this? I know that Valentin (CC:ed) was working on the limit_rate module that improves things, including variables support and extending the limitation beyond only "per request". It should become possible to limit byte rate per IP, for example.
From hellkvist at gmail.com Mon Dec 21 12:18:43 2015 From: hellkvist at gmail.com (Stefan Hellkvist) Date: Mon, 21 Dec 2015 13:18:43 +0100 Subject: Is the limit_rate per tcp session or per HTTP request? In-Reply-To: <20151221112615.GG60752@lo0.su> References: <20151221112615.GG60752@lo0.su> Message-ID: On Mon, Dec 21, 2015 at 12:26 PM, Ruslan Ermilov wrote: > On Mon, Dec 21, 2015 at 11:00:09AM +0100, Stefan Hellkvist wrote: > > Hi, > > > > From reading the code and the docs I have gotten the impression that > > limit_rate (and limit_rate_after) is per ngx_connection which (I think) > > means that it is per HTTP request and not per socket. Am I right in this > > conclusion or is the limit actually per socket/TCP connection? > > The docs at http://nginx.org/r/limit_rate clearly say that the limit > is set per request, and describe one of the possible ways this limit > can be "avoided" by the client. > > The limit is implemented on the ngx_connection_t level which is usually > mapped 1:1 to a physical connection. > In our case we have clients that use pipelining, where several requests share the same TCP session. An ngx_connection_t is mapped 1:1 to the requests and not to the physical socket in this case, am I right? > > > If our conclusion is right - that the limit is per HTTP request and not > > per socket so that a chunked download would not be limited - does anyone > > have any suggestion on how we would go about introducing a limit also on > > the socket level? I don't mind hacking away at the code, but perhaps someone > > out there has already looked into this? > > I know that Valentin (CC:ed) was working on the limit_rate module that > improves things, including variables support and extending the limitation > beyond only "per request". It should become possible to limit byte rate > per IP, for example. > It is great news that someone is looking into it. Would this limit_rate module also support limiting the rate in pipelined requests?
The use case I am looking at specifically is to control the download rate of a client that pipelines requests for HLS chunks (Apple's HTTP live streaming), so the same tcp connection is being used for several downloadable chunks. I would need to have the limit_rate_after as it is today, but on socket rather than http request, to allow the video buffer to fill up quickly during the requests for the first chunks, but would like to limit the rate of the following pipelined requests once the video playback buffer is full. In the wildest of dreams there would be an interface to nginx to dynamically alter the rate of individual socket connections through an application interface. In that way it would be possible for the client to actually signal to the server what rate it would need at certain phases. /Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon Dec 21 12:22:37 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 21 Dec 2015 15:22:37 +0300 Subject: [PATCH] Variables: added $tcpinfo_retrans In-Reply-To: References: Message-ID: <7184744.nWrtJ39kzb@vbart-workstation> On Saturday 19 December 2015 03:43:28 Alexey Ivanov wrote: > # HG changeset patch > # User Alexey Ivanov > # Date 1450520577 28800 > # Sat Dec 19 02:22:57 2015 -0800 > # Branch tcpi_retrans > # Node ID 89e3d2427e669a060f23f70adbdd301f8916d11c > # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 > Variables: added $tcpinfo_retrans > > This one is useful for debugging poor network conditions.
> > diff -r 78b4e10b4367 -r 89e3d2427e66 src/http/ngx_http_variables.c > --- a/src/http/ngx_http_variables.c Thu Dec 17 16:39:15 2015 +0300 > +++ b/src/http/ngx_http_variables.c Sat Dec 19 02:22:57 2015 -0800 > @@ -343,6 +343,9 @@ > > { ngx_string("tcpinfo_rcv_space"), NULL, ngx_http_variable_tcpinfo, > 3, NGX_HTTP_VAR_NOCACHEABLE, 0 }, > + > + { ngx_string("tcpinfo_retrans"), NULL, ngx_http_variable_tcpinfo, > + 4, NGX_HTTP_VAR_NOCACHEABLE, 0 }, > #endif [..] Looks like your mail client have broken the patch. > > { ngx_null_string, NULL, NULL, 0, 0, 0 } > @@ -1053,6 +1056,10 @@ > value = ti.tcpi_rcv_space; > break; > > + case 4: > + value = ti.tcpi_retrans; > + break; > + > /* suppress warning */ > default: > value = 0; > What if there is no "tcpi_retrans" field? wbr, Valentin V. Bartenev From vbart at nginx.com Mon Dec 21 12:36:48 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 21 Dec 2015 15:36:48 +0300 Subject: Is the limit_rate per tcp session or per HTTP request? In-Reply-To: References: <20151221112615.GG60752@lo0.su> Message-ID: <3576215.kLobsEZYiH@vbart-workstation> On Monday 21 December 2015 13:18:43 Stefan Hellkvist wrote: > On Mon, Dec 21, 2015 at 12:26 PM, Ruslan Ermilov wrote: > > > On Mon, Dec 21, 2015 at 11:00:09AM +0100, Stefan Hellkvist wrote: > > > Hi, > > > > > > From reading the code and the docs I have gotten the impression that > > > limit_rate (and limit_rate_after) is per ngx_connection which (I think) > > > means that it is per HTTP request and not per socket. Am I right in this > > > conclusion or is the limit actually per socket/TCP connection? > > > > The docs at http://nginx.org/r/limit_rate says it clearly that the limit > > is set per a request, and describes one of the possible cases how this > > limit can be "avoided" by the client. > > > > The limit is implemented on the ngx_connection_t level which is usually > > mapped 1:1 to a physical connection. 
> > > > In our case we have clients that use pipelining where several requests > share the same tcp session. An ngx_connection_t is mapped 1:1 to the > requests and not to the physical socket in this case, am I right? > [..] Regardless of the internal implementation it's better to think that "limit_rate" and "limit_rate_after" currently work per request only. The ngx_connection_t is mapped to the physical socket, but the number of sent bytes is reset to zero on each request in the connection. > > > > > > > If our conclusions are right - that the limit is per HTTP request and not > > > per socket so that a chunked download would not be limited - does anyone > > > have any suggestion how we would go about to introduce a limit also on > > > socket level? I don't mind hacking away at the code, but perhaps someone > > > out there has already looked into this? > > > > I know that Valentin (CC:ed) was working on the limit_rate module that > > improves things, including variables support and extending the limitation > > beyond only "per request". It should become possible to limit byte rate > > per IP, for example. > > > > > That is great news that there is someone looking into it. Would this > limit_rate module also support limiting the rate in pipelined requests? The > use case I am looking at specifically is to control the download rate of a > client that pipelines requests for HLS chunks (Apple's HTTP live > streaming), so the same tcp connection is being used for several > downloadable chunks. I would need to have the limit_rate_after as it is > today, but on socket rather than http request, to allow the video buffer to > fill up quickly during the requests for the first chunks, but would like > to limit the rate of the following pipelined request once the video > playback buffer is full. > [..] Yes, I'll keep in mind this use case. wbr, Valentin V.
Bartenev From hellkvist at gmail.com Mon Dec 21 12:42:27 2015 From: hellkvist at gmail.com (Stefan Hellkvist) Date: Mon, 21 Dec 2015 13:42:27 +0100 Subject: Is the limit_rate per tcp session or per HTTP request? In-Reply-To: <3576215.kLobsEZYiH@vbart-workstation> References: <20151221112615.GG60752@lo0.su> <3576215.kLobsEZYiH@vbart-workstation> Message-ID: <6CCCC86B-D13E-4C94-AA86-7F40064B6A0E@gmail.com> > On 21 Dec 2015, at 13:36, Valentin V. Bartenev wrote: > > On Monday 21 December 2015 13:18:43 Stefan Hellkvist wrote: >> On Mon, Dec 21, 2015 at 12:26 PM, Ruslan Ermilov wrote: >> >>> On Mon, Dec 21, 2015 at 11:00:09AM +0100, Stefan Hellkvist wrote: >>>> Hi, >>>> >>>> From reading the code and the docs I have gotten the impression that >>>> limit_rate (and limit_rate_after) is per ngx_connection which (I think) >>>> means that it is per HTTP request and not per socket. Am I right in this >>>> conclusion or is the limit actually per socket/TCP connection? >>> >>> The docs at http://nginx.org/r/limit_rate says it clearly that the limit >>> is set per a request, and describes one of the possible cases how this >>> limit can be "avoided" by the client. >>> >>> The limit is implemented on the ngx_connection_t level which is usually >>> mapped 1:1 to a physical connection. >>> >> >> In our case we have clients that use pipelining where several requests >> share the same tcp session. An ngx_connection_t is mapped 1:1 to the >> requests and not to the physical socket in this case, am I right? >> > [..] > > Regardless of the internal implementation it's better to think that > "limit_rate" and "limit_rate_after" currently work per request only. > > The ngx_connection_t is mapped to the physical socket, but the number > of sent bytes is reset to zero on each request in the connection. Interesting! So perhaps a quick fix for my current use case would be to avoid resetting the "sent bytes" on each request?
In that case the limit will be counted per socket rather than request. Probably not a generic solution that everybody would like, as it probably breaks other use cases, but perhaps something I can quickly try out on a private branch. > > >> >> >>> >>>> If our conclusion are right - that the limit is per HTTP request and not >>>> per socket so that a chunked download would not be limited - does anyone >>>> have any suggestion how we would go about to introduce a limit also on >>>> socket level? I don't mind hacking away at the code, but perhaps someone >>>> out there has already looked into this? >>> >>> I know that Valentin (CC:ed) was working on the limit_rate module that >>> improves things, including variables support and extening the limitation >>> beyond only "per request". It should become possible to limit byte rate >>> per IP, for example. >>> >> >> >> That is great news that there is someone looking into it. Would this >> limit_rate module also support limiting the rate in pipelined requests? The >> use case I am looking at specifically is to control the download rate of a >> client that pipelines requests for HLS chunks (Apple's HTTP live >> streaming), so the same tcp connection is being used for several >> downloadable chunks. I would need to have the limit_rate_after as it is >> today, but on socket rather than http request, to allow the video buffer to >> fill up quickly during the requests for the first chunks, but would like >> to limit the rate of the following pipelined request once the video >> playback buffer is full. >> > [..] > > Yes, I'll keep in mind this use case. Great! Looking forward to this feature when it is available. Thanks for the good work guys. 
Stefan From mdounin at mdounin.ru Mon Dec 21 13:26:12 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Dec 2015 16:26:12 +0300 Subject: [PATCH] Variables: added $tcpinfo_retrans In-Reply-To: <7184744.nWrtJ39kzb@vbart-workstation> References: <7184744.nWrtJ39kzb@vbart-workstation> Message-ID: <20151221132612.GH74233@mdounin.ru> Hello! On Mon, Dec 21, 2015 at 03:22:37PM +0300, Valentin V. Bartenev wrote: > On Saturday 19 December 2015 03:43:28 Alexey Ivanov wrote: [...] > > @@ -1053,6 +1056,10 @@ > > value = ti.tcpi_rcv_space; > > break; > > > > + case 4: > > + value = ti.tcpi_retrans; > > + break; > > + > > /* suppress warning */ > > default: > > value = 0; > > > > What if there is no "tcpi_retrans" field? Just in case, there is no tcpi_retrans field on FreeBSD. -- Maxim Dounin http://nginx.org/ From devnexen at gmail.com Mon Dec 21 13:41:03 2015 From: devnexen at gmail.com (David CARLIER) Date: Mon, 21 Dec 2015 13:41:03 +0000 Subject: [PATCH] Variables: added $tcpinfo_retrans In-Reply-To: <20151221132612.GH74233@mdounin.ru> References: <7184744.nWrtJ39kzb@vbart-workstation> <20151221132612.GH74233@mdounin.ru> Message-ID: I think FreeBSD has __tcpi_retrans but not "typedef" it though ... On 21 December 2015 at 13:26, Maxim Dounin wrote: > Hello! > > On Mon, Dec 21, 2015 at 03:22:37PM +0300, Valentin V. Bartenev wrote: > >> On Saturday 19 December 2015 03:43:28 Alexey Ivanov wrote: > > [...] > >> > @@ -1053,6 +1056,10 @@ >> > value = ti.tcpi_rcv_space; >> > break; >> > >> > + case 4: >> > + value = ti.tcpi_retrans; >> > + break; >> > + >> > /* suppress warning */ >> > default: >> > value = 0; >> > >> >> What if there is no "tcpi_retrans" field? > > Just in case, there is no tcpi_retrans field on FreeBSD. 
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From mdounin at mdounin.ru Mon Dec 21 13:57:36 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Dec 2015 16:57:36 +0300 Subject: [PATCH] Variables: added $tcpinfo_retrans In-Reply-To: References: <7184744.nWrtJ39kzb@vbart-workstation> <20151221132612.GH74233@mdounin.ru> Message-ID: <20151221135736.GJ74233@mdounin.ru> Hello! On Mon, Dec 21, 2015 at 01:41:03PM +0000, David CARLIER wrote: > I think FreeBSD has __tcpi_retrans but not "typedef" it though ... It's just a placeholder for ABI compatibility, it's not set to anything. And either way it's named differently, so the patch will break things. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Mon Dec 21 14:02:13 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 21 Dec 2015 17:02:13 +0300 Subject: Is the limit_rate per tcp session or per HTTP request? In-Reply-To: <6CCCC86B-D13E-4C94-AA86-7F40064B6A0E@gmail.com> References: <3576215.kLobsEZYiH@vbart-workstation> <6CCCC86B-D13E-4C94-AA86-7F40064B6A0E@gmail.com> Message-ID: <1676307.dnyVKPyLVx@vbart-workstation> On Monday 21 December 2015 13:42:27 Stefan Hellkvist wrote: > > > On 21 Dec 2015, at 13:36, Valentin V. Bartenev wrote: > > > > On Monday 21 December 2015 13:18:43 Stefan Hellkvist wrote: > >> On Mon, Dec 21, 2015 at 12:26 PM, Ruslan Ermilov wrote: > >> > >>> On Mon, Dec 21, 2015 at 11:00:09AM +0100, Stefan Hellkvist wrote: > >>>> Hi, > >>>> > >>>> From reading the code and the docs I have gotten the impression that > >>>> limit_rate (and limit_rate_after) is per ngx_connection which (I think) > >>>> means that it is per HTTP request and not per socket. Am I right in this > >>>> conclusion or is the limit actually per socket/TCP connection? 
> >>> > >>> The docs at http://nginx.org/r/limit_rate says it clearly that the limit > >>> is set per a request, and describes one of the possible cases how this > >>> limit can be "avoided" by the client. > >>> > >>> The limit is implemented on the ngx_connection_t level which is usually > >>> mapped 1:1 to a physical connection. > >>> > >> > >> In our case we have clients that use pipelining where several requests > >> share the same tcp session. An ngx_connection_t is mapped 1:1 to the > >> requests and not to the physical socket in this case, am I right? > >> > > [..] > > > > Regardless of the internal implementation it's better to think that > > "limit_rate" and "limit_rate_after" currently work per request only. > > > > The ngx_connection_t is mapped to the physical socket, but the number > > of sent bytes is reseted to zero on each request in the connection. > > > Interesting! So perhaps a quick fix for my current use case would be to avoid resetting the "sent bytes? on each request? In that case the limit will be counted per socket rather than request. Probably not a generic solution that everybody would like, as it probably breaks other use cases, but perhaps something I can quickly try out on a private branch. That will break limit_rate. The other peculiarity of the current implementation is that it limits the average rate, and the average is calculated by this formula: rate = bytes_sent / (current_time - request_start_time) You may have better luck with the patch below (untested): diff -r def9c9c9ae05 -r 9e66c0bf7efd src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c Sat Dec 12 10:32:58 2015 +0300 +++ b/src/http/ngx_http_write_filter_module.c Mon Dec 21 16:59:07 2015 +0300 @@ -219,7 +219,7 @@ ngx_http_write_filter(ngx_http_request_t } if (r->limit_rate) { - if (r->limit_rate_after == 0) { + if (c->requests == 1 && r->limit_rate_after == 0) { r->limit_rate_after = clcf->limit_rate_after; } wbr, Valentin V. 
Bartenev From devnexen at gmail.com Mon Dec 21 14:03:59 2015 From: devnexen at gmail.com (David CARLIER) Date: Mon, 21 Dec 2015 14:03:59 +0000 Subject: [PATCH] Variables: added $tcpinfo_retrans In-Reply-To: <20151221135736.GJ74233@mdounin.ru> References: <7184744.nWrtJ39kzb@vbart-workstation> <20151221132612.GH74233@mdounin.ru> <20151221135736.GJ74233@mdounin.ru> Message-ID: On 21 December 2015 at 13:57, Maxim Dounin wrote: > Hello! > > On Mon, Dec 21, 2015 at 01:41:03PM +0000, David CARLIER wrote: > >> I think FreeBSD has __tcpi_retrans but not "typedef" it though ... > > It's just a placeholder for ABI compatibility, it's not set to > anything. Yes, it is; many fields are not set (fewer than I first thought ...) > And either way it's named differently, so the patch > will break things. > Sure it would be meaningless in this case to use the FreeBSD field then. > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel From devnexen at gmail.com Mon Dec 21 15:07:10 2015 From: devnexen at gmail.com (David CARLIER) Date: Mon, 21 Dec 2015 15:07:10 +0000 Subject: upstream module / need to check pipe field Message-ID: Hi, I was wondering if it might be better to check the pipe field before attempting to clear the temp file? Anyway, it is checked as well at line 4015 and below ... in this case a u->pipe check is sufficient; u->pipe->temp_file is checked inside the ngx_http_file_cache_free function. Hope it helps. Regards.
-------------- next part -------------- diff -r 78b4e10b4367 conf/nginx.conf --- a/conf/nginx.conf Thu Dec 17 16:39:15 2015 +0300 +++ b/conf/nginx.conf Mon Dec 21 14:59:58 2015 +0000 @@ -33,7 +33,7 @@ #gzip on; server { - listen 80; + listen 8380; server_name localhost; #charset koi8-r; diff -r 78b4e10b4367 src/http/ngx_http_upstream.c --- a/src/http/ngx_http_upstream.c Thu Dec 17 16:39:15 2015 +0300 +++ b/src/http/ngx_http_upstream.c Mon Dec 21 14:59:58 2015 +0000 @@ -4048,7 +4048,9 @@ } } - ngx_http_file_cache_free(r->cache, u->pipe->temp_file); + if (u->pipe) { + ngx_http_file_cache_free(r->cache, u->pipe->temp_file); + } } #endif From mdounin at mdounin.ru Mon Dec 21 15:33:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Dec 2015 18:33:41 +0300 Subject: upstream module / need to check pipe field In-Reply-To: References: Message-ID: <20151221153341.GK74233@mdounin.ru> Hello! On Mon, Dec 21, 2015 at 03:07:10PM +0000, David CARLIER wrote: > I was wondering if it might be better to check if pipe field before > attempting to clear the temp file ? Anyway it is checked as well line > 4015 and below ... in this case u->pipe checks is sufficient > u->pipe->temp_file is checked inside ngx_http_file_cache_free > function. The check for u->pipe is needed only in places where we don't know if u->pipe exists or not. 
-- Maxim Dounin http://nginx.org/ From savetherbtz at gmail.com Mon Dec 21 17:53:58 2015 From: savetherbtz at gmail.com (Alexey Ivanov) Date: Mon, 21 Dec 2015 09:53:58 -0800 Subject: [PATCH] Variables: added $tcpinfo_retrans In-Reply-To: References: <7184744.nWrtJ39kzb@vbart-workstation> <20151221132612.GH74233@mdounin.ru> <20151221135736.GJ74233@mdounin.ru> Message-ID: <68556B49-CFBB-489A-A1EF-5CE6618E014B@gmail.com> # HG changeset patch # User Alexey Ivanov # Date 1450520577 28800 # Sat Dec 19 02:22:57 2015 -0800 # Branch tcpi_retrans # Node ID b018f837480dbad3dc45f1a2ba93fb99bc625ef5 # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 Variables: added $tcpinfo_retrans This one is useful for debugging poor network conditions. diff -r 78b4e10b4367 -r b018f837480d auto/unix --- a/auto/unix Thu Dec 17 16:39:15 2015 +0300 +++ b/auto/unix Sat Dec 19 02:22:57 2015 -0800 @@ -384,6 +384,17 @@ . auto/feature +ngx_feature="TCP_INFO_RETRANS" +ngx_feature_name="NGX_HAVE_TCP_INFO_RETRANS" +ngx_feature_run=no +ngx_feature_incs="#include " +ngx_feature_path= +ngx_feature_libs= +ngx_feature_test="struct tcp_info ti; + ti.tcpi_retrans" +. 
auto/feature + + ngx_feature="accept4()" ngx_feature_name="NGX_HAVE_ACCEPT4" ngx_feature_run=no diff -r 78b4e10b4367 -r b018f837480d src/http/ngx_http_variables.c --- a/src/http/ngx_http_variables.c Thu Dec 17 16:39:15 2015 +0300 +++ b/src/http/ngx_http_variables.c Sat Dec 19 02:22:57 2015 -0800 @@ -343,6 +343,11 @@ { ngx_string("tcpinfo_rcv_space"), NULL, ngx_http_variable_tcpinfo, 3, NGX_HTTP_VAR_NOCACHEABLE, 0 }, + +#if (NGX_HAVE_TCP_INFO_RETRANS) + { ngx_string("tcpinfo_retrans"), NULL, ngx_http_variable_tcpinfo, + 4, NGX_HTTP_VAR_NOCACHEABLE, 0 }, +#endif #endif { ngx_null_string, NULL, NULL, 0, 0, 0 } @@ -1053,6 +1058,12 @@ value = ti.tcpi_rcv_space; break; +#if (NGX_HAVE_TCP_INFO_RETRANS) + case 4: + value = ti.tcpi_retrans; + break; +#endif + /* suppress warning */ default: value = 0; > On Dec 21, 2015, at 6:03 AM, David CARLIER wrote: > > On 21 December 2015 at 13:57, Maxim Dounin wrote: >> Hello! >> >> On Mon, Dec 21, 2015 at 01:41:03PM +0000, David CARLIER wrote: >> >>> I think FreeBSD has __tcpi_retrans but not "typedef" it though ... >> >> It's just a placeholder for ABI compatibility, it's not set to >> anything. > > Yes it s many fields are not set (less than I first thought ...) > > >> And either way it's named differently, so the patch >> will break things. >> > > Sure it would be meaningless in this case to use the FreeBSD field then. > >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx-devel mailing list >> nginx-devel at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx-devel > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From hellkvist at gmail.com Tue Dec 22 08:46:39 2015 From: hellkvist at gmail.com (Stefan Hellkvist) Date: Tue, 22 Dec 2015 09:46:39 +0100 Subject: Is the limit_rate per tcp session or per HTTP request? In-Reply-To: <1676307.dnyVKPyLVx@vbart-workstation> References: <3576215.kLobsEZYiH@vbart-workstation> <6CCCC86B-D13E-4C94-AA86-7F40064B6A0E@gmail.com> <1676307.dnyVKPyLVx@vbart-workstation> Message-ID: <1C9B7342-F236-4D84-BD1F-B342E5FD6114@gmail.com> >> >> Interesting! So perhaps a quick fix for my current use case would be to avoid resetting the "sent bytes? on each request? In that case the limit will be counted per socket rather than request. Probably not a generic solution that everybody would like, as it probably breaks other use cases, but perhaps something I can quickly try out on a private branch. > > That will break limit_rate. > > The other peculiarity of the current implementation is that it limits > the average rate, and the average is calculated by this formula: > > rate = bytes_sent / (current_time - request_start_time) > > You may have better luck with the patch below (untested): > > diff -r def9c9c9ae05 -r 9e66c0bf7efd src/http/ngx_http_write_filter_module.c > --- a/src/http/ngx_http_write_filter_module.c Sat Dec 12 10:32:58 2015 +0300 > +++ b/src/http/ngx_http_write_filter_module.c Mon Dec 21 16:59:07 2015 +0300 > @@ -219,7 +219,7 @@ ngx_http_write_filter(ngx_http_request_t > } > > if (r->limit_rate) { > - if (r->limit_rate_after == 0) { > + if (c->requests == 1 && r->limit_rate_after == 0) { > r->limit_rate_after = clcf->limit_rate_after; > } > Thanks for the patch. I tried it however and it does not seem to achieve what we want. The patch, as I understand it, only seem to make the limit_rate_after config be active on the first request in the pipeline. 
In our case it is always a small playlist file (an HLS session starts by loading a playlist .m3u8-file) which is always less than the limit_rate_after limit that we wanted to act on the whole TCP session, so this has no effect on the larger video files that are requested after the first request in the pipeline - they will always be rate limited, even the first chunks that fit under the limit_rate_after border. What we need is that the sent data counter and the limit_rate_after work on the TCP session and not per request, and this patch does not seem to achieve that unfortunately. Perhaps another approach, if we do not want to touch the c->sent behaviour and keep it per request, would be to decrement the limit_rate_after with the total number of bytes sent within this TCP-session (which would mean we would have to have a separate counter for that). Thanks anyway. I'll see what I can come up with... /Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.muzafarov at gmail.com Tue Dec 22 11:12:55 2015 From: m.muzafarov at gmail.com (=?utf-8?B?0JzQsNC60YEg0Jw=?=) Date: Tue, 22 Dec 2015 14:12:55 +0300 Subject: [PATCH] HTTP Strip Content-Type by semicolon Message-ID: <725BBFDA-C739-4614-89DC-CDC2B3415245@gmail.com> # HG changeset patch # User Maxim Muzafarov # Date 1450782516 -10800 # Tue Dec 22 14:08:36 2015 +0300 # Node ID efdf809163976307021556c3a11a4b66201c1375 # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 Strip Content-Type by semicolon Test only first part of Content-Type in heavy mimes, such as "applciation/json; encoding=UTF-8" Useful for gzip_types hash, for example.
diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c --- a/src/http/ngx_http_core_module.c +++ b/src/http/ngx_http_core_module.c @@ -1679,6 +1679,10 @@ for (i = 0; i < len; i++) { c = ngx_tolower(r->headers_out.content_type.data[i]); + if (c == ';') { + len = i; + break; + } hash = ngx_hash(hash, c); lowcase[i] = c; } From m.muzafarov at gmail.com Tue Dec 22 11:24:18 2015 From: m.muzafarov at gmail.com (Maxim Muzafarov) Date: Tue, 22 Dec 2015 14:24:18 +0300 Subject: [PATCH] HTTP Strip Content-Type by semicolon In-Reply-To: <725BBFDA-C739-4614-89DC-CDC2B3415245@gmail.com> References: <725BBFDA-C739-4614-89DC-CDC2B3415245@gmail.com> Message-ID: Of course, I mean "application/json; charset=UTF-8". In working copy this content-type needed to be declared separately, which may be difficult, when content-type received from backend. > On 22 Dec 2015, at 14:12, Maxim Muzafarov wrote: > > # HG changeset patch > # User Maxim Muzafarov > # Date 1450782516 -10800 > # Tue Dec 22 14:08:36 2015 +0300 > # Node ID efdf809163976307021556c3a11a4b66201c1375 > # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 > Strip Content-Type by semicolon > > Test only first part of Content-Type in heavy mimes, such as "applciation/json; encoding=UTF-8" Useful for gzip_types hash, for example. > > diff --git a/src/http/ngx_http_core_module.c b/src/http/ngx_http_core_module.c > --- a/src/http/ngx_http_core_module.c > +++ b/src/http/ngx_http_core_module.c > @@ -1679,6 +1679,10 @@ > > for (i = 0; i < len; i++) { > c = ngx_tolower(r->headers_out.content_type.data[i]); > + if (c == ';') { > + len = i; > + break; > + } > hash = ngx_hash(hash, c); > lowcase[i] = c; > } > From mdounin at mdounin.ru Tue Dec 22 13:31:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Dec 2015 16:31:41 +0300 Subject: [PATCH] HTTP Strip Content-Type by semicolon In-Reply-To: References: <725BBFDA-C739-4614-89DC-CDC2B3415245@gmail.com> Message-ID: <20151222133141.GL74233@mdounin.ru> Hello! 
On Tue, Dec 22, 2015 at 02:24:18PM +0300, Maxim Muzafarov wrote: > Of course, I mean "application/json; charset=UTF-8". In working > copy this content-type needed to be declared separately, which > may be difficult, when content-type received from backend. Charset parameter is expected to be handled separately, see src/http/ngx_http_upstream.c, ngx_http_upstream_copy_content_type(). -- Maxim Dounin http://nginx.org/ From faskiri.devel at gmail.com Tue Dec 22 14:21:14 2015 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 22 Dec 2015 19:51:14 +0530 Subject: Pushing HTTP/2 to stable branch Message-ID: Hello! I currently use 1.8 (stable) nginx. Is there an expected timeline to have HTTP/2 available as nginx stable? Or backporting HTTP/2 to 1.8.x? Thanks and Regards +Fasih -------------- next part -------------- An HTML attachment was scrubbed... URL: From sem33 at yandex-team.ru Tue Dec 22 14:21:26 2015 From: sem33 at yandex-team.ru (Sergey Matveychuk) Date: Tue, 22 Dec 2015 17:21:26 +0300 Subject: [PATCH] ngx_pstrdup() and ngx_copy() problems Message-ID: <56795C66.8050000@yandex-team.ru> Hi. * It looks like strings are supposed to finish with '\0' char to be compatible with C strings. So ngx_pstrdup() must allocate and copy len+1, not just len. * ngx_copy() returns different values for different preprocessor conditions. PS. I have no idea how trac.nginx.org works. I tried to file a ticket, but it just got lost.
-------------- next part -------------- diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c index 503502a..77d7c3c 100644 --- a/src/core/ngx_string.c +++ b/src/core/ngx_string.c @@ -58,12 +58,12 @@ ngx_pstrdup(ngx_pool_t *pool, ngx_str_t *src) { u_char *dst; - dst = ngx_pnalloc(pool, src->len); + dst = ngx_pnalloc(pool, src->len + 1); if (dst == NULL) { return NULL; } - ngx_memcpy(dst, src->data, src->len); + ngx_memcpy(dst, src->data, src->len + 1); return dst; } diff --git a/src/core/ngx_string.h b/src/core/ngx_string.h index 712e7d0..e0380a9 100644 --- a/src/core/ngx_string.h +++ b/src/core/ngx_string.h @@ -122,7 +122,7 @@ ngx_copy(u_char *dst, u_char *src, size_t len) len--; } - return dst; + return dst + len; } else { return ngx_cpymem(dst, src, len); From hellkvist at gmail.com Tue Dec 22 14:25:28 2015 From: hellkvist at gmail.com (Stefan Hellkvist) Date: Tue, 22 Dec 2015 15:25:28 +0100 Subject: Is the limit_rate per tcp session or per HTTP request? In-Reply-To: <1C9B7342-F236-4D84-BD1F-B342E5FD6114@gmail.com> References: <3576215.kLobsEZYiH@vbart-workstation> <6CCCC86B-D13E-4C94-AA86-7F40064B6A0E@gmail.com> <1676307.dnyVKPyLVx@vbart-workstation> <1C9B7342-F236-4D84-BD1F-B342E5FD6114@gmail.com> Message-ID: Hi again, Just to follow up on this one: I decided to try the route of avoiding to reset "c->sent" for each request and it does seem to work. To do this I also needed to store the start time of the ngx_connection to make the rate calculations correct for my use case. The patch, which is attached, breaks the previous behavior of limit_rate as it puts limit_rate and limit_rate_after on the tcp session instead of per request. As mentioned, the patch seems to solve my particular use case (I can put limit_rate_after on the tcp session and therefore do rate limitation on pipelined requests) but I do not claim that it is a general solution that does not break other things. I include it here anyway in case someone else is interested. 
I do not suggest that anyone adds it to nginx for real. /Stefan On Tue, Dec 22, 2015 at 9:46 AM, Stefan Hellkvist wrote: > > Interesting! So perhaps a quick fix for my current use case would be to > avoid resetting the "sent bytes? on each request? In that case the limit > will be counted per socket rather than request. Probably not a generic > solution that everybody would like, as it probably breaks other use cases, > but perhaps something I can quickly try out on a private branch. > > > That will break limit_rate. > > The other peculiarity of the current implementation is that it limits > the average rate, and the average is calculated by this formula: > > rate = bytes_sent / (current_time - request_start_time) > > You may have better luck with the patch below (untested): > > diff -r def9c9c9ae05 -r 9e66c0bf7efd > src/http/ngx_http_write_filter_module.c > --- a/src/http/ngx_http_write_filter_module.c Sat Dec 12 10:32:58 2015 > +0300 > +++ b/src/http/ngx_http_write_filter_module.c Mon Dec 21 16:59:07 2015 > +0300 > @@ -219,7 +219,7 @@ ngx_http_write_filter(ngx_http_request_t > } > > if (r->limit_rate) { > - if (r->limit_rate_after == 0) { > + if (c->requests == 1 && r->limit_rate_after == 0) { > r->limit_rate_after = clcf->limit_rate_after; > } > > > Thanks for the patch. I tried it however and it does not seem to achieve > what we want. The patch, as I understand it, only seem to make the > limit_rate_after config be active on the first request in the pipeline. In > our case is always a small playlist file (an HLS session starts by loading > a playlist .m3u8-file) which is always less than the limit_rate_after limit > that we wanted to act on the whole TCP session, so this has no affect on > the larger video files that are requested after the first request in the > pipeline - they will always be rate limited even the first chunks that fit > under the limit_rate_after border. 
> > What we need is that the sent data counter and the limit_rate_after work > on the TCP session and not per request and this patch does not seem to > achieve that unfortunately. > > Perhaps another approach, if we do not want to touch the c->sent behaviour > and keep it per request, would be to decrement the limit_rate_after with > the total number of bytes sent within this TCP-session (which would mean we > would have to have a separate counter for that). > > Thanks anyway. I?ll see what I can come up with? > > > > /Stefan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- diff -r 78b4e10b4367 src/core/ngx_connection.c --- a/src/core/ngx_connection.c Thu Dec 17 16:39:15 2015 +0300 +++ b/src/core/ngx_connection.c Tue Dec 22 15:11:42 2015 +0100 @@ -946,6 +946,7 @@ ngx_connection_t * ngx_get_connection(ngx_socket_t s, ngx_log_t *log) { + ngx_time_t *tp; ngx_uint_t instance; ngx_event_t *rev, *wev; ngx_connection_t *c; @@ -992,6 +993,9 @@ c->fd = s; c->log = log; + tp = ngx_timeofday(); + c->start_sec = tp->sec; + instance = rev->instance; ngx_memzero(rev, sizeof(ngx_event_t)); diff -r 78b4e10b4367 src/core/ngx_connection.h --- a/src/core/ngx_connection.h Thu Dec 17 16:39:15 2015 +0300 +++ b/src/core/ngx_connection.h Tue Dec 22 15:11:42 2015 +0100 @@ -193,6 +193,8 @@ #if (NGX_THREADS) ngx_thread_task_t *sendfile_task; #endif + + time_t start_sec; }; diff -r 78b4e10b4367 src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c Thu Dec 17 16:39:15 2015 +0300 +++ b/src/http/ngx_http_request.c Tue Dec 22 15:11:42 2015 +0100 @@ -2934,7 +2934,6 @@ c->data = r; - c->sent = 0; c->destroyed = 0; if (rev->timer_set) { @@ -3187,7 +3186,6 @@ return; } - c->sent = 0; c->destroyed = 0; ngx_del_timer(rev); diff -r 78b4e10b4367 src/http/ngx_http_write_filter_module.c --- a/src/http/ngx_http_write_filter_module.c Thu Dec 17 16:39:15 2015 +0300 +++ b/src/http/ngx_http_write_filter_module.c Tue Dec 22 
15:11:42 2015 +0100 @@ -223,7 +223,7 @@ r->limit_rate_after = clcf->limit_rate_after; } - limit = (off_t) r->limit_rate * (ngx_time() - r->start_sec + 1) + limit = (off_t) r->limit_rate * (ngx_time() - c->start_sec + 1) - (c->sent - r->limit_rate_after); if (limit <= 0) { From mdounin at mdounin.ru Tue Dec 22 14:47:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Dec 2015 17:47:41 +0300 Subject: [PATCH] ngx_pstrdup() and ngx_copy() problems In-Reply-To: <56795C66.8050000@yandex-team.ru> References: <56795C66.8050000@yandex-team.ru> Message-ID: <20151222144741.GO74233@mdounin.ru> Hello! On Tue, Dec 22, 2015 at 05:21:26PM +0300, Sergey Matveychuk wrote: > Hi. > > * It looks like strings are supposed to finish with '\0' char to be > compatible with C strings. So ngx_pstrdup() must allocate and copy len+1, > not just len. > > * ngx_copy() returns different values for different preprocessor conditions. > > PS. I have no idea how trac.nginx.org works. I tried to fill a ticket, but > it just lost. Trac works fine, and I've just replied to you in the ticket. Here is a copy of the response: No, your assumptions are wrong. Strings in nginx are not expected to be null-terminated in general. The ngx_pstrdup() function is expected to duplicate the string passed, exactly, and this is what it currently does. The change to ngx_copy() is also wrong, you've missed dst++ in the loop. Please also see http://nginx.org/en/docs/contributing_changes.html. -- Maxim Dounin http://nginx.org/ From maxim at nginx.com Tue Dec 22 14:49:38 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 22 Dec 2015 17:49:38 +0300 Subject: Pushing HTTP/2 to stable branch In-Reply-To: References: Message-ID: <56796302.2040604@nginx.com> Hi, On 12/22/15 5:21 PM, Fasih wrote: > Hello! > > I currently use 1.8 (stable) nginx. Is there an expected timeline to > have HTTP/2 available as nginx stable? Or backporting HTTP/2 to 1.8.x? > No plans to backport HTTP/2 to 1.8. 
We usually make new stable/dev branches in April/May timeframe. Also, it's safe to use 1.9 in production -- see the recent thread here about nginx http/2 stability. -- Maxim Konovalov From maxim at nginx.com Tue Dec 22 14:51:13 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 22 Dec 2015 17:51:13 +0300 Subject: [PATCH] ngx_pstrdup() and ngx_copy() problems In-Reply-To: <20151222144741.GO74233@mdounin.ru> References: <56795C66.8050000@yandex-team.ru> <20151222144741.GO74233@mdounin.ru> Message-ID: <56796361.8050209@nginx.com> On 12/22/15 5:47 PM, Maxim Dounin wrote: > Hello! > > On Tue, Dec 22, 2015 at 05:21:26PM +0300, Sergey Matveychuk wrote: > >> Hi. >> >> * It looks like strings are supposed to finish with '\0' char to be >> compatible with C strings. So ngx_pstrdup() must allocate and copy len+1, >> not just len. >> >> * ngx_copy() returns different values for different preprocessor conditions. >> >> PS. I have no idea how trac.nginx.org works. I tried to fill a ticket, but >> it just lost. > > Trac works fine, and I've just replied to you in the ticket. > Here is a copy of the response: > > No, your assumptions are wrong. Strings in nginx are not expected > to be null-terminated in general. The ngx_pstrdup() function is From Emiller's guide: Note: an ngx_str_t is a struct with a data element, which is a string, and a len element, which is the length of that string. Nginx uses this data structure most places you'd expect a string. http://www.evanmiller.org/nginx-modules-guide.html [...] -- Maxim Konovalov From sem33 at yandex-team.ru Tue Dec 22 14:57:25 2015 From: sem33 at yandex-team.ru (Sergey Matveychuk) Date: Tue, 22 Dec 2015 17:57:25 +0300 Subject: [PATCH] ngx_pstrdup() and ngx_copy() problems In-Reply-To: <20151222144741.GO74233@mdounin.ru> References: <56795C66.8050000@yandex-team.ru> <20151222144741.GO74233@mdounin.ru> Message-ID: <567964D5.20306@yandex-team.ru> Yes, I wrong with ngx_copy. But I see ngx_string_t data is passed to C functions. 
Just a first example I've found in ngx_open_file_wrapper(): fd = ngx_open_file(name->data, mode, create, access); Where ngx_open_file() is a macro for open(2). 22.12.2015 17:47, Maxim Dounin wrote: > Hello! > > On Tue, Dec 22, 2015 at 05:21:26PM +0300, Sergey Matveychuk wrote: > >> Hi. >> >> * It looks like strings are supposed to finish with '\0' char to be >> compatible with C strings. So ngx_pstrdup() must allocate and copy len+1, >> not just len. >> >> * ngx_copy() returns different values for different preprocessor conditions. >> >> PS. I have no idea how trac.nginx.org works. I tried to fill a ticket, but >> it just lost. > > Trac works fine, and I've just replied to you in the ticket. > Here is a copy of the response: > > No, your assumptions are wrong. Strings in nginx are not expected > to be null-terminated in general. The ngx_pstrdup() function is > expected to duplicate the string passed, exactly, and this is what > it currently does. The change to ngx_copy() is also wrong, you've > missed dst++ in the loop. > > Please also see http://nginx.org/en/docs/contributing_changes.html. > From vbart at nginx.com Tue Dec 22 15:08:56 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 22 Dec 2015 18:08:56 +0300 Subject: [PATCH] ngx_pstrdup() and ngx_copy() problems In-Reply-To: <567964D5.20306@yandex-team.ru> References: <56795C66.8050000@yandex-team.ru> <20151222144741.GO74233@mdounin.ru> <567964D5.20306@yandex-team.ru> Message-ID: <16304093.AMEUZ5Wl7j@vbart-workstation> On Tuesday 22 December 2015 17:57:25 Sergey Matveychuk wrote: > Yes, I wrong with ngx_copy. > > But I see ngx_string_t data is passed to C functions. Just a first > example I've found in ngx_open_file_wrapper(): > fd = ngx_open_file(name->data, mode, create, access); > > Where ngx_open_file() is a macro for open(2). > [..] In places where a null-terminated string is expected, special care is taken to provide such a string. But in general they are not null-terminated. 
wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Dec 22 15:21:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Dec 2015 18:21:03 +0300 Subject: [PATCH] ngx_pstrdup() and ngx_copy() problems In-Reply-To: <567964D5.20306@yandex-team.ru> References: <56795C66.8050000@yandex-team.ru> <20151222144741.GO74233@mdounin.ru> <567964D5.20306@yandex-team.ru> Message-ID: <20151222152103.GP74233@mdounin.ru> Hello! On Tue, Dec 22, 2015 at 05:57:25PM +0300, Sergey Matveychuk wrote: > Yes, I wrong with ngx_copy. > > But I see ngx_string_t data is passed to ? functions. Just a first example > I've found in ngx_open_file_wrapper(): > fd = ngx_open_file(name->data, mode, create, access); > > Where ngx_open_file() is a macro for open(2). When a string is expected to be used to call a library function which accepts null-terminated strings, it's up to the caller to ensure the string passed is properly null-terminated. In some cases it happens automatically (e.g., strings from configuration parser are null-terminated), while in other cases you'll have to do it explicitly (e.g., in ngx_init_zone_pool() special format specifier %Z is used to generate a null-terminated name of a lock file). -- Maxim Dounin http://nginx.org/ From faskiri.devel at gmail.com Tue Dec 22 18:21:19 2015 From: faskiri.devel at gmail.com (Fasih) Date: Tue, 22 Dec 2015 23:51:19 +0530 Subject: Pushing HTTP/2 to stable branch In-Reply-To: <56796302.2040604@nginx.com> References: <56796302.2040604@nginx.com> Message-ID: Thanks. Yeah, I did see the other thread was wondering if I should move to 1.9 or is stable coming soon. On Tue, Dec 22, 2015 at 8:19 PM, Maxim Konovalov wrote: > Hi, > > On 12/22/15 5:21 PM, Fasih wrote: > > Hello! > > > > I currently use 1.8 (stable) nginx. Is there an expected timeline to > > have HTTP/2 available as nginx stable? Or backporting HTTP/2 to 1.8.x? > > > No plans to backport HTTP/2 to 1.8. > > We usually make new stable/dev branches in April/May timeframe. 
> > Also, it's safe to use 1.9 in production -- see the resent thread > here about nginx http/2 stability. > > -- > Maxim Konovalov > > _______________________________________________ > nginx-devel mailing list > nginx-devel at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tolga.ceylan at gmail.com Wed Dec 23 00:47:33 2015 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Tue, 22 Dec 2015 16:47:33 -0800 Subject: proxy_next_upstream default Message-ID: Hi All, According to documentation, default for proxy_next_upstream flag is error + timeout + invalid_header even if these are not specified: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream However, looking at the ngx_http_proxy_module.c, I only see timeout/error as merged/set: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L3065 This means documentation is incorrect, so if "invalid_header" is not specified, nginx will not consider such cases as "unsuccessful" attempts, right? Regards, Tolga Ceylan From vbart at nginx.com Wed Dec 23 12:02:42 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 23 Dec 2015 15:02:42 +0300 Subject: proxy_next_upstream default In-Reply-To: References: Message-ID: <4653357.hvm6uoPk0s@vbart-workstation> On Tuesday 22 December 2015 16:47:33 Tolga Ceylan wrote: > Hi All, > > According to documentation, default for proxy_next_upstream flag is > error + timeout + invalid_header > even if these are not specified: > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream > [..] 
By your link: Default: proxy_next_upstream error timeout; > However, looking at the ngx_http_proxy_module.c, I only see timeout/error > as merged/set: > > https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_proxy_module.c#L3065 > > This means documentation is incorrect, so if "invalid_header" is not > specified, nginx > will not consider such cases as "unsuccessful" attempts, right? > [..] The documentation is correct, it says: "error, timeout and invalid_header are always considered unsuccessful attempts, even if they are not specified in the directive". This is not about the default value, this is about nginx behavior. See the relevant part of the code: http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2141 http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2152 http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l3811 (note that the value of the directive isn't checked in this code path) wbr, Valentin V. Bartenev From tolga.ceylan at gmail.com Thu Dec 24 01:24:27 2015 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Wed, 23 Dec 2015 17:24:27 -0800 Subject: proxy_next_upstream default In-Reply-To: <4653357.hvm6uoPk0s@vbart-workstation> References: <4653357.hvm6uoPk0s@vbart-workstation> Message-ID: On Wed, Dec 23, 2015 at 4:02 AM, Valentin V. Bartenev wrote: > > The documentation is correct, it says: "error, timeout and invalid_header > are always considered unsuccessful attempts, even if they are not specified > in the directive". > > This is not about the default value, this is about nginx behavior. > See the relevant part of the code: > > http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2141 > http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l2152 > http://hg.nginx.org/nginx/file/tip/src/http/ngx_http_upstream.c#l3811 > > (note that the value of the directive isn't checked in this code path) > > Thanks Valentin, I see it now. 
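As a concrete (hypothetical) configuration illustrating the distinction drawn above between the directive's default value and the always-counted conditions:

```nginx
upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
}

server {
    location / {
        proxy_pass http://backend;
        # Equivalent to the documented default: retry on the next
        # server only for connection errors and timeouts...
        proxy_next_upstream error timeout;
        # ...but error, timeout and invalid_header still count as
        # unsuccessful *attempts* (max_fails accounting) regardless
        # of what is listed in this directive.
    }
}
```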
Regards, Tolga Ceylan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ru at nginx.com Fri Dec 25 07:57:25 2015 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 25 Dec 2015 10:57:25 +0300 Subject: [PATCH] Fix ptr resolving with cname In-Reply-To: <9d8c7332b7300908414e.1450238369@AAMStorage.lan> References: <9d8c7332b7300908414e.1450238369@AAMStorage.lan> Message-ID: <20151225075725.GG36793@lo0.su> On Wed, Dec 16, 2015 at 11:59:29AM +0800, DannyAAM wrote: > # HG changeset patch > # User DannyAAM > # Date 1449696194 -28800 > # Thu Dec 10 05:23:14 2015 +0800 > # Branch fix-ptr-cname > # Node ID 9d8c7332b7300908414e3bec78a90d9d14b30af8 > # Parent dfe68c41f34f865bc7b45cbe6b7d0f639de283fc > Fix ptr resolving with cname > > Make ptr process aware of cname & follow it. > (This depends on resolver's recursive answer.) Please try these patches instead. # HG changeset patch # User Ruslan Ermilov # Date 1450362072 -10800 # Thu Dec 17 17:21:12 2015 +0300 # Node ID 799f1ad5e2f31d50ec1200f9c210c6763e3ece37 # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 Resolver: style. Renamed argument in ngx_resolver_process_a() and ngx_resolver_process_ptr(), for consistency. 
diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -75,11 +75,11 @@ static ngx_uint_t ngx_resolver_resend_em static void ngx_resolver_read_response(ngx_event_t *rev); static void ngx_resolver_process_response(ngx_resolver_t *r, u_char *buf, size_t n); -static void ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, size_t n, - ngx_uint_t ident, ngx_uint_t code, ngx_uint_t qtype, +static void ngx_resolver_process_a(ngx_resolver_t *r, u_char *buf, + size_t last, ngx_uint_t ident, ngx_uint_t code, ngx_uint_t qtype, ngx_uint_t nan, ngx_uint_t ans); -static void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, - ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan); +static void ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, + size_t last, ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan); static ngx_resolver_node_t *ngx_resolver_lookup_name(ngx_resolver_t *r, ngx_str_t *name, uint32_t hash); static ngx_resolver_node_t *ngx_resolver_lookup_addr(ngx_resolver_t *r, @@ -2022,7 +2022,7 @@ next: static void -ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t n, +ngx_resolver_process_ptr(ngx_resolver_t *r, u_char *buf, size_t last, ngx_uint_t ident, ngx_uint_t code, ngx_uint_t nan) { char *err; @@ -2045,7 +2045,7 @@ ngx_resolver_process_ptr(ngx_resolver_t #endif if (ngx_resolver_copy(r, NULL, buf, - buf + sizeof(ngx_resolver_hdr_t), buf + n) + buf + sizeof(ngx_resolver_hdr_t), buf + last) != NGX_OK) { return; @@ -2185,7 +2185,7 @@ valid: i += sizeof(ngx_resolver_qs_t); - if (i + 2 + sizeof(ngx_resolver_an_t) >= n) { + if (i + 2 + sizeof(ngx_resolver_an_t) >= last) { goto short_response; } @@ -2220,11 +2220,11 @@ valid: i += 2 + sizeof(ngx_resolver_an_t); - if (i + len > n) { + if (i + len > last) { goto short_response; } - if (ngx_resolver_copy(r, &name, buf, buf + i, buf + n) != NGX_OK) { + if (ngx_resolver_copy(r, &name, buf, buf + i, buf + last) != NGX_OK) { 
goto failed; } # HG changeset patch # User Ruslan Ermilov # Date 1450362076 -10800 # Thu Dec 17 17:21:16 2015 +0300 # Node ID 35f2e54f88cd582b8ab5ad617022022e0bcf8acd # Parent 799f1ad5e2f31d50ec1200f9c210c6763e3ece37 Resolver: improved PTR response processing. The previous code only parsed the first answer, without checking its type, and requiring a compressed RR name. The new code checks the RR type, supports responses with multiple answers, and doesn't require the RR name to be compressed. This has a side effect in limited support of CNAME. If a response includes both CNAME and PTR RRs, like when recursion is enabled on the server, PTR RR is handled. Full CNAME support in PTR response is not implemented in this change. diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c --- a/src/core/ngx_resolver.c +++ b/src/core/ngx_resolver.c @@ -2032,7 +2032,7 @@ ngx_resolver_process_ptr(ngx_resolver_t int32_t ttl; ngx_int_t octet; ngx_str_t name; - ngx_uint_t i, mask, qident, class; + ngx_uint_t mask, type, class, qident, a, i, start; ngx_queue_t *expire_queue; ngx_rbtree_t *tree; ngx_resolver_an_t *an; @@ -2185,44 +2185,96 @@ valid: i += sizeof(ngx_resolver_qs_t); - if (i + 2 + sizeof(ngx_resolver_an_t) >= last) { + for (a = 0; a < nan; a++) { + + start = i; + + while (i < last) { + + if (buf[i] & 0xc0) { + i += 2; + goto found; + } + + if (buf[i] == 0) { + i++; + goto test_length; + } + + i += 1 + buf[i]; + } + goto short_response; + + test_length: + + if (i - start < 2) { + err = "invalid name in DNS response"; + goto invalid; + } + + found: + + if (i + sizeof(ngx_resolver_an_t) >= last) { + goto short_response; + } + + an = (ngx_resolver_an_t *) &buf[i]; + + type = (an->type_hi << 8) + an->type_lo; + class = (an->class_hi << 8) + an->class_lo; + len = (an->len_hi << 8) + an->len_lo; + ttl = (an->ttl[0] << 24) + (an->ttl[1] << 16) + + (an->ttl[2] << 8) + (an->ttl[3]); + + if (class != 1) { + ngx_log_error(r->log_level, r->log, 0, + "unexpected RR class %ui", 
class); + goto failed; + } + + if (ttl < 0) { + ttl = 0; + } + + ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, + "resolver qt:%ui cl:%ui len:%uz", + type, class, len); + + i += sizeof(ngx_resolver_an_t); + + switch (type) { + + case NGX_RESOLVE_PTR: + + if (i + len > last) { + goto short_response; + } + + goto ptr; + + break; + + case NGX_RESOLVE_CNAME: + + break; + + default: + + ngx_log_error(r->log_level, r->log, 0, + "unexpected RR type %ui", type); + } + + i += len; } - /* compression pointer to *.arpa */ - - if (buf[i] != 0xc0 || buf[i + 1] != sizeof(ngx_resolver_hdr_t)) { - err = "invalid in-addr.arpa or ip6.arpa name in DNS response"; - goto invalid; - } - - an = (ngx_resolver_an_t *) &buf[i + 2]; - - class = (an->class_hi << 8) + an->class_lo; - len = (an->len_hi << 8) + an->len_lo; - ttl = (an->ttl[0] << 24) + (an->ttl[1] << 16) - + (an->ttl[2] << 8) + (an->ttl[3]); - - if (class != 1) { - ngx_log_error(r->log_level, r->log, 0, - "unexpected RR class %ui", class); - goto failed; - } - - if (ttl < 0) { - ttl = 0; - } - - ngx_log_debug3(NGX_LOG_DEBUG_CORE, r->log, 0, - "resolver qt:%ui cl:%ui len:%uz", - (an->type_hi << 8) + an->type_lo, - class, len); - - i += 2 + sizeof(ngx_resolver_an_t); - - if (i + len > last) { - goto short_response; - } + /* unlock addr mutex */ + + ngx_log_error(r->log_level, r->log, 0, + "no PTR type in DNS response"); + return; + +ptr: if (ngx_resolver_copy(r, &name, buf, buf + i, buf + last) != NGX_OK) { goto failed; From gmm at csdoc.com Wed Dec 30 13:45:48 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 30 Dec 2015 15:45:48 +0200 Subject: Workaround of race condition between systemd and nginx. Message-ID: <5683E00C.7080200@csdoc.com> # HG changeset patch # User Gena Makhomed # Date 1451482795 18000 # Wed Dec 30 08:39:55 2015 -0500 # Node ID a340d271b3ffa51c0396a5afc5270cb02b701204 # Parent 1073d7e4e430ddb53b603d151e1a403d10aa420b Workaround of race condition between systemd and nginx. 
Just replace network.target with network-online.target in systemd unit files. More details: http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ diff -r 1073d7e4e430 -r a340d271b3ff rpm/SOURCES/nginx-debug.service --- a/rpm/SOURCES/nginx-debug.service Wed Dec 09 18:31:08 2015 +0300 +++ b/rpm/SOURCES/nginx-debug.service Wed Dec 30 08:39:55 2015 -0500 @@ -1,7 +1,7 @@ [Unit] Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ -After=network.target remote-fs.target nss-lookup.target +After=network-online.target remote-fs.target nss-lookup.target [Service] Type=forking diff -r 1073d7e4e430 -r a340d271b3ff rpm/SOURCES/nginx.service --- a/rpm/SOURCES/nginx.service Wed Dec 09 18:31:08 2015 +0300 +++ b/rpm/SOURCES/nginx.service Wed Dec 30 08:39:55 2015 -0500 @@ -1,7 +1,7 @@ [Unit] Description=nginx - high performance web server Documentation=http://nginx.org/en/docs/ -After=network.target remote-fs.target nss-lookup.target +After=network-online.target remote-fs.target nss-lookup.target [Service] Type=forking From jimpop at gmail.com Wed Dec 30 14:51:58 2015 From: jimpop at gmail.com (Jim Popovitch) Date: Wed, 30 Dec 2015 09:51:58 -0500 Subject: Workaround of race condition between systemd and nginx. In-Reply-To: <5683E00C.7080200@csdoc.com> References: <5683E00C.7080200@csdoc.com> Message-ID: On Dec 30, 2015 8:46 AM, "Gena Makhomed" wrote: > > # HG changeset patch > # User Gena Makhomed > # Date 1451482795 18000 > # Wed Dec 30 08:39:55 2015 -0500 > # Node ID a340d271b3ffa51c0396a5afc5270cb02b701204 > # Parent 1073d7e4e430ddb53b603d151e1a403d10aa420b > Workaround of race condition between systemd and nginx. > > Just replace network.target with network-online.target in systemd unit files. 
> More details: http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ From that page, wrt network-online.target: "It is strongly recommended not to pull in this target too liberally: for example network server software should generally not pull this in (since server software generally is happy to accept local connections even before any routable network interface is up), it's primary purpose is network client software that cannot operate without network" -Jim P. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmm at csdoc.com Wed Dec 30 15:50:44 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 30 Dec 2015 17:50:44 +0200 Subject: Workaround of race condition between systemd and nginx. In-Reply-To: References: <5683E00C.7080200@csdoc.com> Message-ID: <5683FD54.90105@csdoc.com> On 30.12.2015 16:51, Jim Popovitch wrote: >> # HG changeset patch >> # User Gena Makhomed >> # Date 1451482795 18000 >> # Wed Dec 30 08:39:55 2015 -0500 >> # Node ID a340d271b3ffa51c0396a5afc5270cb02b701204 >> # Parent 1073d7e4e430ddb53b603d151e1a403d10aa420b >> Workaround of race condition between systemd and nginx. >> >> Just replace network.target with network-online.target in systemd unit > files. >> More details: > http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ > > From that page, wrt network-online.target: > > "It is strongly recommended not to pull in this target too liberally: for > example network server software should generally not pull this in (since > server software generally is happy to accept local connections even before > any routable network interface is up), it's primary purpose is network > client software that cannot operate without network" nginx is a FreeBSD daemon; it is not systemd-aware and does not follow many of systemd's guidelines. The current race condition between systemd and nginx leads to a non-working nginx daemon, at least under CentOS 7.2 templates under OpenVZ after a system reboot. 
nginx now requires a configured and up network before starting the daemon. Replacing network.target with network-online.target is an easy workaround. P.S. # cat /var/log/messages Dec 27 19:08:38 stage-ideil-com systemd: Starting nginx - high performance web server... Dec 27 19:08:39 stage-ideil-com systemd: Starting System Logging Service... Dec 27 19:08:39 stage-ideil-com systemd: Starting LSB: Bring up/down networking... Dec 27 19:08:39 stage-ideil-com systemd: Starting Postfix Mail Transport Agent... [...] Dec 27 19:09:24 stage-ideil-com systemd: nginx.service: control process exited, code=exited status=1 Dec 27 19:09:24 stage-ideil-com systemd: Failed to start nginx - high performance web server. Dec 27 19:09:24 stage-ideil-com systemd: Unit nginx.service entered failed state. Dec 27 19:09:24 stage-ideil-com systemd: nginx.service failed. Dec 27 19:09:24 stage-ideil-com systemd-sysctl: Failed to write '16' to '/proc/sys/kernel/sysrq': Permission denied Dec 27 19:09:24 stage-ideil-com systemd-sysctl: Failed to write '1' to '/proc/sys/kernel/core_uses_pid': Permission denied >Dec 27 19:09:24 stage-ideil-com systemd: Started LSB: Bring up/down networking. >Dec 27 19:09:24 stage-ideil-com systemd: Reached target Network is Online. >Dec 27 19:09:24 stage-ideil-com systemd: Starting Network is Online. Dec 27 19:09:24 stage-ideil-com systemd: Started The PHP FastCGI Process Manager. >Dec 27 19:09:24 stage-ideil-com nginx: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok >Dec 27 19:09:24 stage-ideil-com nginx: nginx: [emerg] bind() to 172.22.22.202:80 failed (99: Cannot assign requested address) >Dec 27 19:09:24 stage-ideil-com nginx: nginx: configuration file /etc/nginx/nginx.conf test failed >Dec 27 19:09:24 stage-ideil-com network: Bringing up loopback interface: [ OK ] >Dec 27 19:09:25 stage-ideil-com network: Bringing up interface venet0: arping: Device venet0 not available. 
>Dec 27 19:09:25 stage-ideil-com network: [ OK ] ===================================================================== # cat /var/log/messages Dec 24 18:55:14 hroniky-com systemd: Starting Sockets. Dec 24 18:55:14 hroniky-com systemd: Reached target Basic System. Dec 24 18:55:14 hroniky-com systemd: Starting Basic System. Dec 24 18:55:14 hroniky-com systemd: Started D-Bus System Message Bus. Dec 24 18:55:14 hroniky-com systemd: Starting D-Bus System Message Bus... Dec 24 18:55:14 hroniky-com systemd: Starting Permit User Sessions... Dec 24 18:55:14 hroniky-com systemd: Starting Postfix Mail Transport Agent... Dec 24 18:55:14 hroniky-com systemd: Started OpenSSH Server Key Generation. Dec 24 18:55:14 hroniky-com systemd: Starting /etc/rc.d/rc.local Compatibility... Dec 24 18:55:15 hroniky-com systemd: Starting nginx - high performance web server... Dec 24 18:55:18 hroniky-com systemd: Starting System Logging Service... Dec 24 18:55:18 hroniky-com systemd: Starting LSB: Bring up/down networking... Dec 24 18:55:18 hroniky-com nginx: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok Dec 24 18:55:18 hroniky-com nginx: nginx: [emerg] bind() to 172.23.23.161:80 failed (99: Cannot assign requested address) Dec 24 18:55:18 hroniky-com nginx: nginx: configuration file /etc/nginx/nginx.conf test failed Dec 24 18:55:18 hroniky-com systemd: Starting The PHP FastCGI Process Manager... Dec 24 18:55:18 hroniky-com systemd: Started OpenSSH server daemon. Dec 24 18:55:18 hroniky-com systemd: Starting OpenSSH server daemon... Dec 24 18:55:18 hroniky-com systemd: Starting Login Service... Dec 24 18:55:18 hroniky-com systemd: Starting Dump dmesg to /var/log/dmesg... Dec 24 18:55:18 hroniky-com systemd: Started Permit User Sessions. Dec 24 18:55:18 hroniky-com systemd: Started /etc/rc.d/rc.local Compatibility. 
Dec 24 18:55:18 hroniky-com systemd: nginx.service: control process exited, code=exited status=1 Dec 24 18:55:18 hroniky-com systemd: Failed to start nginx - high performance web server. Dec 24 18:55:18 hroniky-com systemd: Unit nginx.service entered failed state. Dec 24 18:55:18 hroniky-com systemd: nginx.service failed. -- Best regards, Gena From jimpop at gmail.com Wed Dec 30 16:09:29 2015 From: jimpop at gmail.com (Jim Popovitch) Date: Wed, 30 Dec 2015 11:09:29 -0500 Subject: Workaround of race condition between systemd and nginx. In-Reply-To: <5683FD54.90105@csdoc.com> References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com> Message-ID: On Wed, Dec 30, 2015 at 10:50 AM, Gena Makhomed wrote: > nginx now requires configured and up network, before starting daemon. Specifically it's your configuration. You are hardcoding an IP address to bind to, thereby telling nginx to not start until that IP is active. > Replace network.target with network-online.target is easy workaround. That will prevent nginx from starting in situations where systemd determines that the external network is not yet active (dhcp, etc., etc), yet nginx may still run perfectly fine with split interfaces, localhost, etc. -Jim P. From gmm at csdoc.com Wed Dec 30 16:50:49 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 30 Dec 2015 18:50:49 +0200 Subject: Workaround of race condition between systemd and nginx. In-Reply-To: References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com> Message-ID: <56840B69.50200@csdoc.com> On 30.12.2015 18:09, Jim Popovitch wrote: >> nginx now requires configured and up network, before starting daemon. > Specifically it's your configuration. > You are hardcoding an IP address to bind to > thereby telling nginx to not start until that IP is active. Do you know how nginx and systemd work right now? You understand race condition between nginx and systemd? 
> That will prevent nginx from starting in situations where systemd > determines that the external network is not yet active (dhcp, etc., > etc), yet nginx may still run perfectly fine with split interfaces, > localhost, etc. You say, what nginx should work fine if no network available, I say what nginx *must* work fine if network *IS* available. [..........................................................] So, I need create my own fork, for example, nginx-fixed, which I can use with OpenVZ and CentOS 7.2 templates? You can provide better solution for this systemd / nginx race condition? Your solution is "forbid nginx users write IP addresses in nginx config"? This is official decision of main open core nginx/nginx-plus developers? -- Best regards, Gena From jimpop at gmail.com Wed Dec 30 17:08:40 2015 From: jimpop at gmail.com (Jim Popovitch) Date: Wed, 30 Dec 2015 12:08:40 -0500 Subject: Workaround of race condition between systemd and nginx. In-Reply-To: <56840B69.50200@csdoc.com> References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com> <56840B69.50200@csdoc.com> Message-ID: On Wed, Dec 30, 2015 at 11:50 AM, Gena Makhomed wrote: > On 30.12.2015 18:09, Jim Popovitch wrote: > >>> nginx now requires configured and up network, before starting daemon. > > >> Specifically it's your configuration. >> You are hardcoding an IP address to bind to >> thereby telling nginx to not start until that IP is active. > > > Do you know how nginx and systemd work right now? > You understand race condition between nginx and systemd? I understand nginx, systemd, and race conditions. I understand why *you* have a race condition, and I understand why I do not have a race condition. >> That will prevent nginx from starting in situations where systemd >> determines that the external network is not yet active (dhcp, etc., >> etc), yet nginx may still run perfectly fine with split interfaces, >> localhost, etc. 
> > > You say, what nginx should work fine if no network available, Yes, or even if only localhost (lo) exists. BTW, you can read about how openvpn handled this very issue https://community.openvpn.net/openvpn/ticket/462 > I say what nginx *must* work fine if network *IS* available. It does, it currently works if the network IS or ISNT available, and all possibilities in-between. > > So, I need create my own fork, for example, nginx-fixed, > which I can use with OpenVZ and CentOS 7.2 templates? No, you can simply modify your local /etc/systemd/system/nginx.service file to specify a local startup policy. -Jim P. From gmm at csdoc.com Wed Dec 30 17:44:30 2015 From: gmm at csdoc.com (Gena Makhomed) Date: Wed, 30 Dec 2015 19:44:30 +0200 Subject: Workaround of race condition between systemd and nginx. In-Reply-To: References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com> <56840B69.50200@csdoc.com> Message-ID: <568417FE.2010608@csdoc.com> On 30.12.2015 19:08, Jim Popovitch wrote: >> Do you know how nginx and systemd work right now? >> You understand race condition between nginx and systemd? > I understand nginx, systemd, and race conditions. I understand why > *you* have a race condition, and I understand why I do not have a race > condition. And you want to tell this "mantra" to all OpenVZ / CentOS 7.2 users? >> You say, what nginx should work fine if no network available, > Yes, or even if only localhost (lo) exists. lo exists. nginx startup failed. logs - see in previous messages. > BTW, you can read about how openvpn handled this very issue > https://community.openvpn.net/openvpn/ticket/462 You can provide patch with solution? If you can't - can you please stop flame war against my patch? >> I say what nginx *must* work fine if network *IS* available. > It does it currently works if the network IS or ISNT available, and > all possibilities in-between. No. nginx config is valid. logs - see in previous messages. 
>> So, do I need to create my own fork, for example "nginx-fixed",
>> which I can use with OpenVZ and CentOS 7.2 templates?
>
> No, you can simply modify your local /etc/systemd/system/nginx.service
> file to specify a local startup policy.

Inside all containers on all hardware nodes? Manually? And this is how
the bug should be fixed by all other OpenVZ users too?

P.S. This is the nginx-devel mailing list, for developers, not for users.

$ curl -s http://nginx.org/en/CHANGES | grep "Jim Popovitch"
$ curl -s http://nginx.org/en/CHANGES | grep "Gena Makhomed"

--
Best regards,
 Gena

From jimpop at gmail.com  Wed Dec 30 17:59:09 2015
From: jimpop at gmail.com (Jim Popovitch)
Date: Wed, 30 Dec 2015 12:59:09 -0500
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <568417FE.2010608@csdoc.com>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56840B69.50200@csdoc.com> <568417FE.2010608@csdoc.com>
Message-ID: 

On Dec 30, 2015 12:44 PM, "Gena Makhomed" wrote:
>
> On 30.12.2015 19:08, Jim Popovitch wrote:
>
>>> Do you know how nginx and systemd work right now?
>>> Do you understand the race condition between nginx and systemd?
>
>> I understand nginx, systemd, and race conditions. I understand why
>> *you* have a race condition, and I understand why I do not have a race
>> condition.
>
> And you want to tell this "mantra" to all OpenVZ / CentOS 7.2 users?
>
>>> You say that nginx should work fine if no network is available,
>
>> Yes, or even if only localhost (lo) exists.
>
> lo exists.
>
> nginx startup failed.
>
> Logs - see previous messages.
>
>> BTW, you can read about how openvpn handled this very issue
>> https://community.openvpn.net/openvpn/ticket/462
>
> Can you provide a patch with a solution?

I see no need for a patch, and I already explained why, including a
link to how another service daemon handled the very same issue. I've
also explained how your patch breaks things.

Good day.

-Jim P.
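For readers wondering what the per-host override Jim mentions would concretely look like, a systemd drop-in along these lines is one possibility. This is a sketch only: the drop-in path and file name follow systemd conventions but are illustrative, and whether ordering after network-online.target is appropriate is exactly what this thread is debating.

```ini
# /etc/systemd/system/nginx.service.d/wait-for-network.conf
# Local startup policy for hosts whose nginx config binds to specific
# IP addresses: delay nginx until the network is fully configured.
# This overrides the packaged unit without modifying it.
[Unit]
Wants=network-online.target
After=network-online.target
```

After creating the drop-in, `systemctl daemon-reload` makes systemd pick it up; the packaged /usr/lib/systemd/system/nginx.service stays untouched.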
From dk at syse.no  Wed Dec 30 18:28:38 2015
From: dk at syse.no (Daniel K.)
Date: Wed, 30 Dec 2015 18:28:38 +0000
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <5683FD54.90105@csdoc.com>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
Message-ID: <56842256.9000501@syse.no>

On 12/30/2015 03:50 PM, Gena Makhomed wrote:
> On 30.12.2015 16:51, Jim Popovitch wrote:
>> On Dec 30, 2015 8:46 AM, "Gena Makhomed" wrote:
>>> Workaround of race condition between systemd and nginx.
>>>
>>> Just replace network.target with network-online.target in systemd unit
>>> files.
>>> More details:
>>> http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
>>
>> From that page, wrt network-online.target:
>>
>> "It is strongly recommended not to pull in this target too liberally: for
>> example network server software should generally not pull this in (since
>> server software generally is happy to accept local connections even before
>> any routable network interface is up), it's primary purpose is network
>> client software that cannot operate without network"
>
> nginx now requires a configured and up network before starting the daemon.
> Replacing network.target with network-online.target is an easy workaround.

Actually, it does not require that at all.

It would be more helpful if you posted your config files, but from your
log file I gather they look something like:

server {
    listen 172.22.22.202:80;
    [...]
}

And, due to using systemd, the nginx service gets started before the
network interface has been configured with the IP address shown.

Two ways to work around this issue come to mind.

1) Allow non-local binds

   # echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind

   Put 'net.ipv4.ip_nonlocal_bind = 1' in /etc/sysctl.conf to make it
   stick.

2) Configure nginx to listen on *:80

   Add this to your config files somewhere:
server {
    listen 80;
}

and nginx will listen on 0.0.0.0:80 instead of only the IP addresses
you mention.

Hope that helps,

Daniel K.

From gmm at csdoc.com  Wed Dec 30 18:46:23 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 30 Dec 2015 20:46:23 +0200
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: 
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56840B69.50200@csdoc.com> <568417FE.2010608@csdoc.com>
Message-ID: <5684267F.2030909@csdoc.com>

On 30.12.2015 19:59, Jim Popovitch wrote:

>>> BTW, you can read about how openvpn handled this very issue
>>> https://community.openvpn.net/openvpn/ticket/462

>> Can you provide a patch with a solution?

> I see no need for a patch, and I already explained why

nginx failed to start with a correct config.
And you don't see any problems with nginx.

> including a link to how another service daemon handled the very same issue.

In theory, there is no difference between theory and practice.
But, in practice, there is.

--
Best regards,
 Gena

From jadas at akamai.com  Wed Dec 30 18:49:06 2015
From: jadas at akamai.com (Das, Jagannath)
Date: Wed, 30 Dec 2015 18:49:06 +0000
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <56842256.9000501@syse.no>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56842256.9000501@syse.no>
Message-ID: 

How to reproduce this issue?

[Daniel K.'s message <56842256.9000501@syse.no> was quoted here in
full, unchanged.]

From gmm at csdoc.com  Wed Dec 30 18:53:57 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 30 Dec 2015 20:53:57 +0200
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <56842256.9000501@syse.no>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56842256.9000501@syse.no>
Message-ID: <56842845.4090702@csdoc.com>

On 30.12.2015 20:28, Daniel K. wrote:

>> nginx now requires a configured and up network before starting the daemon.
>> Replacing network.target with network-online.target is an easy workaround.
>
> Actually, it does not require that at all.

nginx fails to start if the network is down due to the systemd race
condition.

> It would be more helpful if you posted your config files, but from your
> log file I gather they look something like:
>
> server {
>     listen 172.22.22.202:80;
>     [...]
> }

This is allowed syntax:

http://nginx.org/en/docs/http/ngx_http_core_module.html#listen

> And, due to using systemd, the nginx service gets started before the
> network interface has been configured with the IP address shown.

Yes. And nginx failed to start with *correct* config.

> Two ways to work around this issue come to mind.
>
> 1) Allow non-local binds
>
>    # echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
>
>    Put 'net.ipv4.ip_nonlocal_bind = 1' in /etc/sysctl.conf to make it
>    stick.
>
> 2) Configure nginx to listen on *:80
>
>    Add this to your config files somewhere:
>
>    server {
>        listen 80;
>    }
>
>    and nginx will listen on 0.0.0.0:80 instead of only the IP addresses
>    you mention.
>
> Hope that helps,

And I should send this text fragment to all nginx users? Or should this
text fragment be included in the manual,
http://nginx.org/en/docs/http/ngx_http_core_module.html#listen ?
Or (a better way) should the workaround just be included in the nginx
unit file?

--
Best regards,
 Gena

From gmm at csdoc.com  Wed Dec 30 19:03:40 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Wed, 30 Dec 2015 21:03:40 +0200
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: 
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56842256.9000501@syse.no>
Message-ID: <56842A8C.7030203@csdoc.com>

On 30.12.2015 20:49, Das, Jagannath wrote:

> How to reproduce this issue?

CentOS 7.2, nginx 1.9.9 from the official nginx mainline repo, an nginx
config with "listen IP:port;" directives, "systemctl enable nginx",
"systemctl start nginx", "reboot".

After reboot - race condition between nginx and systemd.

In my case: OpenVZ on the hardware node, CentOS 7.2 template inside the
container.

>> And, due to using systemd, the nginx service gets started before the
>> network interface has been configured with the IP address shown.

--
Best regards,
 Gena

From dk at syse.no  Wed Dec 30 20:40:21 2015
From: dk at syse.no (Daniel K.)
Date: Wed, 30 Dec 2015 20:40:21 +0000
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <56842845.4090702@csdoc.com>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56842256.9000501@syse.no> <56842845.4090702@csdoc.com>
Message-ID: <56844135.8020606@syse.no>

On 12/30/2015 06:53 PM, Gena Makhomed wrote:
> On 30.12.2015 20:28, Daniel K. wrote:
>>> nginx now requires a configured and up network before starting the daemon.
>>> Replacing network.target with network-online.target is an easy workaround.
>>
>> Actually, it does not require that at all.
>
> nginx fails to start if the network is down due to the systemd race
> condition.

Again, no, nginx failed to start due to a local misconfiguration.

>> It would be more helpful if you posted your config files, but from your
>> log file I gather they look something like:
>>
>> server {
>>     listen 172.22.22.202:80;
>>     [...]
>> }
>
> This is allowed syntax:
>
> http://nginx.org/en/docs/http/ngx_http_core_module.html#listen

I never said it wasn't. I just wanted to express what I had pulled out
of my hat based on reading the log you provided.
That way you can see if I'm completely off track, and tell me, and
other readers can get the context of the conversation more easily.

>> And, due to using systemd, the nginx service gets started before the
>> network interface has been configured with the IP address shown.
>
> Yes. And nginx failed to start with *correct* config.

Well, syntactically correct and logically correct are not the same
thing. Your config makes nginx try to bind to a non-assigned IP
address, which fails. A logical error in your config files.

You have two options to fix it:

>> 1) Allow non-local binds
>> 2) Configure nginx to listen on *:80

Of which option 2 is probably the better approach. Note that the nginx
config that comes with the source does exactly this.

From the distributed conf/nginx.conf:

server {
    listen 80;
    [...]
}

> And I should send this text fragment to all nginx users?

I don't know what you should do; I feel like I am still missing a part
of the puzzle. Did you create this config yourself, or did it come with
something you installed?

If you are providing config files for an application that you are the
maintainer of, then yes, you should probably distribute something that
works. If you arrange for an nginx server block to be added to the
config files, you could probably omit the listen directive altogether
(listen *:80 is the default) and let the sysadmin add it back if he so
chooses.

> Or should this text fragment be included in the manual,
> http://nginx.org/en/docs/http/ngx_http_core_module.html#listen ?

As far as I can tell, it's there, only port 8000 is used instead.

> Or (a better way) should the workaround just be included in the nginx
> unit file?

Arguably not better. The link you provided (repeated for context) tells
you this on using network-online.target:

http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/

It is strongly recommended not to pull in this target too liberally:
[...]
network server software should generally not pull this in

Jim quoted from this as well. There you have it; the systemd folks tell
us that your suggested workaround is not a good idea to use for server
software.

Daniel K.

From artem.povaluhin at gmail.com  Wed Dec 30 22:44:43 2015
From: artem.povaluhin at gmail.com (Artem S. Povaluhin)
Date: Thu, 31 Dec 2015 01:44:43 +0300
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <56844135.8020606@syse.no>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56842256.9000501@syse.no> <56842845.4090702@csdoc.com>
 <56844135.8020606@syse.no>
Message-ID: <56845E5B.8030409@gmail.com>

Hi!

On 12/30/2015 11:40 PM, Daniel K. wrote:
> I never said it wasn't. I just wanted to express what I had pulled out
> of my hat based on reading the log you provided. That way you can see if
> I'm completely off track, and tell me, and other readers can get the
> context of the conversation more easily.

The context is simple.

>>> And, due to using systemd, the nginx service gets started before the
>>> network interface has been configured with the IP address shown.
>>
>> Yes. And nginx failed to start with *correct* config.
>
> Well, syntactically correct and logically correct are not the same thing.

Why is this config correct everywhere except under systemd?

> Your config makes nginx try to bind to a non-assigned IP address, which
> fails. A logical error in your config files.
>
> You have two options to fix it.
>
>>> 1) Allow non-local binds

And we have to hack the OS,

>>> 2) Configure nginx to listen on *:80

or change the config, in order not to misconfigure systemd because of
somebody's recommendations?

wbr,
Artem

From gmm at csdoc.com  Thu Dec 31 05:05:40 2015
From: gmm at csdoc.com (Gena Makhomed)
Date: Thu, 31 Dec 2015 07:05:40 +0200
Subject: Workaround of race condition between systemd and nginx.
In-Reply-To: <56844135.8020606@syse.no>
References: <5683E00C.7080200@csdoc.com> <5683FD54.90105@csdoc.com>
 <56842256.9000501@syse.no> <56842845.4090702@csdoc.com>
 <56844135.8020606@syse.no>
Message-ID: <5684B7A4.807@csdoc.com>

On 30.12.2015 22:40, Daniel K. wrote:

>> nginx fails to start if the network is down due to the systemd race
>> condition.

> Again, no, nginx failed to start due to a local misconfiguration.

The configuration is correct. The "nginx -t" syntax check says:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

>>> listen 172.22.22.202:80;

>> this is allowed syntax:
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#listen

> I never said it wasn't.

You said "misconfiguration".

>>> And, due to using systemd, the nginx service gets started before the
>>> network interface has been configured with the IP address shown.
>>
>> Yes. And nginx failed to start with *correct* config.
>
> Well, syntactically correct and logically correct are not the same
> thing.

My config is syntactically correct *and* it is logically correct too.

> Your config makes nginx try to bind to a non-assigned IP address,
> which fails. A logical error in your config files.

No. The logical error is in the nginx unit file, or in the systemd
source code, or in the nginx source code. The result of this error is
the race condition between systemd and nginx. The simplest workaround
for the race condition is to fix the nginx unit file.

>> And I should send this text fragment to all nginx users?
>
> I don't know what you should do, I feel like I am still missing a part
> of the puzzle.

Yes. OpenVZ is used by hosting providers on multiple hardware nodes.
It is not always possible to use only "listen 80;" and "listen 443;"
directives.

> Arguably not better. The link you provided (repeated for context) tells
> you this on using network-online.target.
>
> http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/
>
> It is strongly recommended not to pull in this target too liberally:
> [...]
> network server software should generally not pull this in

It says "should *generally* not pull this in". A workaround is not the
general case.

> There you have it; the systemd folks tell us that your suggested
> workaround is not a good idea to use for server software.

The systemd folks are telling me and the other nginx developers how
*exactly* nginx should work. Do you have the time and money to rewrite
core parts of nginx?

--
Best regards,
 Gena
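The failure this thread keeps circling is easy to demonstrate outside nginx: on Linux, binding to an address not assigned to any interface fails with EADDRNOTAVAIL unless net.ipv4.ip_nonlocal_bind is enabled, which is exactly what nginx hits when systemd starts it before the interface from a "listen IP:port;" directive is configured. A minimal sketch follows; 192.0.2.1 is from the TEST-NET-1 documentation range and is assumed not to be configured on the local machine.

```python
import errno
import socket

# Try to bind to an address that is (assumed) not assigned to any
# local interface. 192.0.2.1 is a TEST-NET-1 documentation address.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(("192.0.2.1", 8080))
    print("bind succeeded (ip_nonlocal_bind is probably enabled)")
except OSError as exc:
    # With ip_nonlocal_bind=0 this is typically EADDRNOTAVAIL -- the
    # same errno nginx logs in the startup failures discussed above.
    print("bind failed:", errno.errorcode.get(exc.errno, exc.errno))
finally:
    sock.close()
```

Both workarounds discussed in the thread change this outcome: setting net.ipv4.ip_nonlocal_bind=1 makes the bind succeed anyway, and listening on the wildcard address avoids the per-IP bind entirely.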